CN112115282A - Question answering method, device, equipment and storage medium based on search - Google Patents
Info
- Publication number
- CN112115282A (application CN202010983014.5A)
- Authority
- CN
- China
- Prior art keywords
- information
- question
- answer
- multimedia
- multimedia information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/432—Query formulation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/005—Language recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Acoustics & Sound (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The disclosure relates to a search-based question answering method, apparatus, device, and storage medium, and belongs to the technical field of natural language processing. The method includes: receiving a search request that is sent by a terminal and carries question information; searching according to the question information to obtain multimedia information matching the question information and answer information located within the multimedia information; and sending the multimedia information and the answer information to the terminal. A question-answer function is thus provided in a search scenario: multimedia information matching the question information is found through the search function, and answer information matching the question information is further obtained from that multimedia information through the question-answer function, so that the user can intuitively view both the answer information and the corresponding multimedia information. This increases the amount of information provided and meets the user's search needs. Moreover, because the information contained in the multimedia information is fully considered when the answer information is acquired, the answer information is determined from the multimedia information itself, which improves the accuracy of the acquired answer information.
Description
Technical Field
The present disclosure relates to the field of natural language processing, and in particular to a search-based question answering method, apparatus, device, and storage medium.
Background
With the rapid development of Internet technology and the wide spread of multimedia information, playing multimedia information such as audio or video has become an increasingly popular form of entertainment. As the amount of multimedia information grows, more and more users locate multimedia information by searching.
Generally, when a user inputs question information in a search interface, multimedia information matching the question information is obtained, and the user can view the retrieved multimedia information in the search interface. However, because the user can only view the multimedia information matching the question information, the amount of information provided is small and the user's search needs cannot be met, resulting in a poor search effect.
Disclosure of Invention
The present disclosure provides a search-based question answering method, apparatus, device, and storage medium that additionally provide answer information alongside the multimedia information, increasing the amount of information returned; provide a question-answer function in a search scenario, enriching the available functions; and fully consider the information contained in the multimedia information, so that more information is taken into account when obtaining the answer information and the accuracy of the obtained answer information is improved.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for question answering based on search, the method including:
receiving a search request sent by a terminal, wherein the search request carries question information input in a search interface of the terminal;
searching according to the question information to obtain multimedia information matched with the question information and answer information matched with the question information, wherein the answer information is located in the multimedia information;
and sending the multimedia information and the answer information to the terminal, wherein the terminal is used for displaying the multimedia information and the answer information in the search interface.
In some embodiments, the searching according to the question information to obtain the multimedia information matched with the question information and the answer information matched with the question information includes:
searching at least one candidate multimedia information matched with the question information;
acquiring content information of the at least one candidate multimedia information;
selecting the answer information matched with the question information from the acquired content information;
and determining the multimedia information corresponding to the selected answer information as the multimedia information matched with the question information.
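The four sub-steps above can be pictured with a minimal, self-contained sketch. Everything in it (the Media record, the whitespace tokenizer, the overlap score) is an illustrative assumption rather than the disclosed implementation, which relies on the trained models described later.

```python
# Illustrative sketch only: toy stand-ins for candidate search, content
# extraction, and answer selection; not the patent's actual models.
from dataclasses import dataclass

@dataclass
class Media:
    media_id: str   # identifier of a candidate multimedia item
    text: str       # text transcribed from its audio track

def tokenize(s: str) -> list[str]:
    # toy word segmentation: whitespace split (real systems use a segmenter)
    return s.lower().split()

def search_candidates(question: str, corpus: list[Media]) -> list[Media]:
    q_words = set(tokenize(question))
    return [m for m in corpus if q_words & set(tokenize(m.text))]

def extract_content(media: Media, n_sentences: int = 2) -> str:
    # toy "content information": the first few sentences of the transcript
    sentences = media.text.split(". ")
    return ". ".join(sentences[:n_sentences])

def match_score(question: str, content: str) -> float:
    # toy first matching degree: word-overlap ratio
    q, c = set(tokenize(question)), set(tokenize(content))
    return len(q & c) / max(len(q), 1)

def answer_search_request(question: str, corpus: list[Media]):
    candidates = search_candidates(question, corpus)           # step 1: candidate search
    contents = [(m, extract_content(m)) for m in candidates]   # step 2: content information
    media, answer = max(contents,                               # step 3: best-matching content
                        key=lambda p: match_score(question, p[1]))
    return media.media_id, answer                               # step 4: owning multimedia item

corpus = [Media("v1", "Zhang San is 30 years old this year. He lives in Beijing.")]
print(answer_search_request("How old is Zhang San this year", corpus))
```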
In some embodiments, the searching for at least one candidate multimedia information matching the question information comprises:
performing word segmentation processing on the question information to obtain at least one word;
and searching candidate multimedia information stored corresponding to the at least one word.
In some embodiments, before the searching for candidate multimedia information stored in correspondence with the at least one word, the method further comprises:
performing speech recognition on at least one piece of multimedia information to obtain text information corresponding to each piece of multimedia information;
performing word segmentation processing on the text information corresponding to each piece of multimedia information to obtain at least one word;
and correspondingly storing each obtained word and the multimedia information to which the word belongs.
In some embodiments, the selecting, from the obtained content information, the answer information that matches the question information includes:
acquiring a first matching degree of the question information and each content information;
and acquiring answer information matched with the question information according to the first matching degree of the question information and each piece of content information.
In some embodiments, the selecting, from the obtained content information, the answer information that matches the question information includes:
acquiring a first matching degree of the question information and each content information;
acquiring a second matching degree between the question information and the text information corresponding to each candidate multimedia information in the at least one candidate multimedia information;
and acquiring the answer information matched with the question information according to the first matching degree between the question information and each piece of content information and the second matching degree between the question information and the text information corresponding to the candidate multimedia information to which each piece of content information belongs.
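One possible way to combine the two matching degrees is a weighted sum, as in the sketch below; the weighted-sum form and the example weight are assumptions for illustration, since the disclosure does not fix a particular combination formula.

```python
# Hypothetical combination of the two matching degrees; the 0.7/0.3 weighting
# is an illustrative assumption, not part of the disclosure.
def combined_score(first_degree: float, second_degree: float,
                   alpha: float = 0.7) -> float:
    # first_degree: question vs. one piece of content information
    # second_degree: question vs. the full text of the candidate multimedia item
    return alpha * first_degree + (1 - alpha) * second_degree

def pick_answer(scored_contents):
    # scored_contents: iterable of (content, first_degree, second_degree)
    return max(scored_contents, key=lambda t: combined_score(t[1], t[2]))[0]

print(pick_answer([("answer A", 0.9, 0.2), ("answer B", 0.7, 0.9)]))  # -> answer B
```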
In some embodiments, the obtaining content information of the at least one candidate multimedia information comprises:
obtaining the question type to which the question information belongs, wherein the question type comprises a specified type or a non-specified type, and the specified type is the question type with fixed answer information;
and acquiring the content information of the at least one candidate multimedia information by adopting a processing mode corresponding to the question type.
In some embodiments, the obtaining of the question type to which the question information belongs includes:
and calling a classification model to classify the question information to obtain the question type to which the question information belongs.
In some embodiments, the obtaining content information of the at least one candidate multimedia information by using the processing manner corresponding to the question type includes:
if the question type to which the question information belongs is the non-specified type, obtaining, for each candidate multimedia information in the at least one candidate multimedia information, a reference number of consecutive pieces of statement information from the text information corresponding to the candidate multimedia information, and combining the reference number of pieces of statement information to obtain the content information of the candidate multimedia information.
In some embodiments, the obtaining content information of the at least one candidate multimedia information by using the processing manner corresponding to the question type includes:
and if the question type to which the question information belongs is the specified type, identifying, for each candidate multimedia information in the at least one candidate multimedia information, the text information corresponding to the candidate multimedia information to obtain the content information of the candidate multimedia information.
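The two processing modes can be summarized in a small dispatch sketch; the sentence splitting and the digit heuristic are placeholders standing in for the reading-comprehension models discussed later.

```python
# Illustrative dispatch on the question type; the heuristics are toy stand-ins.
def content_for_candidate(question_type: str, transcript: str,
                          reference_count: int = 2) -> str:
    sentences = [s for s in transcript.split(". ") if s]
    if question_type == "non-specified":
        # combine a reference number of consecutive pieces of statement information
        return ". ".join(sentences[:reference_count])
    # specified type: run an "extractor" over the transcript (stubbed here as
    # the first sentence containing a digit, purely for demonstration)
    return next((s for s in sentences if any(ch.isdigit() for ch in s)), sentences[0])

transcript = "Zhang San moved to Beijing. He is 30 years old this year."
print(content_for_candidate("specified", transcript))       # sentence with the number
print(content_for_candidate("non-specified", transcript))   # spliced consecutive sentences
```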
In some embodiments, the searching according to the question information to obtain the multimedia information matched with the question information and the answer information matched with the question information includes:
searching according to the question information to obtain multimedia information matched with the question information and first answer information matched with the question information, wherein the first answer information is located in the multimedia information;
obtaining subject information of the question information, wherein the subject information is used for indicating a subject part and a predicate part of the question information;
combining the subject information with the first answer information to obtain second answer information;
the sending the multimedia information and the answer information to the terminal includes: and sending the multimedia information and the second answer information to the terminal.
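A toy sketch of assembling the second answer information from the subject information and the first answer information follows; the string manipulation is purely illustrative and stands in for the syntactic analysis implied by the subject and predicate parts.

```python
# Illustrative only: pretend the subject information has already been parsed
# out of the question information.
def build_second_answer(question: str, first_answer: str) -> str:
    subject = question.replace("How old is", "").replace("this year", "").strip()
    return f"{subject} is {first_answer} this year"

print(build_second_answer("How old is Zhang San this year", "30 years old"))
# -> "Zhang San is 30 years old this year"
```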
According to a second aspect of the embodiments of the present disclosure, there is provided a method for question answering based on search, the method including:
receiving question information input in a search interface;
sending a search request carrying the problem information to a server;
receiving multimedia information matched with the question information and answer information matched with the question information, which are sent by the server, wherein the answer information is positioned in the multimedia information;
and displaying the multimedia information and the answer information in the search interface.
In some embodiments, the answer information is displayed in a floating manner on an upper layer of the multimedia information; or,
the answer information is displayed in a profile area of the multimedia information.
According to a third aspect of the embodiments of the present disclosure, there is provided a search-based question answering apparatus, the apparatus including:
a request receiving unit configured to receive a search request sent by a terminal, wherein the search request carries question information input in a search interface of the terminal;
a search unit configured to perform searching according to the question information to obtain multimedia information matched with the question information and answer information matched with the question information, wherein the answer information is located in the multimedia information;
an information sending unit configured to execute sending the multimedia information and the answer information to the terminal, wherein the terminal is used for displaying the multimedia information and the answer information in the search interface.
In some embodiments, the search unit includes:
a search subunit configured to perform a search for at least one candidate multimedia information matching the question information;
a content acquisition subunit configured to perform acquisition of content information of the at least one candidate multimedia information;
a selection subunit configured to perform selection of the answer information matching the question information from the acquired content information;
and a determining subunit configured to determine the multimedia information corresponding to the selected answer information as the multimedia information matched with the question information.
In some embodiments, the search subunit is configured to perform:
performing word segmentation processing on the question information to obtain at least one word;
and searching candidate multimedia information stored corresponding to the at least one word.
In some embodiments, the apparatus further comprises:
the recognition unit is configured to perform voice recognition on at least one piece of multimedia information to obtain text information corresponding to each piece of multimedia information;
the word segmentation unit is configured to perform word segmentation processing on the text information corresponding to each piece of multimedia information to obtain at least one word;
and the storage unit is configured to perform corresponding storage of each obtained word and the multimedia information to which the word belongs.
In some embodiments, the selection subunit is configured to perform:
acquiring a first matching degree of the question information and each content information;
and acquiring answer information matched with the question information according to the first matching degree of the question information and each piece of content information.
In some embodiments, the selection subunit is configured to perform:
acquiring a first matching degree of the question information and each content information;
acquiring a second matching degree between the question information and the text information corresponding to each candidate multimedia information in the at least one candidate multimedia information;
and acquiring the answer information matched with the question information according to the first matching degree between the question information and each piece of content information and the second matching degree between the question information and the text information corresponding to the candidate multimedia information to which each piece of content information belongs.
In some embodiments, the content obtaining subunit is further configured to perform:
obtaining the question type to which the question information belongs, wherein the question type comprises a specified type or a non-specified type, and the specified type is the question type with fixed answer information;
and acquiring the content information of the at least one candidate multimedia information by adopting a processing mode corresponding to the question type.
In some embodiments, the content obtaining subunit is configured to call a classification model to classify the question information to obtain the question type to which the question information belongs.
In some embodiments, the content acquisition subunit is configured to perform:
if the question type to which the question information belongs is the non-specified type, obtaining, for each candidate multimedia information in the at least one candidate multimedia information, a reference number of consecutive pieces of statement information from the text information corresponding to the candidate multimedia information, and combining the reference number of pieces of statement information to obtain the content information of the candidate multimedia information.
In some embodiments, the content acquisition subunit is configured to perform:
and if the question type to which the question information belongs is the specified type, identifying, for each candidate multimedia information in the at least one candidate multimedia information, the text information corresponding to the candidate multimedia information to obtain the content information of the candidate multimedia information.
In some embodiments, the search unit is configured to perform: searching according to the question information to obtain multimedia information matched with the question information and first answer information matched with the question information, wherein the first answer information is located in the multimedia information; obtaining subject information of the question information, wherein the subject information is used for indicating a subject part and a predicate part of the question information; and combining the subject information with the first answer information to obtain second answer information;
the information sending unit is configured to execute sending the multimedia information and the second answer information to the terminal.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a search-based question answering apparatus, the apparatus including:
a first receiving unit configured to perform receiving question information input in a search interface;
a sending unit configured to execute sending a search request carrying the question information to a server;
a second receiving unit configured to perform receiving multimedia information matched with the question information and answer information matched with the question information, which are sent by the server, wherein the answer information is located in the multimedia information;
a display unit configured to perform displaying the multimedia information and the answer information in the search interface.
In some embodiments, the answer information is displayed in a floating manner on an upper layer of the multimedia information; or,
the answer information is displayed in a profile area of the multimedia information.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a server, the server including:
one or more processors;
volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the search based question-answering method of the first aspect.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a terminal, including:
one or more processors;
volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the search based question-answering method of the second aspect.
According to a seventh aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium. When a processor of a server executes the program code in the storage medium, the server is enabled to perform the search-based question answering method according to the first aspect; or, when a processor of a terminal executes the program code in the storage medium, the terminal is enabled to perform the search-based question answering method according to the second aspect.
According to an eighth aspect of embodiments of the present disclosure, there is provided a computer program product, wherein the program code of the computer program product, when executed by a processor of a terminal, enables the terminal to perform the search based question-answering method according to the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the solution provided by the embodiments of the present disclosure, the search scenario and the intelligent question-answer scenario can be fused to provide a question-answer function in a search scenario. Multimedia information matching the question information is found through the search function, and on that basis, answer information matching the question information within the multimedia information can also be obtained through the question-answer function, so that the user can intuitively view the answer information and the multimedia information corresponding to the question information. This increases the amount of information provided, meets the user's search needs, and improves the search effect. Moreover, the information contained in the multimedia information is fully considered when the answer information is acquired, so that the answer information is determined from the multimedia information itself, which improves the accuracy of the acquired answer information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a block diagram illustrating one implementation environment in accordance with an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a search based question-answering method in accordance with an exemplary embodiment.
FIG. 3 is a flow diagram illustrating a search based question-answering method in accordance with an exemplary embodiment.
FIG. 4 is a flow diagram illustrating a search based question-answering method in accordance with an exemplary embodiment.
FIG. 5 is a flow diagram illustrating a process for storing words and associated multimedia information, according to an example embodiment.
FIG. 6 is a flow diagram illustrating the building of an inverted index library, according to an example embodiment.
Fig. 7 is a flow diagram illustrating a sorting of content information according to an example embodiment.
Fig. 8 is a flow diagram illustrating a sorting of content information according to an example embodiment.
FIG. 9 is a schematic diagram illustrating a search interface in accordance with an exemplary embodiment.
FIG. 10 is a diagram illustrating a search interface in accordance with an exemplary embodiment.
FIG. 11 is a diagram illustrating a search interface in accordance with an exemplary embodiment.
FIG. 12 is a flow diagram illustrating a search based question-answering method in accordance with an exemplary embodiment.
Fig. 13 is a schematic diagram illustrating a structure of a search-based question answering apparatus according to an exemplary embodiment.
Fig. 14 is a schematic structural diagram illustrating another search-based question answering apparatus according to an exemplary embodiment.
Fig. 15 is a schematic diagram illustrating a structure of a search-based question answering apparatus according to an exemplary embodiment.
Fig. 16 is a block diagram illustrating a terminal according to an example embodiment.
Fig. 17 is a schematic diagram illustrating a configuration of a server according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
First, the terms referred to in the present application are explained:
multimedia information: the multimedia information is information adopting expression modes such as audio, video, image and the like. For example, the multimedia information is audio information, which can be converted into text information. Or the multimedia information is video information, and audio information in the video information can be converted into text information.
Question information: information input by a user, according to which corresponding answer information can be searched for. The question information can take the form of interrogative sentences, rhetorical questions, and the like, and expresses the user's question.
Question type: each piece of question information corresponds to a question type, and the question types include a specified type and a non-specified type.
The specified type refers to a question type having fixed answer information. For example, if the question information is "How old is Zhang San this year", the question information has fixed answer information and therefore belongs to the specified type.
The non-specified type is any question type other than the specified type. For example, if the question information is "What should I do when I am ill", the question information has a variety of possible answer information rather than fixed answer information, and therefore belongs to the non-specified type.
The method provided by the embodiment of the disclosure can be applied to video search scenes. The terminal displays a search interface in the video search application, if a user needs to ask a question through the video search application, question information is input in the search interface, and by adopting the method provided by the embodiment of the disclosure, video information and answer information matched with the question information are obtained, and then the video information and the answer information are displayed in the search interface.
Or, the method provided by the embodiments of the present disclosure can be applied to an audio search scenario. The terminal displays a search interface in an audio search application; if a user needs to ask a question through the audio search application, question information is input in the search interface. By adopting the method provided by the embodiments of the present disclosure, audio information and answer information matched with the question information can be obtained, and the audio information and the answer information are then displayed in the search interface.
The question and answer method based on search provided by the embodiment of the disclosure is applied to the terminal. Alternatively, the question answering method based on search provided by the embodiment of the present disclosure is applied to the terminal 101 and the server 102 shown in fig. 1. The terminal 101 and the server 102 are connected via a communication network.
The terminal 101 has installed therein a target application served by the server 102. The terminal 101 can implement functions such as data transmission, search, question answering, and the like through the target application.
In some embodiments, the target application is a target application in the operating system of the terminal 101 or a target application provided by a third party. For example, the target application is a video application, an audio application, or other type of application, among others. When the target application is a video application, the video application can have a video sharing function, a video searching function, a question answering function based on video searching, a video recommending function and the like.
The server 102 can serve any target application. The server 102 has a storage function and is capable of storing multimedia information uploaded by a target application. In addition, the server 102 has a search function, and can perform a search based on the question information uploaded by the terminal 101 to obtain multimedia information matching the question information. The server 102 also has a question answering function, and can obtain answer information based on question information uploaded by the terminal 101. Server 102 can combine the search function with the question-answering function to implement a search-based question-answering function.
The terminal can be various terminals such as a mobile phone, a tablet computer, a computer and the like, and the server can be a server, a server cluster consisting of a plurality of servers, or a cloud computing service center.
Fig. 2 is a flowchart illustrating a search-based question answering method according to an exemplary embodiment, applied to a server. Referring to fig. 2, the method includes:
Step 201, receiving a search request sent by a terminal.
The search request carries question information input in a search interface of the terminal.
Step 202, searching according to the question information to obtain multimedia information matched with the question information and answer information matched with the question information.
Wherein the answer information is located in the multimedia information.
And step 203, sending the multimedia information and the answer information to the terminal.
The terminal is used for displaying the multimedia information and the answer information in the search interface.
The embodiments of the present application provide a search-based question answering method that fuses the search scenario with the intelligent question-answer scenario and provides a question-answer function in a search scenario. Multimedia information matching the question information is found through the search function, and on that basis, answer information matching the question information within the multimedia information can also be obtained through the question-answer function, so that the user can intuitively view the answer information and the multimedia information corresponding to the question information. This increases the amount of information provided, meets the user's search needs, and improves the search effect. Moreover, the information contained in the multimedia information is fully considered when the answer information is acquired, so that the answer information is determined from the multimedia information itself, which improves the accuracy of the acquired answer information.
In one possible implementation manner, the searching according to the question information to obtain the multimedia information matched with the question information and the answer information matched with the question information includes:
searching at least one candidate multimedia information matched with the question information;
acquiring content information of at least one candidate multimedia information;
selecting answer information matched with the question information from the acquired content information;
and determining the multimedia information corresponding to the selected answer information as the multimedia information matched with the question information.
In some embodiments, searching for at least one candidate multimedia information that matches the question information comprises:
performing word segmentation processing on the question information to obtain at least one word;
candidate multimedia information stored in correspondence with at least one word is searched.
In some embodiments, before the searching for the candidate multimedia information stored in correspondence with the at least one word, the method further includes:
performing speech recognition on at least one piece of multimedia information to obtain text information corresponding to each piece of multimedia information;
performing word segmentation processing on the text information corresponding to each piece of multimedia information to obtain at least one word;
and correspondingly storing each obtained word and the multimedia information to which the word belongs.
In some embodiments, selecting answer information matching the question information from the retrieved content information includes:
acquiring a first matching degree of the question information and each content information;
and acquiring answer information matched with the question information according to the first matching degree of the question information and each piece of content information.
In some embodiments, selecting answer information matching the question information from the obtained content information includes:
acquiring a first matching degree of the question information and each content information;
acquiring a second matching degree between the question information and the text information corresponding to each candidate multimedia information in the at least one candidate multimedia information;
and acquiring the answer information matched with the question information according to the first matching degree between the question information and each piece of content information and the second matching degree between the question information and the text information corresponding to the candidate multimedia information to which each piece of content information belongs.
In some embodiments, obtaining content information for at least one candidate multimedia information comprises:
acquiring the question type to which the question information belongs, wherein the question type comprises a specified type or a non-specified type, and the specified type is the question type with fixed answer information;
and acquiring the content information of the at least one candidate multimedia information by adopting a processing mode corresponding to the question type.
In some embodiments, obtaining the type of issue to which the issue information pertains includes:
and calling a classification model to classify the question information to obtain the question type to which the question information belongs.
In some embodiments, obtaining content information of at least one candidate multimedia information by using a processing manner corresponding to a question type includes:
and if the question type to which the question information belongs is the non-specified type, obtaining, for each candidate multimedia information in the at least one candidate multimedia information, a reference number of consecutive pieces of statement information from the text information corresponding to the candidate multimedia information, and combining the reference number of pieces of statement information to obtain the content information of the candidate multimedia information.
In some embodiments, obtaining content information of at least one candidate multimedia information by using a processing manner corresponding to a question type includes:
and if the question type to which the question information belongs is the specified type, identifying, for each candidate multimedia information in the at least one candidate multimedia information, the text information corresponding to the candidate multimedia information to obtain the content information of the candidate multimedia information.
In some embodiments, searching according to the question information to obtain multimedia information matched with the question information and answer information matched with the question information includes:
searching according to the question information to obtain multimedia information matched with the question information and first answer information matched with the question information, wherein the first answer information is positioned in the multimedia information;
acquiring subject information of the question information, wherein the subject information is used for indicating a subject part and a predicate part of the question information;
combining the subject information with the first answer information to obtain second answer information;
sending multimedia information and answer information to the terminal, including: and sending the multimedia information and the second answer information to the terminal.
Fig. 3 is a flowchart illustrating a search-based question answering method according to an exemplary embodiment, applied to a terminal. Referring to fig. 3, the method includes:
Step 301, receiving question information input in a search interface.
Step 302, sending a search request carrying the question information to a server.
Step 303, receiving multimedia information matched with the question information and answer information matched with the question information, which are sent by the server, wherein the answer information is located in the multimedia information.
Step 304, displaying the multimedia information and the answer information in the search interface.
The embodiments of the present application provide a search-based question answering method that fuses the search scenario with the intelligent question-answer scenario and provides a question-answer function in a search scenario. Multimedia information matching the question information is found through the search function, and on that basis, answer information matching the question information within the multimedia information can also be obtained through the question-answer function, so that the user can intuitively view the answer information and the multimedia information corresponding to the question information. This increases the amount of information provided, meets the user's search needs, and improves the search effect. Moreover, the information contained in the multimedia information is fully considered when the answer information is acquired, so that the answer information is determined from the multimedia information itself, which improves the accuracy of the acquired answer information.
In some embodiments, the answer information is displayed in a floating manner on the upper layer of the multimedia information; or,
the answer information is displayed in a profile area of the multimedia information.
Fig. 4 is a flowchart illustrating a search based question-answering method according to an exemplary embodiment, referring to fig. 4, the method including:
step 401, the terminal receives question information input in a search interface.
In the embodiment of the application, a user can input question information in a search interface displayed by a terminal, and the terminal can receive the question information input by the user through the search interface and search based on the question information so as to display multimedia information and answer information matched with the question information.
Wherein the search interface is displayed in the target application. For example, the target application is a video application, an audio application, or other type of application. And the target application can have various functions such as a sharing function, a searching function, an intelligent question and answer function, a recommendation function, and the like.
The following description will take the target application as a video application as an example. If the target application is a video application, the video application can have a video sharing function, a video searching function, a question answering function based on video searching, a video recommending function and the like.
The terminal can log in the video application based on the user identification, and the user can use any function in the video application. The user identification is a mobile phone number, a user nickname, a user account, or other identification of the user.
For example, if a user uses a terminal to shoot a scene video, the scene video can be uploaded to a video application logged in based on a user identifier, and then the scene video can be shared with other users for watching.
Or if the user needs to watch the fixed type of video, searching to obtain the video matched with the searching information by adopting the video searching function of the video application, and further watching the searched video.
Or, if the user needs to inquire the answer of a question, the user searches for answer information and multimedia information matched with the question information by adopting a question-answer function based on video search.
Or the video application can also automatically acquire other videos associated with the interests of the user according to the historical play records of the user and recommend the acquired videos to the user.
In some embodiments, the search interface includes a search box in which the user can input question information, and the terminal can obtain the question information input by the user from the search box.
For example, the user inputs voice information in the search box, the terminal performs voice recognition on the voice information, and text information corresponding to the voice information is determined as question information. Or, the user inputs text information in the search box, and the terminal determines the acquired text information as question information.
Step 402, the terminal sends a search request carrying problem information to the server.
In step 403, the server receives the search request sent by the terminal.
If the search request includes question information, the terminal can send the search request to the server, and the server can perform searching based on the question information in the search request.
Step 404, the server searches for at least one candidate multimedia information matching the question information.
In the embodiment of the application, the server can search according to the acquired question information to acquire the multimedia information matched with the question information and the answer information in the multimedia information.
After the server acquires the question information, searching is carried out based on the question information, at least one candidate multimedia information matched with the question information can be acquired, the acquired candidate multimedia information comprises information matched with the question information, and then answer information matched with the question information can be acquired from the candidate multimedia information subsequently.
In some embodiments, the question information is subjected to word segmentation processing to obtain at least one word; then, for each word in the at least one word, at least one piece of multimedia information stored in correspondence with the word is searched for, and the at least one piece of multimedia information found in correspondence with the at least one word is determined as the candidate multimedia information.
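As an illustration only, the lookup described here might look like the following, assuming the word-to-multimedia mapping built in advance (step 504 below) is held in memory as a dictionary:

```python
# Toy version of the candidate search: segment the question and look each word
# up in a word -> multimedia-id mapping; the whitespace split is a placeholder.
def find_candidates(question: str, word_to_media: dict[str, set[str]]) -> set[str]:
    words = question.lower().split()          # placeholder word segmentation
    candidates: set[str] = set()
    for word in words:
        candidates |= word_to_media.get(word, set())
    return candidates

index = {"zhang": {"v1"}, "san": {"v1", "v2"}, "beijing": {"v3"}}
print(find_candidates("How old is Zhang San", index))   # {'v1', 'v2'}
```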
In some embodiments, in order to search for the multimedia information and the answer information matching with the question information according to the question information, the words and the multimedia information need to be stored correspondingly, and then the search is performed according to the stored words and the corresponding multimedia information. Then before step 404, see fig. 5, the method further comprises steps 501-504:
Step 501, the server acquires at least one piece of multimedia information.
The multimedia information is information uploaded by a user. When any user uploads a piece of multimedia information, the multimedia information uploaded by that user can be acquired; if a plurality of users upload multimedia information, a plurality of pieces of multimedia information can be acquired. Alternatively, the multimedia information is information acquired from a database in which a plurality of pieces of multimedia information are stored; if multimedia information needs to be acquired, at least one piece of multimedia information can be acquired from the database. Alternatively, the multimedia information is stored information: a plurality of pieces of multimedia information can be acquired and stored, so that at least one piece of multimedia information is acquired from the stored plurality of pieces of multimedia information.
Step 502, the server performs speech recognition on the at least one piece of multimedia information to obtain text information corresponding to each piece of multimedia information.
If each piece of multimedia information includes audio information, the audio information in each piece of multimedia information can be recognized using a speech recognition technique to obtain the text information corresponding to each piece of multimedia information.
In some embodiments, the Speech Recognition technology is an ASR (Automatic Speech Recognition) technology, such as any of a phonetic and acoustic based method, a random model method, a neural network based method, or a probabilistic linguistic analysis method.
It should be noted that step 502 in the embodiment of the present application is an optional step and is described only as an example of performing speech recognition on multimedia information. In another embodiment, text extraction can instead be performed on each piece of the at least one piece of multimedia information to obtain the text information of each piece of multimedia information.
In some embodiments, a plurality of images included in each piece of multimedia information are acquired, and text extraction is performed on the text in each image to obtain the text information of each image; the text information extracted from the images is the text information of the multimedia information.
Step 503, the server performs word segmentation processing on the text information corresponding to each piece of multimedia information to obtain at least one word.
That is, for each piece of multimedia information in the obtained at least one piece of multimedia information, word segmentation processing is performed on the text information corresponding to that piece of multimedia information to obtain at least one word corresponding to it.
The word segmentation processing comprises word segmentation of text information included in the multimedia information. In addition, the word segmentation process can also perform part-of-speech tagging on words obtained by word segmentation so as to indicate the part-of-speech of each word.
In some embodiments, the tokenization process includes any of a forward maximum matching method, a reverse maximum matching method, a shortest path tokenization, a machine learning method, or a statistical tokenization.
And step 504, correspondingly storing each obtained word and the multimedia information to which the word belongs.
After word segmentation processing is carried out on each multimedia information, at least one word can be obtained, the multimedia information to which each word belongs can be determined, and each word and the multimedia information to which each word belongs are correspondingly stored. And searching at least one piece of multimedia information to which any word belongs according to the stored words and the corresponding multimedia information.
In some embodiments, the server can also construct an inverted index library, and each word and the multimedia information to which the word belongs are stored in the inverted index library.
In the process of constructing the inverted index library, each word is used as an index identifier, and each index identifier corresponds to at least one piece of multimedia information.
For example, taking multimedia information as a short video as an example for explanation, in the embodiment of the present application, a process of constructing an inverted index library is shown in fig. 6, first performing voice recognition on the short video to obtain text information of each short video, then performing word segmentation processing on the text information of each short video to obtain at least one word, and establishing the inverted index library according to the short video to which each word belongs.
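A minimal sketch of building such an inverted index library follows; the transcription function is a stub standing in for speech recognition, and the whitespace split stands in for word segmentation.

```python
# Illustrative construction of the inverted index library of Fig. 6; the
# "transcribe" stub and the tiny corpus are assumptions for demonstration only.
from collections import defaultdict

def transcribe(video_id: str) -> str:
    # stand-in for speech recognition over the short video's audio track
    return {"v1": "zhang san is 30 years old", "v2": "beijing travel tips"}[video_id]

def build_inverted_index(video_ids: list[str]) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    for vid in video_ids:
        for word in transcribe(vid).split():   # placeholder word segmentation
            index[word].add(vid)               # each word is an index identifier
    return dict(index)

print(build_inverted_index(["v1", "v2"]))
```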
The short video is a video with the duration not exceeding a preset duration. The preset time length is set by a terminal, or set by a server, or set by other modes. The predetermined time period may be 20 seconds, 25 seconds, 30 seconds, or other values.
It should be noted that, in the embodiment of the present application, steps 501 to 504 can be repeatedly performed to update each word and the corresponding multimedia information, thereby ensuring that the multimedia information corresponding to each word is more comprehensive.
According to the scheme provided by the embodiment of the application, words in the multimedia information can be acquired, each word and the affiliated multimedia information are correspondingly stored in advance, at least one candidate multimedia information matched with the problem information can be acquired subsequently in a searching mode, the efficiency of acquiring the candidate multimedia information is improved, information in the multimedia information is considered when each word and the affiliated multimedia information are correspondingly stored, the information amount is improved, and the accuracy of subsequent searching can be improved.
Step 405, the server obtains content information of at least one candidate multimedia information.
Wherein, the content information of the candidate multimedia information is information extracted from the multimedia information.
In some embodiments, the question type to which the question information belongs is obtained, and the content information of the at least one candidate multimedia information is obtained by adopting a processing mode corresponding to the question type. The question type comprises a specified type or a non-specified type, and the specified type refers to the question type with fixed answer information.
Because each question information has a question type to which the question information belongs, and different question information may correspond to different question types, the question type to which the obtained question information belongs is determined, and then the content information of at least one candidate multimedia information is obtained according to a processing mode corresponding to the question type.
In some embodiments, the following manner can be employed to obtain the question type to which the question information belongs: calling a classification model to classify the question information to obtain the question type to which the question information belongs.
In the embodiment of the present application, before the classification model is called to obtain the question type to which the question information belongs, the classification model also needs to be trained to obtain a trained classification model.
In the process of training the classification model, sample question information and the sample question type to which each piece of sample question information belongs are obtained; the sample question information is input into the classification model to obtain a training question type of the sample question information; the sample question type to which each piece of sample question information belongs is compared with the training question type; and the classification model is modified according to the comparison result to obtain a modified classification model.
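As one concrete, entirely illustrative way to realize such a classification model, a bag-of-words classifier could be trained on labelled sample question information; the library choice (scikit-learn) and the tiny training set below are assumptions, not part of the disclosure.

```python
# Hypothetical realisation of the classification model as a TF-IDF +
# logistic-regression pipeline; the samples are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = ["how old is zhang san this year", "what should I do when I feel ill"]
labels = ["specified", "non-specified"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(samples, labels)   # training: compare predictions with labels and adjust
print(classifier.predict(["how old is li si this year"]))
```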
In some embodiments, for question information of different question types, the manner of obtaining content information from candidate multimedia information includes the following two cases:
(1) If the question type to which the question information belongs is the non-specified type, then, for each candidate multimedia information in the at least one candidate multimedia information, a reference number of consecutive pieces of statement information are obtained from the text information corresponding to the candidate multimedia information, and the reference number of pieces of statement information are combined to obtain the content information of the candidate multimedia information.
If the question type to which the question information belongs is a non-specified type, the question information has non-fixed answer information, that is, the answer information of the question information includes a plurality of types, and in this case, content information obtained by splicing statement information in each candidate multimedia information can be regarded as information associated with the question information.
The reference number is set by a terminal, or set by an operator, or is a default value, or set in other manners. The reference number may be 2, 3, 4 or other numbers.
(2) If the question type to which the question information belongs is the specified type, then, for each candidate multimedia information in the at least one candidate multimedia information, the text information corresponding to the candidate multimedia information is identified to obtain the content information of the candidate multimedia information.
If the question type to which the question information belongs is the specified type, the question information corresponds to fixed answer information, that is, the answer information of the question information includes only one kind. In this case, it can be considered that the candidate multimedia information includes the fixed answer information matched with the question information, and that the answer information can be obtained by identifying the multimedia information. Therefore, for each candidate multimedia information in the at least one candidate multimedia information, the text information corresponding to the candidate multimedia information is identified, and the content information associated with the question information is identified from the candidate multimedia information.
In some embodiments, a reading understanding technology is used to identify text information corresponding to the candidate multimedia information, so as to obtain content information of the candidate multimedia information.
In some embodiments, if the question type to which the question information belongs is the specified type, it can further be determined whether the question information of the specified type belongs to an entity-specified type or a number-specified type.
The number-specified type is a question type whose answer information is a number, and the entity-specified type is a question type whose answer information is an entity. Entities include person names, place names, organization names, and so on.
If the question type to which the question information belongs is the number-specified type, a number extractor is used to identify the text information corresponding to the candidate multimedia information to obtain the content information of the candidate multimedia information.
If the question type to which the question information belongs is the entity-specified type, an entity extractor is used to identify the text information corresponding to the candidate multimedia information to obtain the content information of the candidate multimedia information.
The extractors in the embodiments of the present application are all extractors that use reading comprehension technology.
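The division of labour between the number extractor and the entity extractor can be illustrated with simple regular-expression stand-ins; real extractors would be reading-comprehension models, which these placeholders do not attempt to reproduce.

```python
# Toy stand-ins for the number extractor and entity extractor described above.
import re

def number_extractor(text: str) -> list[str]:
    return re.findall(r"\d+(?:\.\d+)?", text)

def entity_extractor(text: str) -> list[str]:
    # placeholder: treat capitalised tokens as candidate entity mentions
    return re.findall(r"\b[A-Z][a-z]+\b", text)

def extract_specified_content(subtype: str, transcript: str) -> list[str]:
    if subtype == "number":
        return number_extractor(transcript)
    return entity_extractor(transcript)

print(extract_specified_content("number", "Zhang San is 30 years old"))   # ['30']
print(extract_specified_content("entity", "Zhang San lives in Beijing"))  # ['Zhang', 'San', 'Beijing']
```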
In step 406, the server selects answer information matching the question information from the acquired content information.
After the content information in the multimedia information is acquired through the steps, the content information matched with the question information can be acquired from the acquired content information, and the content information is used as answer information matched with the question information.
In the embodiment of the application, the matching degree of the content information and the question information can be obtained in two ways, and then the answer information matched with the question information is determined according to the obtained matching degree:
The first method: in the process of selecting answer information matched with the question information, the answer information matched with the question information is acquired according to the first matching degree between the question information and each piece of acquired content information.
The first matching degree between the acquired answer information and the question information is greater than the first matching degree between the question information and the other content information.
In some embodiments, the content information is ranked according to its first matching degree with the question information, a preset number of content information items are obtained in ranking order, and those items are determined as the answer information matched with the question information.
The preset number is set by the terminal, set by an operator, or set in another manner; for example, it may be 1, 2, 3, or another value.
In some embodiments, content information whose first matching degree with the question information is greater than a preset matching degree is obtained, and the obtained content information is determined as the answer information matched with the question information.
The preset matching degree is set by the terminal, set by an operator, or set in another manner; for example, it may be 0.8, 0.9, or another value.
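A minimal sketch of this first selection strategy, assuming the content information has already been scored with a first matching degree (function and parameter names are illustrative, not the patent's):

```python
def select_answers(scored_contents, preset_number=None, preset_match=None):
    """scored_contents: list of (content_text, first_matching_degree).
    Keep the top `preset_number` items, or all items above `preset_match`."""
    ranked = sorted(scored_contents, key=lambda pair: pair[1], reverse=True)
    if preset_number is not None:
        return ranked[:preset_number]
    if preset_match is not None:
        return [(text, score) for text, score in ranked if score > preset_match]
    return ranked

scored = [("22 years old", 0.93), ("born in 1998", 0.71), ("lives in Beijing", 0.40)]
print(select_answers(scored, preset_number=1))   # top-1 answer
print(select_answers(scored, preset_match=0.8))  # threshold-based answer
```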
In addition, in the embodiment of the present application, the answer information can be determined according to the first matching degree between each piece of content information and the question information. Determining this first matching degree covers the following two cases:
(1) If the question type to which the question information belongs is a non-specified type, after the content information is acquired in the processing manner corresponding to the non-specified type, a matching model is called to acquire the first matching degree between each piece of content information and the question information.
For example, as shown in fig. 7, if the question type to which the question information belongs is a non-specified type, a reference number of consecutive sentence information items in the text information corresponding to the candidate multimedia information are spliced to obtain the content information, the first matching degree between each piece of content information and the question information is acquired, and the content information is then ranked according to these first matching degrees to obtain the answer information matched with the question information.
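The splicing of a reference number of consecutive sentences in this flow can be sketched as follows (the function name, window size, and example transcript are illustrative, not taken from the patent):

```python
def build_content_windows(sentences, reference_number=3):
    """Splice every run of `reference_number` consecutive sentences into one
    content-information candidate (sliding window over the transcript)."""
    if len(sentences) <= reference_number:
        return [" ".join(sentences)]
    return [
        " ".join(sentences[i:i + reference_number])
        for i in range(len(sentences) - reference_number + 1)
    ]

# Example: transcript sentences recognized from one candidate video
transcript = [
    "The bridge opened in 1937.",
    "It spans the Golden Gate strait.",
    "Its main span is 1280 meters long.",
    "It is painted in international orange.",
]
print(build_content_windows(transcript, reference_number=2))
```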
In some embodiments, the matching model is trained before it is called. During training, sample question information, sample answer information corresponding to each piece of sample question information, and the sample matching degree between the sample question information and the sample answer information are obtained; the sample question information and the corresponding sample answer information are input into the matching model to obtain a predicted matching degree; and the matching model is adjusted according to the difference between the predicted matching degree and the sample matching degree to obtain the trained matching model.
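The training loop just described could be sketched as a small regression against the sample matching degree. The bag-of-words features and the tiny PyTorch scorer below are illustrative stand-ins, since the patent does not specify the matching model's architecture:

```python
import torch
import torch.nn as nn

# Toy vocabulary-indexed bag-of-words features; a real system would use a
# pretrained text encoder. All names and shapes here are illustrative.
VOCAB = ["how", "old", "zhang", "san", "22", "years", "beijing", "lives"]

def bow(text):
    tokens = text.lower().split()
    return torch.tensor([float(tokens.count(w)) for w in VOCAB])

class MatchingModel(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, question_vec, answer_vec):
        return torch.sigmoid(self.scorer(torch.cat([question_vec, answer_vec], dim=-1)))

# Sample question information, sample answer information, sample matching degree
samples = [
    ("how old zhang san", "22 years old", 0.95),
    ("how old zhang san", "lives beijing", 0.10),
]

model = MatchingModel(len(VOCAB))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):  # adjust the model according to predicted vs. sample degree
    for question, answer, degree in samples:
        predicted = model(bow(question), bow(answer)).squeeze()
        loss = loss_fn(predicted, torch.tensor(degree))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```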
(2) If the question type to which the question information belongs is a specified type, the first matching degree between each piece of content information and the question information can be acquired in the process of identifying the multimedia information.
For example, as shown in fig. 8, if the question type to which the question information belongs is a specified type, an extractor is determined and used to extract content information from the text information corresponding to the candidate multimedia information; during extraction, the first matching degree between each piece of content information and the question information is determined, and the content information is ranked according to these first matching degrees to obtain the answer information matched with the question information.
The second method: the first matching degree between each piece of content information and the question information is acquired, the second matching degree between the question information and the text information corresponding to each candidate multimedia information in the at least one candidate multimedia information is acquired, and the answer information matched with the question information is acquired according to the first matching degree of each piece of content information and the second matching degree of the candidate multimedia information to which that content information belongs.
After the first matching degree between each piece of content information and the question information is obtained, the second matching degree between the candidate multimedia information and the question information is obtained. The product of the first matching degree of each piece of content information and the second matching degree of the text information corresponding to the candidate multimedia information to which that content information belongs is taken as the third matching degree of that content information, and the answer information matched with the question information is then obtained according to the third matching degree of each piece of content information.
The third matching degree between the acquired answer information and the question information is greater than the third matching degree between the question information and the other content information.
In some embodiments, the content information is ranked according to its third matching degree, a preset number of content information items are obtained in ranking order, and those items are determined as the answer information matched with the question information.
The preset number is set by the terminal, set by an operator, or set in another manner; for example, it may be 1, 2, 3, or another value.
In some embodiments, content information whose third matching degree is greater than a preset matching degree is obtained, and the obtained content information is determined as the answer information matched with the question information.
The preset matching degree is set by the terminal, set by an operator, or set in another manner; for example, it may be 0.8, 0.9, or another value.
In the embodiment of the application, both the matching degree between the content information and the question information and the matching degree between the question information and the candidate multimedia information to which the content information belongs are considered. Combining the two matching degrees improves the accuracy of the obtained third matching degree, and therefore improves the accuracy of the answer information determined according to the third matching degree.
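A minimal sketch of this second strategy, combining the passage-level first matching degree with the document-level second matching degree by multiplication (all names and values are illustrative):

```python
def rank_by_third_degree(contents, doc_scores):
    """contents: list of (content_text, multimedia_id, first_matching_degree).
    doc_scores: {multimedia_id: second matching degree of its text vs. the question}.
    The third matching degree is the product of the two."""
    scored = [
        (text, media_id, first * doc_scores[media_id])
        for text, media_id, first in contents
    ]
    return sorted(scored, key=lambda item: item[2], reverse=True)

contents = [("22 years old", "video_A", 0.90), ("about 20", "video_B", 0.85)]
doc_scores = {"video_A": 0.95, "video_B": 0.40}
print(rank_by_third_degree(contents, doc_scores))  # video_A's span ranks first
```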
Step 407, the server determines the multimedia information corresponding to the selected answer information as the multimedia information matched with the question information.
In the embodiment of the application, the selected answer information is obtained from the content information of the multimedia information, so the answer information corresponds to the multimedia information. After the answer information matched with the question information is determined, the multimedia information corresponding to the answer information can be obtained and determined as the multimedia information matched with the question information.
Step 408, the server sends the multimedia information and the answer information to the terminal.
And step 409, the terminal receives the multimedia information matched with the question information and answer information matched with the question information, which are sent by the server.
And step 410, the terminal displays the multimedia information and the answer information in a search interface.
The terminal receives the multimedia information and the answer information sent by the server, both of which are matched with the question information; after receiving them, the terminal can display the multimedia information and the answer information in the search interface.
Since the answer information is located in the multimedia information, the two are in a corresponding relationship; therefore, when the multimedia information and the answer information are displayed in the search interface, they are displayed correspondingly.
For example, as shown in fig. 9, multimedia information is displayed in the search interface, and answer information matching the question information is also displayed in each multimedia information. In addition, referring to fig. 9, a search box in which question information input by the user is displayed is also displayed at the top of the search interface.
In some embodiments, the multimedia information and the answer information are displayed in the search interface, including any of:
(1) the answer information is displayed on the upper layer of the multimedia information in a floating mode.
In some embodiments, when the multimedia information is displayed in the search interface, a floating frame can further be displayed over the multimedia information, and the answer information is displayed in the floating frame. This informs the user of the answer information matched with the question information while still allowing the user to view the multimedia information matched with the question information.
For example, as shown in fig. 10, multimedia information a and multimedia information B are displayed in the search interface, and answer information is displayed in a floating manner on the upper layer of the multimedia information a, and answer information is also displayed in a floating manner on the upper layer of the multimedia information B.
(2) The answer information is displayed in the brief section of the multimedia information.
When the multimedia information is displayed in the search interface, it further includes a brief description area, and the answer information can be displayed in this area so that the answer information is presented together with the multimedia information.
Wherein the profile area includes profile information for the multimedia information. For example, the profile area includes a title of the multimedia information, a content summary of the multimedia information, and answer information of the question information, etc.
For example, as shown in fig. 11, multimedia information A and multimedia information B are displayed in the search interface; a brief section 1 containing answer information is displayed above multimedia information A, and a brief section 2 containing answer information is displayed above multimedia information B.
It should be noted that the embodiment of the present application is described only by taking the display of the multimedia information matched with the question information and the answer information matched with the question information as an example. In another embodiment, the server can also search according to the question information to obtain multimedia information matched with the question information and first answer information matched with the question information, obtain subject information of the question information, and combine the subject information with the first answer information to obtain second answer information. The server then sends the multimedia information and the second answer information to the terminal, and after receiving them, the terminal can display the multimedia information and the second answer information in the search interface.
The first answer information is located in the multimedia information, and the subject information is used for indicating a subject part and a predicate part of the question information. For example, if the question information is "How old is Zhang San this year" and the obtained first answer information is "22 years old", the subject part of the question information is spliced with the first answer information to obtain "Zhang San is 22 years old this year".
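As a minimal sketch of producing the second answer information (the function name and the simple concatenation are illustrative; a real system may apply grammar-aware rewriting rather than plain splicing):

```python
def compose_second_answer(subject_info, first_answer):
    """Splice the subject/predicate stem of the question with the first answer
    to form a full-sentence second answer (simple concatenation sketch)."""
    return f"{subject_info} {first_answer}".strip()

# "How old is Zhang San this year?" -> subject info "Zhang San is",
# first answer "22 years old this year"
print(compose_second_answer("Zhang San is", "22 years old this year"))
```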
For example, as shown in fig. 12, the question information input by the user is segmented to obtain at least one word, an inverted-index recall is performed according to the at least one word to obtain at least one candidate multimedia information, and each candidate multimedia information is traversed. It is determined whether the question information belongs to a non-specified type or a specified type, content information is extracted from each candidate multimedia information according to the type to which the question information belongs, the obtained content information is ranked, the answer information and the multimedia information to which it belongs are obtained in ranking order, and the answer information and the multimedia information are displayed.
The embodiment of the application provides a question-answering method based on search, which can integrate a search scene with an intelligent question-answering scene, provide a question-answering function based on the search scene, search multimedia information matched with question information based on the search function, and on the basis, can also obtain answer information matched with the question information in the multimedia information based on the question-answering function, so that a user can visually check the answer information and the multimedia information corresponding to the question information, the information quantity is improved, the search requirement of the user is met, and the search effect is improved. Moreover, information included in the multimedia information is fully considered when the answer information is acquired, so that the answer information is determined from the multimedia information, and the accuracy of the acquired answer information is improved.
In addition, in the embodiment of the application, at least one candidate multimedia message matched with the question message is searched, answer information is selected from content information in the at least one candidate multimedia message, each candidate multimedia message obtained through searching can be regarded as information associated with the question message, each multimedia message associated with the question message is fully considered, the considered information amount is increased, and therefore the accuracy of obtaining the answer information matched with the question message and the accuracy of obtaining the multimedia message are improved.
Fig. 13 is a schematic diagram illustrating a structure of a search-based question answering apparatus according to an exemplary embodiment. Referring to fig. 13, the apparatus includes:
a request receiving unit 1301 configured to perform receiving of a search request sent by a terminal, where the search request carries question information input in a search interface of the terminal;
the searching unit 1302 is configured to perform searching according to the question information to obtain multimedia information matched with the question information and answer information matched with the question information, wherein the answer information is located in the multimedia information;
and an information sending unit 1303 configured to perform sending of the multimedia information and the answer information to a terminal, where the terminal is used for displaying the multimedia information and the answer information in a search interface.
The embodiment of the application provides a question answering device based on search, a search scene and an intelligent question answering scene can be fused, a question answering function based on the search scene is provided, multimedia information matched with question information is searched based on the search function, on the basis, answer information matched with the question information in the multimedia information can be obtained based on the question answering function, a user can visually check answer information and multimedia information corresponding to the question information, the information quantity is improved, the search requirement of the user is met, and the search effect is improved. Moreover, information included in the multimedia information is fully considered when the answer information is acquired, so that the answer information is determined from the multimedia information, and the accuracy of the acquired answer information is improved.
In some embodiments, referring to fig. 14, the search unit 1302 includes:
a searching subunit 13021 configured to perform a search for at least one candidate multimedia information matching the question information;
a content acquiring subunit 13022 configured to perform acquiring content information of at least one candidate multimedia information;
a selecting sub-unit 13023 configured to perform selecting answer information matching the question information from the acquired content information;
a determining subunit 13024 configured to execute determining the multimedia information corresponding to the selected answer information as the multimedia information matched with the question information.
In some embodiments, the searching subunit 13021 is configured to perform:
performing word segmentation processing on the problem information to obtain at least one word;
candidate multimedia information stored in correspondence with at least one word is searched.
In some embodiments, referring to fig. 14, the apparatus further comprises:
a recognition unit 1304 configured to perform speech recognition on at least one piece of multimedia information to obtain text information corresponding to each piece of multimedia information;
a word segmentation unit 1305 configured to perform word segmentation processing on text information corresponding to each multimedia information to obtain at least one word;
and the storage unit 1306 is configured to perform corresponding storage of each obtained word and the multimedia information.
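Taken together, the units 1304–1306 and the searching subunit 13021 amount to an inverted index built over recognized transcripts. A minimal sketch is given below; the names are illustrative, the `tokenize` helper stands in for the word segmentation step (a Chinese transcript would use a proper segmenter), and the speech-recognition output is assumed to be given as text:

```python
from collections import defaultdict

def tokenize(text):
    """Stand-in for word segmentation; whitespace split for this toy example."""
    return text.lower().split()

def build_index(transcripts):
    """transcripts: {multimedia_id: text recognized from its audio track}.
    Store each word together with the multimedia items it appears in."""
    index = defaultdict(set)
    for media_id, text in transcripts.items():
        for word in tokenize(text):
            index[word].add(media_id)
    return index

def recall(index, question):
    """Return candidate multimedia items stored against any word of the question."""
    candidates = set()
    for word in tokenize(question):
        candidates |= index.get(word, set())
    return candidates

index = build_index({
    "video_A": "Zhang San is 22 years old this year",
    "video_B": "the bridge opened in 1937",
})
print(recall(index, "how old is Zhang San"))  # {'video_A'}
```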
In some embodiments, the selecting sub-unit 13023 is configured to perform:
acquiring a first matching degree of the question information and each content information;
and acquiring answer information matched with the question information according to the first matching degree of the question information and each acquired content information.
In some embodiments, the selecting sub-unit 13023 is configured to perform:
acquiring a first matching degree of the question information and each content information;
acquiring a second matching degree of text information and problem information corresponding to each candidate multimedia information in at least one candidate multimedia information;
and acquiring answer information matched with the question information according to the first matching degree of the question information and each content information and the second matching degree of the text information corresponding to the candidate multimedia information to which each content information belongs and the question information.
In some embodiments, the content obtaining subunit 13022 is further configured to perform:
acquiring the question type to which the question information belongs, wherein the question type comprises a specified type or a non-specified type, and the specified type is a question type with fixed answer information;
and acquiring the content information of the at least one candidate multimedia information by adopting a processing mode corresponding to the question type.
In some embodiments, the content obtaining subunit 13022 is configured to execute calling a classification model to classify the question information, and obtain a question type to which the question information belongs.
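The classification model called here is not specified by the patent; as a rough stand-in, a small TF-IDF plus logistic-regression classifier could separate specified-type from non-specified-type questions. All training data and names below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labels: "specified" = fixed answer, "non_specified" = open-ended answer.
questions = [
    "how old is Zhang San this year",
    "how tall is the Eiffel Tower",
    "who founded the company",
    "how do I learn to swim",
    "why is the sky blue",
    "what should I cook for dinner",
]
labels = ["specified", "specified", "specified",
          "non_specified", "non_specified", "non_specified"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(questions, labels)

print(classifier.predict(["who wrote this book"])[0])
```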
In some embodiments, the content obtaining subunit 13022 is configured to perform:
and if the problem type of the problem information is a non-specified type, acquiring continuous statement information with reference quantity from text information corresponding to the candidate multimedia information for each candidate multimedia information in at least one candidate multimedia information, and combining the statement information with reference quantity to obtain the content information of the candidate multimedia information.
In some embodiments, the content obtaining subunit 13022 is configured to perform:
and if the problem type of the problem information is the specified type, identifying text information corresponding to the candidate multimedia information for each candidate multimedia information in the at least one candidate multimedia information to obtain the content information of the candidate multimedia information.
In some embodiments, the search unit 1302 is configured to perform: searching according to the question information to obtain multimedia information matched with the question information and first answer information matched with the question information, wherein the first answer information is located in the multimedia information; acquiring subject information of the question information, wherein the subject information is used for indicating a subject part and a predicate part of the question information; and combining the subject information with the first answer information to obtain second answer information;
and an information transmitting unit 1303 configured to perform transmitting the multimedia information and the second answer information to the terminal.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 15 is a schematic diagram illustrating a structure of a search-based question answering apparatus according to an exemplary embodiment. Referring to fig. 15, the apparatus includes:
a first receiving unit 1501 configured to perform receiving of question information input in a search interface;
a sending unit 1502 configured to execute sending a search request carrying question information to a server;
a second receiving unit 1503 configured to perform receiving of the multimedia information matched with the question information and the answer information matched with the question information sent by the server, wherein the answer information is located in the multimedia information;
a display unit 1504 configured to perform displaying the multimedia information and the answer information in the search interface.
In some embodiments, the answer information is displayed in a floating manner on the upper layer of the multimedia information; or,
the answer information is displayed in the brief section of the multimedia information.
Fig. 16 is a block diagram illustrating a terminal according to an exemplary embodiment. The terminal 1600 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 1600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, terminal 1600 includes: one or more processors 1601 and one or more memories 1602.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a display 1605, a camera assembly 1606, audio circuitry 1607, a positioning assembly 1608, and a power supply 1609.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1604 converts the electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1604 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to capture touch signals on or over the surface of the display screen 1605. The touch signal may be input to the processor 1601 as a control signal for processing. At this point, the display 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1605 may be one, providing the front panel of the terminal 1600; in other embodiments, the display screens 1605 can be at least two, respectively disposed on different surfaces of the terminal 1600 or in a folded design; in other embodiments, display 1605 can be a flexible display disposed on a curved surface or a folded surface of terminal 1600. Even further, the display 1605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1606 can also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing or inputting the electric signals to the radio frequency circuit 1604 to achieve voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and disposed at different locations of terminal 1600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1607 may also include a headphone jack.
The positioning component 1608 is configured to locate the current geographic location of the terminal 1600 to implement navigation or LBS (Location Based Service). The positioning component 1608 may be a positioning component based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
In some embodiments, terminal 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyro sensor 1612, pressure sensor 1613, fingerprint sensor 1614, optical sensor 1615, and proximity sensor 1616.
Acceleration sensor 1611 may detect acceleration in three coordinate axes of a coordinate system established with terminal 1600. For example, the acceleration sensor 1611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1601 may control the display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1612 can detect the body direction and rotation angle of the terminal 1600, and can cooperate with the acceleration sensor 1611 to collect the user's 3D actions on the terminal 1600. Based on the data collected by the gyro sensor 1612, the processor 1601 can implement the following functions: motion sensing (for example, changing the UI according to a tilting operation of the user), image stabilization during shooting, game control, and inertial navigation.
Pressure sensors 1613 may be disposed on the side frames of terminal 1600 and/or underlying display 1605. When the pressure sensor 1613 is disposed on the side frame of the terminal 1600, a user's holding signal of the terminal 1600 can be detected, and the processor 1601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed at the lower layer of the display 1605, the processor 1601 controls the operability control on the UI interface according to the pressure operation of the user on the display 1605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1614 is used to collect a user's fingerprint, and the user's identity is identified by the processor 1601 according to the fingerprint collected by the fingerprint sensor 1614, or is identified by the fingerprint sensor 1614 itself according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1614 may be disposed on the front, back, or side of the terminal 1600. When a physical button or a vendor Logo is provided on the terminal 1600, the fingerprint sensor 1614 may be integrated with the physical button or the vendor Logo.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the display screen 1605 based on the ambient light intensity collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the display screen 1605 is adjusted down. In another embodiment, the processor 1601 may also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1615.
The proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1600. The proximity sensor 1616 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually decreases, the processor 1601 controls the display 1605 to switch from the bright-screen state to the screen-off state; when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually increases, the processor 1601 controls the display 1605 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 17 is a schematic structural diagram of a server according to an exemplary embodiment, where the server 1700 may have a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 1701 and one or more memories 1702, where the memory 1702 stores at least one program code, and the at least one program code is loaded and executed by the processors 1701 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided; when instructions in the storage medium are executed by a processor of a server, the server is enabled to perform the steps performed by the server in the above search-based question-answering method.
In an exemplary embodiment, a computer program product is also provided; when the computer program product is executed by a processor of a server, the server is enabled to perform the steps performed by the server in the above search-based question-answering method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A search-based question-answering method, the method comprising:
receiving a search request sent by a terminal, wherein the search request carries question information input in a search interface of the terminal;
searching according to the question information to obtain multimedia information matched with the question information and answer information matched with the question information, wherein the answer information is positioned in the multimedia information;
and sending the multimedia information and the answer information to the terminal, wherein the terminal is used for displaying the multimedia information and the answer information in the search interface.
2. The method of claim 1, wherein the searching according to the question information to obtain the multimedia information matched with the question information and the answer information matched with the question information comprises:
searching at least one candidate multimedia information matched with the question information;
acquiring content information of the at least one candidate multimedia information;
selecting the answer information matched with the question information from the acquired content information;
and determining the multimedia information corresponding to the selected answer information as the multimedia information matched with the question information.
3. The method of claim 2, wherein searching for at least one candidate multimedia information matching the question information comprises:
performing word segmentation processing on the question information to obtain at least one word;
and searching candidate multimedia information stored corresponding to the at least one word.
4. The method of claim 3, wherein prior to searching for candidate multimedia information stored in correspondence with the at least one word, the method further comprises:
performing voice recognition on at least one multimedia message to obtain text information corresponding to each multimedia message;
performing word segmentation processing on the text information corresponding to each multimedia information to obtain at least one word;
and correspondingly storing each obtained word and the multimedia information to which the word belongs.
5. The method according to claim 2, wherein the selecting the answer information matching the question information from the obtained content information comprises:
acquiring a first matching degree of the question information and each content information;
and acquiring answer information matched with the question information according to the first matching degree of the question information and each piece of content information.
6. The method according to claim 2, wherein the selecting the answer information matching the question information from the obtained content information comprises:
acquiring a first matching degree of the question information and each content information;
acquiring a second matching degree of text information corresponding to each candidate multimedia information in the at least one candidate multimedia information and the question information;
and acquiring answer information matched with the question information according to the first matching degree of the question information and each content information and the second matching degree of the text information corresponding to the candidate multimedia information to which each content information belongs and the question information.
7. The method of claim 2, wherein the obtaining the content information of the at least one candidate multimedia information comprises:
obtaining the question type to which the question information belongs, wherein the question type comprises a specified type or a non-specified type, and the specified type is the question type with fixed answer information;
and acquiring the content information of the at least one candidate multimedia information by adopting a processing mode corresponding to the question type.
8. The method of claim 7, wherein obtaining the question type to which the question information belongs comprises:
and calling a classification model to classify the question information to obtain the question type to which the question information belongs.
9. The method according to claim 7, wherein the obtaining the content information of the at least one candidate multimedia information by using the processing manner corresponding to the question type includes:
if the question type to which the question information belongs is the non-specified type, obtaining, for each candidate multimedia information in the at least one candidate multimedia information, a reference number of consecutive sentence information items from text information corresponding to the candidate multimedia information, and combining the reference number of sentence information items to obtain content information of the candidate multimedia information.
10. The method according to claim 7, wherein the obtaining the content information of the at least one candidate multimedia information by using the processing manner corresponding to the question type includes:
and if the question type to which the question information belongs is the specified type, identifying text information corresponding to the candidate multimedia information for each candidate multimedia information in the at least one candidate multimedia information to obtain content information of the candidate multimedia information.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010983014.5A CN112115282A (en) | 2020-09-17 | 2020-09-17 | Question answering method, device, equipment and storage medium based on search |
| PCT/CN2021/107710 WO2022057435A1 (en) | 2020-09-17 | 2021-07-21 | Search-based question answering method, and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010983014.5A CN112115282A (en) | 2020-09-17 | 2020-09-17 | Question answering method, device, equipment and storage medium based on search |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN112115282A true CN112115282A (en) | 2020-12-22 |
Family
ID=73799922
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010983014.5A Pending CN112115282A (en) | 2020-09-17 | 2020-09-17 | Question answering method, device, equipment and storage medium based on search |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN112115282A (en) |
| WO (1) | WO2022057435A1 (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113901302A (en) * | 2021-09-29 | 2022-01-07 | 北京百度网讯科技有限公司 | Data processing method, apparatus, electronic device and medium |
| CN114168725A (en) * | 2021-12-08 | 2022-03-11 | 北京字节跳动网络技术有限公司 | Object question and answer processing method and device, electronic equipment, medium and product |
| WO2022057435A1 (en) * | 2020-09-17 | 2022-03-24 | 北京达佳互联信息技术有限公司 | Search-based question answering method, and storage medium |
| CN114372160A (en) * | 2022-01-12 | 2022-04-19 | 北京字节跳动网络技术有限公司 | Search request processing method and device, computer equipment and storage medium |
| CN114817584A (en) * | 2022-06-29 | 2022-07-29 | 阿里巴巴(中国)有限公司 | Information processing method, computer-readable storage medium, and electronic device |
| CN115858904A (en) * | 2022-11-28 | 2023-03-28 | 北京字跳网络技术有限公司 | Search result display method, image search processing method and device |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104657043A (en) * | 2013-11-19 | 2015-05-27 | 中兴通讯股份有限公司 | Multimedia data backup method, user terminal and synchronous device |
| CN109086448B (en) * | 2018-08-20 | 2021-04-30 | 广东小天才科技有限公司 | Voice question searching method based on gender characteristic information and family education equipment |
| CN112115282A (en) * | 2020-09-17 | 2020-12-22 | 北京达佳互联信息技术有限公司 | Question answering method, device, equipment and storage medium based on search |
- 2020-09-17: CN application CN202010983014.5A filed (publication CN112115282A, status: Pending)
- 2021-07-21: PCT application PCT/CN2021/107710 filed (publication WO2022057435A1, status: Ceased)
Patent Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101996195A (en) * | 2009-08-28 | 2011-03-30 | 中国移动通信集团公司 | Searching method and device of voice information in audio files and equipment |
| CN103425640A (en) * | 2012-05-14 | 2013-12-04 | 华为技术有限公司 | Multimedia questioning-answering system and method |
| US20160350304A1 (en) * | 2015-05-27 | 2016-12-01 | Google Inc. | Providing suggested voice-based action queries |
| CN106599028A (en) * | 2016-11-02 | 2017-04-26 | 华南理工大学 | Book content searching and matching method based on video image processing |
| CN108829765A (en) * | 2018-05-29 | 2018-11-16 | 平安科技(深圳)有限公司 | A kind of information query method, device, computer equipment and storage medium |
| CN109165285A (en) * | 2018-08-24 | 2019-01-08 | 北京小米智能科技有限公司 | Handle the method, apparatus and storage medium of multi-medium data |
| US20200074342A1 (en) * | 2018-08-29 | 2020-03-05 | Hitachi, Ltd. | Question answering system, question answering processing method, and question answering integrated system |
| CN111125384A (en) * | 2018-11-01 | 2020-05-08 | 阿里巴巴集团控股有限公司 | Multimedia answer generation method and device, terminal equipment and storage medium |
| CN109684492A (en) * | 2018-12-28 | 2019-04-26 | 北京爱奇艺科技有限公司 | A kind of multimedia file lookup method, device and electronic equipment |
| CN109949723A (en) * | 2019-03-27 | 2019-06-28 | 浪潮金融信息技术有限公司 | A kind of device and method carrying out Products Show by Intelligent voice dialog |
| CN110569419A (en) * | 2019-07-31 | 2019-12-13 | 平安科技(深圳)有限公司 | question-answering system optimization method and device, computer equipment and storage medium |
| CN111414498A (en) * | 2020-04-29 | 2020-07-14 | 北京字节跳动网络技术有限公司 | Multimedia information recommendation method and device and electronic equipment |
Non-Patent Citations (4)
| Title |
|---|
| Li Jia et al.: "Design of an intelligent question answering system for digital multimedia tourism consultation information", Modern Electronics Technique * |
| Yang Ziwu: "Content-based multimedia retrieval in digital libraries", Journal of Library Science * |
| Xiao Qiaoping: "Research on the Theory and Practice of Foreign Language Learning Strategies", 30 June 2013, Taiyuan: Shanxi People's Publishing House * |
| Ma Hailong et al.: "Introduction to Smart Tourism", 30 April 2020, Yinchuan: Ningxia People's Education Press * |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022057435A1 (en) * | 2020-09-17 | 2022-03-24 | 北京达佳互联信息技术有限公司 | Search-based question answering method, and storage medium |
| CN113901302A (en) * | 2021-09-29 | 2022-01-07 | 北京百度网讯科技有限公司 | Data processing method, apparatus, electronic device and medium |
| CN114168725A (en) * | 2021-12-08 | 2022-03-11 | 北京字节跳动网络技术有限公司 | Object question and answer processing method and device, electronic equipment, medium and product |
| CN114372160A (en) * | 2022-01-12 | 2022-04-19 | 北京字节跳动网络技术有限公司 | Search request processing method and device, computer equipment and storage medium |
| CN114372160B (en) * | 2022-01-12 | 2023-08-15 | 抖音视界有限公司 | Search request processing method and device, computer equipment and storage medium |
| CN114817584A (en) * | 2022-06-29 | 2022-07-29 | 阿里巴巴(中国)有限公司 | Information processing method, computer-readable storage medium, and electronic device |
| CN114817584B (en) * | 2022-06-29 | 2022-11-15 | 阿里巴巴(中国)有限公司 | Information processing method, computer-readable storage medium, and electronic device |
| CN115858904A (en) * | 2022-11-28 | 2023-03-28 | 北京字跳网络技术有限公司 | Search result display method, image search processing method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2022057435A1 (en) | 2022-03-24 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201222 |