CN110019728B - Automatic interaction method, storage medium and terminal - Google Patents
- Publication number
- CN110019728B (application CN201711420428.1A)
- Authority
- CN
- China
- Prior art keywords
- answer
- answers
- user
- question
- interaction method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Animal Behavior & Ethology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
An automatic interaction method, a storage medium and a terminal, wherein the automatic interaction method comprises the following steps: acquiring a user question; obtaining a plurality of answers to the user question in at least two ways; screening the plurality of answers according to source parameters and prediction probabilities of the answers, wherein the source parameter and prediction probability of each answer are determined according to the way in which the answer is obtained, and the source parameters of an answer include the time consumed to generate the answer; the screening specifically comprises: determining an importance weight for each answer according to its source parameters; calculating the product of each answer's importance weight and prediction probability; dividing that product by the answer's generation time to obtain the answer's effective score; and screening the answers according to their effective scores; and outputting the optimal answer obtained by the screening. By this technical scheme, the accuracy and continuity of responses in the question-answer interaction process can be improved.
Description
Technical Field
The present invention relates to the field of natural language processing technologies, and in particular, to an automatic interaction method, a storage medium, and a terminal.
Background
In the field of artificial intelligence applications, more and more intelligent question-answering products are appearing. In general, the accuracy of replies to user questions and the speed of those replies are important factors affecting the quality of an intelligent question-answering product.
In the prior art there are various question-answering approaches; commonly used ones are rule-based, template-matching-based, retrieval-based and generation-based approaches. The retrieval-based approach generates an answer by searching the existing knowledge points in a knowledge base, where the knowledge base generally comprises a plurality of knowledge points and each knowledge point comprises a standard question, corresponding extended questions and an answer. The generation-based answer feedback mechanism automatically generates an answer composed of a word sequence from the information currently input by the user.
However, in the rule-based, template-matching and retrieval-based approaches, the templates, examples or databases have limitations and lack effective language understanding, so the accuracy and flexibility of the answers are somewhat deficient; in the generation-based approach, a model needs to be built and trained, the model complexity is high, and the stability of the answer-acquisition process is low.
Disclosure of Invention
The technical problem solved by the invention is how to improve the response accuracy and continuity in the question-answer interaction process.
In order to solve the above technical problems, an embodiment of the present invention provides an automatic interaction method, including:
acquiring a user question;
obtaining a plurality of answers to the user question in at least two ways, the ways including a knowledge base, a knowledge graph, or a learning model;
screening the plurality of answers according to source parameters and prediction probabilities of the answers, wherein the source parameter and prediction probability of each answer are determined according to the way in which the answer is obtained, and the source parameters of an answer include the time consumed to generate the answer; the screening specifically comprises: determining an importance weight for each answer according to its source parameters; calculating the product of each answer's importance weight and prediction probability; dividing that product by the answer's generation time to obtain the answer's effective score; and screening the answers according to their effective scores;
and outputting the optimal answer obtained by screening.
Optionally, the predictive probability of the answer is determined in one or more of the following ways:
if the answer is from the knowledge base, calculating the semantic similarity between the user question and the standard question and/or the extended question in the knowledge base as the prediction probability of the answer;
if the answer is from the knowledge graph, determining the prediction probability of the answer according to the reliability of the answer determined by the knowledge graph;
if the answer is from a learning model, a predictive probability of the answer is determined based on a sum of conditional probabilities between adjacent terms of the answer.
Optionally, the source parameters include a priority, and the answers from the knowledge base are higher in priority than the answers from the knowledge graph, and the answers from the knowledge graph are higher in priority than the answers from the learning model.
Optionally, the screening the plurality of answers according to the source parameter and the prediction probability of each answer includes:
judging, in order of answer priority, whether the prediction probability of each answer is larger than a set threshold, and taking the first answer found to have a prediction probability larger than the set threshold as the optimal answer.
Optionally, the obtaining the plurality of answers to the user question in at least two ways includes obtaining the plurality of answers in any two or three of:
Calculating the semantic similarity between the user questions and standard questions and/or expansion questions in a knowledge base, and determining a first answer from the knowledge base;
matching the user questions with knowledge in a knowledge graph and determining a second answer from the knowledge graph;
and inputting the user questions into a learning model, and determining the output of the learning model as a third answer.
Optionally, the user question is speech; the acquiring a user question comprises converting the user question into text, and the outputting the optimal answer obtained by screening comprises converting the obtained optimal answer into speech and then sending the speech to the user.
Optionally, the step of acquiring a user question is performed in response to received switching indication information.
Optionally, an intention recognition result is obtained by performing intention recognition on the user question using a pre-trained intention classification model.
The embodiment of the invention also discloses a storage medium on which computer instructions are stored, and the computer instructions, when run, perform the steps of the above automatic interaction method.
The embodiment of the invention also discloses a terminal which comprises a memory and a processor, wherein the memory stores computer instructions capable of running on the processor, and the processor executes the steps of the automatic interaction method when running the computer instructions.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
The technical scheme of the invention acquires a user question; obtains a plurality of answers to the user question in at least two ways; screens the answers according to the source parameters and prediction probabilities of the answers, where the source parameter and prediction probability of each answer are determined according to the way in which the answer is obtained; and outputs the optimal answer obtained by the screening. The technical scheme uses at least two ways to obtain a plurality of answers and then screens the optimal answer from them for output. Because the ways of obtaining the answers differ, the richness of the answers is improved from multiple angles, the situation in which no answer can be obtained in a single way is avoided, the sustainability of the question-answer interaction with the user is ensured, and the user experience is improved. In addition, because the optimal answer is selected from the plurality of answers according to the source parameters and prediction probabilities of the answers, the accuracy of the reply to the user question is ensured. When determining the optimal answer, the way an answer was obtained, its prediction probability, and the time consumed to generate it are all taken into account; the first two affect the accuracy of the answer, while the generation time affects the timeliness of the interaction, so combining these factors balances the accuracy and timeliness of the interaction with the user and further improves the user experience. Furthermore, because the answers are obtained from the knowledge base, the knowledge graph and the learning model based on different technical principles, the richness of the answers is further improved; and because the answers to the user question obtained through these three approaches have high accuracy, the optimal answer obtained by further screening is also highly accurate, further improving the response accuracy of the question-answer interaction process.
Drawings
FIG. 1 is a flow chart of an automatic interaction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of one embodiment of step S102 shown in FIG. 1;
FIG. 3 is a flow chart of one embodiment of step S103 shown in FIG. 1;
FIG. 4 is a flowchart of another embodiment of step S103 shown in FIG. 1.
Detailed Description
As described in the background, in the rule-based, template-matching and retrieval-based approaches, the templates, examples or databases have limitations and lack effective language understanding, so the accuracy and flexibility of the answers are somewhat deficient; in the generation-based approach, a model needs to be built and trained, the model complexity is high, and the stability of the answer-acquisition process is low.
The technical scheme of the invention uses at least two ways to obtain a plurality of answers and then screens the optimal answer from them for output. Because the ways of obtaining the answers differ, the richness of the answers is improved from multiple angles, the situation in which no answer can be obtained in a single way is avoided, the sustainability of the question-answer interaction with the user is ensured, and the user experience is improved. In addition, because the optimal answer is selected from the plurality of answers according to the source parameters and prediction probabilities of the answers, the accuracy of the reply to the user question is ensured.
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
FIG. 1 is a flow chart of an automatic interaction method according to an embodiment of the present invention.
The automatic interaction method shown in fig. 1 may be used in a question-answering system, and the automatic interaction method may include the steps of:
Step S101: acquiring a user question;
Step S102: obtaining a plurality of answers to the user question in at least two ways;
Step S103: screening the answers according to source parameters and prediction probabilities of the answers, wherein the source parameters and the prediction probabilities of the answers are determined according to the obtaining mode of the answers;
step S104: and outputting the optimal answer obtained by screening.
In this embodiment, step S101 may obtain the user question in any manner; for example, the user question may be collected directly from an external source, or obtained through an interface call. The user question carries semantics and may, in particular, take the form of speech, text, or the like.
In a specific implementation of step S102, at least two answers may be obtained in at least two ways. Specifically, there are various ways to obtain an answer to the user question: for example, the user question may be matched against a knowledge base, and the answer corresponding to the matched standard question is used as the answer to the user question; a search algorithm may also be used to search an intention space and the retrieved answer used as the answer to the user question, where the intention space can be preset and continuously supplemented through online learning; or a deep learning model can be used to encode, recognize and decode the user question and output a corresponding answer.
Because the answers are acquired in a plurality of ways, and each way obtains answers on a different principle, the richness of the answers can be improved.
Furthermore, when an answer to the user question is obtained in only a single way, it may happen that no answer can be obtained; for example, the knowledge base may contain no answer matching the user question, or the knowledge graph may contain no entity matching the user question. By acquiring answers in multiple ways, this situation can be avoided and the stability of answer acquisition ensured.
In order to determine a more accurate answer to the user question, the plurality of answers are screened in step S103. The basis for the screening is the source parameter and prediction probability of each answer.
The source parameter and prediction probability of an answer are determined according to the way in which the answer is obtained. Because answers obtained in different ways differ in accuracy and importance, the source parameters are used to characterize the accuracy and importance of the different answers; when the answers are screened, answers of higher accuracy or importance can then be selected using the source parameters, improving the accuracy of the optimal answer. For example, the accuracy of answers obtained from the knowledge base is higher than that of answers from the knowledge graph, so the source parameters of answers obtained from the knowledge base differ from those of answers from the knowledge graph.
The predictive probability of an answer may characterize the accuracy of the answer in replying to the user question. The greater the prediction probability, the higher the accuracy of its corresponding answer. Therefore, the answers with higher accuracy can be screened out by utilizing the prediction probability of the answers, so that the accuracy of the optimal answers is improved.
Further, in step S104, the optimal answer obtained by the screening in step S103 is output. In particular, the optimal answer may be presented directly to the user as output, e.g., as text or speech; other operations, such as a series of follow-up operations, may also be performed based on the optimal answer to achieve interaction with the user.
The embodiment of the invention obtains a plurality of answers in at least two ways and then screens the optimal answer from the plurality of answers for output. Because the ways of obtaining the answers differ, the richness of the answers can be improved from multiple angles, the situation in which no answer can be obtained in a single way can be avoided, the sustainability of the question-answer interaction with the user is ensured, and the user experience is improved. In addition, the optimal answer is selected from the plurality of answers according to the source parameters and prediction probabilities of the answers, so the accuracy of the reply to the user question can be ensured.
The automatic interaction method in this embodiment may be executed independently, without being directed by other computer instructions.
In a preferred embodiment of the invention, the at least two ways are selected from a knowledge base, a knowledge graph and a learning model.
In this embodiment, the knowledge base may be used to obtain an answer. The knowledge base includes questions and their answers, and an answer is obtained by matching the user question against the questions in the knowledge base.
This embodiment may also use the knowledge graph to obtain an answer. The knowledge graph is a semantic network comprising nodes and edges connecting the nodes; nodes represent entities or concepts, and edges represent various semantic relationships between entities/concepts. Specifically, the data in the knowledge graph is stored in the form of triples, namely <entity A, relationship, entity B>, for example <Liu Dehua, birth place, Hong Kong>. If the user question is "Where is the birth place of Liu Dehua?", the answer "Hong Kong" is obtained by using the knowledge graph, with a prediction probability of 0.9986.
It should be noted that, the network structure of the knowledge graph may be any implementation manner in the prior art, which is not limited by the embodiment of the present invention.
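As a toy illustration of the triple lookup just described (not the patent's implementation), a minimal sketch might look as follows; the triple store and matching rule are simplified assumptions:

```python
# Minimal triple store: (entity, relation) -> (answer entity, reliability score)
TRIPLES = {("Liu Dehua", "birth place"): ("Hong Kong", 0.9986)}

def query_triples(entity: str, relation: str):
    """Return (answer, reliability) if the knowledge graph contains the triple, else None."""
    return TRIPLES.get((entity, relation))

print(query_triples("Liu Dehua", "birth place"))  # ('Hong Kong', 0.9986)
```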
The embodiment may also obtain an answer using a learning model. The learning model may be a deep learning model or a machine learning model, for example, a long-short term memory model (long-short term memory, LSTM). For an input question, the learning model may automatically generate an answer from the neural network.
Further, the predictive probability of an answer may be determined in one or more of the following ways: if the answer is from the knowledge base, calculating the semantic similarity between the user question and the standard question and/or the extended question in the knowledge base as the prediction probability of the answer; if the answer is from the knowledge graph, determining the prediction probability of the answer according to the reliability of the answer determined by the knowledge graph; if the answer is from a learning model, a predictive probability of the answer is determined based on a sum of conditional probabilities between adjacent terms of the answer.
In specific implementation, when the knowledge base is matched to obtain an answer, the semantic similarity between the user question and the standard question and/or extended question in the knowledge base can be used to represent the prediction probability of the obtained answer; when the knowledge graph is used to acquire an answer, the knowledge graph can score the reliability of the answer it determines, so the prediction probability of the answer can be determined from that reliability score; when the answer is obtained by the generation approach based on a deep learning model, the prediction probability of the answer can be the sum of the conditional probabilities between adjacent words in the answer.
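A minimal sketch of these three cases, assuming a simple Answer record (the fields and helper values below are illustrative assumptions rather than the patent's data structures):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Answer:
    text: str
    source: str                     # "knowledge_base" | "knowledge_graph" | "learning_model"
    similarity: float = 0.0         # semantic similarity to the matched standard/extended question
    kg_confidence: float = 0.0      # reliability score from the knowledge-graph lookup
    token_probs: List[float] = field(default_factory=list)  # conditional probabilities between adjacent words

def prediction_probability(ans: Answer) -> float:
    if ans.source == "knowledge_base":
        return ans.similarity
    if ans.source == "knowledge_graph":
        return ans.kg_confidence
    if ans.source == "learning_model":
        # the text above uses the sum of conditional probabilities between adjacent words
        return sum(ans.token_probs)
    raise ValueError(f"unknown source: {ans.source}")
```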
In one embodiment of the present invention, as shown in FIG. 2, step S102 may include at least two of the following steps:
Step S201: calculating the semantic similarity between the user questions and standard questions and/or expansion questions in a knowledge base, and determining a first answer from the knowledge base;
Step S202: matching the user questions with knowledge in a knowledge graph and determining a second answer from the knowledge graph;
step S203: and inputting the user questions into a learning model, and determining the output of the learning model as a third answer.
In this embodiment, when an answer is obtained using the knowledge base, the user question is matched against the standard questions and/or extended questions in the knowledge base; if there is a standard question or extended question whose semantic similarity to the user question reaches a set threshold, the answer corresponding to that standard or extended question is used as the first answer.
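Reusing the Answer record sketched above, a toy version of this knowledge-base match might look as follows; the sample knowledge base and the word-overlap (Jaccard) similarity are deliberately simple stand-ins for a real knowledge base and a real semantic-similarity model:

```python
KNOWLEDGE_BASE = [
    {"standard": "how do I reset my password",
     "extended": ["I forgot my password"],
     "answer": "Use the password-reset link on the login page."},
]

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def query_knowledge_base(user_question: str, threshold: float = 0.5):
    # find the best-matching standard/extended question; return its stored
    # answer as the first answer only when the similarity reaches the threshold
    best_sim, best_answer = 0.0, None
    for point in KNOWLEDGE_BASE:
        for q in [point["standard"], *point["extended"]]:
            sim = jaccard(user_question, q)
            if sim > best_sim:
                best_sim, best_answer = sim, point["answer"]
    if best_sim >= threshold:
        return Answer(text=best_answer, source="knowledge_base", similarity=best_sim)
    return None
```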
When an answer is obtained using the knowledge graph, the user question is matched against the triple data in the knowledge graph; if a triple matching the user question exists, the corresponding entity in that triple is used as the second answer.
When the learning model is utilized to acquire the answer, the user question is input into the learning model, and the learning model can automatically generate the answer for the user question and take the answer as a third answer.
Further, the source parameters include a priority, preferably, the priority of the answers from the knowledge base is higher than the priority of the answers from the knowledge graph, and the priority of the answers from the knowledge graph is higher than the priority of the answers from the learning model.
In this embodiment, because the questions and answers in the knowledge base are preconfigured, the accuracy of answers obtained using the knowledge base is high. The learning model needs to be trained in advance, its training effect is influenced by the corpus used for training, and the accuracy of answers it generates is comparatively low. The accuracy of answers obtained using the knowledge graph lies between the two. Therefore, the priorities of the source parameters corresponding to the three ways are, from high to low: knowledge base, knowledge graph, learning model.
It should be noted that, according to different practical application scenarios, the answer requirements may also be different, so that the priority of the source parameters may also be adaptively configured according to the specific application scenario, which is not limited in the embodiment of the present invention.
In one embodiment of the present invention, step S103 shown in FIG. 1 may include the following step: judging, in order of answer priority, whether the prediction probability of each answer is larger than a set threshold, and taking the first answer found to have a prediction probability larger than the set threshold as the optimal answer.
In this embodiment, the answer with the higher priority is considered first. That is, it is first determined whether the prediction probability of the answer with the highest priority is greater than the set threshold; if so, that answer is taken as the optimal answer. Otherwise, it is determined whether the prediction probability of the answer at the next priority is greater than the set threshold, and so on until the optimal answer is screened out. Because there are a plurality of answers, an optimal answer can be obtained.
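Building on the sketches above, this priority-then-threshold selection could be written as follows (the priority values and the 0.8 threshold are illustrative assumptions):

```python
PRIORITY = {"knowledge_base": 0, "knowledge_graph": 1, "learning_model": 2}

def select_by_priority(candidates, threshold: float = 0.8):
    # examine answers in priority order; return the first one whose
    # prediction probability exceeds the set threshold
    for ans in sorted(candidates, key=lambda a: PRIORITY[a.source]):
        if prediction_probability(ans) > threshold:
            return ans
    return None  # no candidate clears the threshold
```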
As shown in fig. 3, in another embodiment of the present invention, step S103 shown in fig. 1 may include the following steps:
step S301: determining importance weight of each answer according to the source parameter of the answer;
step S302: determining the accuracy weight of each answer according to the prediction probability of the answer;
step S303: calculating the effective score of each answer by using the importance weight and the accuracy weight of each answer;
Step S304: and screening the answers according to the effective scores of the answers.
Compared with the previous embodiment, which considers the source parameters of the answers first and their prediction probabilities second, this embodiment considers the source parameters and the prediction probabilities of the answers simultaneously.
In steps S301 and S302 of this embodiment, the importance weight of each answer is determined from its source parameters; that is, different ways of obtaining answers may carry different importance weights, and the accuracy of answers obtained in a given way is positively correlated with that way's importance weight. The accuracy weight of each answer is determined from its prediction probability, and the magnitude of the accuracy weight is positively correlated with the magnitude of the prediction probability.
In steps S303 and S304 of this embodiment, the effective score of each answer is calculated using its importance weight and accuracy weight. The effective score of an answer comprehensively characterizes its accuracy: the higher the effective score, the higher the accuracy of the answer. The optimal answer is the answer with the highest effective score.
Specifically, the effective score may be obtained by any executable mathematical operation of the importance weight and the accuracy weight, and may be a sum of the importance weight and the accuracy weight, or may be a product of the importance weight and the accuracy weight, which is not limited in the embodiment of the present invention.
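A minimal sketch of this variant, with illustrative importance weights following the knowledge base > knowledge graph > learning model ordering described earlier, and with the product chosen as the combining operation (a sum would be equally valid under the paragraph above):

```python
IMPORTANCE = {"knowledge_base": 1.0, "knowledge_graph": 0.8, "learning_model": 0.6}

def effective_score_fig3(ans: Answer) -> float:
    importance = IMPORTANCE[ans.source]
    accuracy = prediction_probability(ans)  # accuracy weight grows with the prediction probability
    return importance * accuracy

def select_best_fig3(candidates):
    # the optimal answer is the candidate with the highest effective score
    return max(candidates, key=effective_score_fig3, default=None)
```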
Further, the source parameters of an answer include the time consumed to generate the answer. Specifically, the generation time of an answer reflects the response speed, which affects the timeliness of replies to user questions: the longer it takes to generate an answer, the longer the user waits, which degrades the user experience. Thus, the time consumed to generate an answer can be used in the selection of the optimal answer.
As shown in fig. 4, in yet another embodiment of the present invention, step S103 shown in fig. 1 may include the following steps:
Step S401: determining an importance weight corresponding to each answer according to the source parameter of the answer;
Step S402: calculating the product of the importance weight of each answer and the prediction probability;
step S403: dividing the product for each answer by the answer's generation time to obtain the effective score of each answer;
step S404: and screening the answers according to the effective scores of the answers.
As in the embodiment shown in FIG. 3, the source parameters and the prediction probabilities of the answers are considered simultaneously; in addition, this embodiment of the invention takes the time consumed to generate an answer into account when determining the optimal answer.
In implementation, the generation time is inversely related to the effective score: the longer an answer takes to generate, the lower its effective score. When the effective score of an answer is calculated in step S403, it is obtained by dividing the product of the importance weight and the prediction probability by the generation time.
More specific implementations of step S401, step S402, and step S404 can refer to step S301, step S302, and step S304 in the embodiment shown in fig. 3.
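Continuing the sketch above, the FIG. 4 scoring could be written as follows (the generation time is assumed to be measured in seconds and supplied alongside each answer):

```python
def effective_score_fig4(ans: Answer, gen_time_seconds: float) -> float:
    # product of importance weight and prediction probability, divided by the
    # generation time, so slower answers receive lower effective scores
    return IMPORTANCE[ans.source] * prediction_probability(ans) / max(gen_time_seconds, 1e-6)

def select_best_fig4(scored):
    # `scored` is a list of (answer, generation_time_seconds) pairs
    best = max(scored, key=lambda pair: effective_score_fig4(*pair), default=None)
    return best[0] if best else None
```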
In a specific application scenario of the present invention, the user question may be speech. Step S101 shown in FIG. 1 may then include converting the user question into text; that is, the voice data is converted into text before the subsequent steps are performed. Converting speech to text makes it possible, in the subsequent steps, to calculate the semantic similarity between the user question and the standard questions and/or extended questions.
Further, step S104 shown in FIG. 1 may include converting the obtained optimal answer into speech and then sending the speech to the user. In other words, to keep the interaction with the user consistent, when the user interacts by voice, the optimal answer fed back to the user is also delivered by voice; therefore, when the optimal answer is in text form, it is converted into speech before being output to the user.
In another specific application scenario of the present invention, the step of acquiring the user question is performed in response to received switching indication information.
In this embodiment, the automatic interaction method may be used in combination with other interaction processes. That is, the steps of this embodiment's response method are performed only when another interaction process indicates that they should be performed.
Further, the switching indication information is sent when matching the user question against a service knowledge base fails. In domain-specific question-answer interaction based on a service knowledge base, the specialized nature of the questions and answers may mean that no answer can be obtained; by sending the switching indication information to switch to the response method steps of this embodiment, an answer is guaranteed, the continuity of the interaction is maintained, and the user experience is improved. For example, when matching the user question against the service knowledge base fails, the response method steps of this embodiment are executed, the obtained optimal answer is "Hello", and the method enters a chit-chat mode, thereby keeping the interaction continuous.
Further, the switching indication information is sent when the intention recognition result of the user question successfully matches a preset intention classification.
In this embodiment, unlike the case in which switching indication information is sent after matching against the service knowledge base fails, the user question is first classified before it is matched against the service knowledge base; if the intention recognition result of the user question successfully matches the preset intention classification, the switching indication information is sent to indicate that the response method steps of this embodiment should be executed. For example, if the preset intention classification is the chit-chat category and the intention recognition result of the user question is the chit-chat category, switching indication information is sent to trigger the response method steps of this embodiment.
Further, the intention recognition result is obtained by performing intention recognition on the user question using a pre-trained intention classification model.
In specific implementation, the intention classification model can be trained in advance on accumulated question-and-answer corpora. The trained intention classification model can then perform intention recognition on the user question. For example, if the user inputs "I am feeling a bit down", the intention classification model may classify it as chit-chat; switching indication information is then sent to trigger the response method steps of this embodiment, the machine chit-chat mode is entered, and a comforting optimal answer is output, for example "Xiao-i is here to comfort you. Smile, people who smile are the most charming!"
It should be noted that, the intent classification model may use any available classification algorithm, which is not limited in this embodiment of the present invention.
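Since no particular classifier is fixed here, the following is an illustration only: a simple TF-IDF plus logistic-regression intent classifier that routes chit-chat questions to this method, trained on a toy stand-in corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_questions = ["I am feeling a bit down", "tell me a joke", "how do I reset my password"]
train_intents   = ["chitchat", "chitchat", "business"]

intent_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_clf.fit(train_questions, train_intents)

def should_switch_to_chitchat(user_question: str) -> bool:
    # send switching indication information when the predicted intent
    # matches the preset chit-chat classification
    return intent_clf.predict([user_question])[0] == "chitchat"
```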
The embodiment of the invention also discloses a storage medium, on which computer instructions are stored, which when run can execute the steps of the automatic interaction method shown in any one of the embodiments of fig. 1 to 4. The storage medium may include ROM, RAM, magnetic or optical disks, and the like. The storage medium may also include a non-volatile memory (non-volatile) or a non-transitory memory (non-transitory) or the like.
The embodiment of the invention also discloses a terminal, which can comprise a memory and a processor, wherein the memory stores computer instructions capable of running on the processor. The processor, when executing the computer instructions, may perform the steps of the automated interaction method shown in any of the embodiments of fig. 1-4. The terminal comprises, but is not limited to, a mobile phone, a computer, a tablet personal computer and other terminal equipment.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention; therefore, the scope of protection of the invention should be determined by the appended claims.
Claims (8)
1. An automatic interaction method, comprising:
acquiring a user question;
obtaining a plurality of answers to the user question in at least two ways, the ways including a knowledge base, a knowledge graph, or a learning model;
screening the plurality of answers according to source parameters and prediction probabilities of the answers, wherein the source parameter and prediction probability of each answer are determined according to the way in which the answer is obtained, and the source parameters of an answer include the time consumed to generate the answer; the screening specifically comprises: determining an importance weight for each answer according to its source parameters; calculating the product of each answer's importance weight and prediction probability; dividing that product by the answer's generation time to obtain the answer's effective score; and screening the answers according to their effective scores;
and outputting the optimal answer obtained by screening.
2. The automated interaction method of claim 1, wherein the predictive probability of an answer is determined by one or more of the following:
if the answer is from the knowledge base, calculating the semantic similarity between the user question and the standard question and/or the extended question in the knowledge base as the prediction probability of the answer;
if the answer is from the knowledge graph, determining the prediction probability of the answer according to the reliability of the answer determined by the knowledge graph;
if the answer is from a learning model, a predictive probability of the answer is determined based on a sum of conditional probabilities between adjacent terms of the answer.
3. The automated interaction method of claim 1, wherein the filtering the plurality of answers based on the source parameters and the predictive probabilities of the respective answers comprises:
judging, in order of answer priority, whether the prediction probability of each answer is larger than a set threshold, and taking the first answer found to have a prediction probability larger than the set threshold as the optimal answer.
4. The automated interaction method of claim 1, wherein the obtaining a plurality of answers to the user question in at least two ways comprises obtaining the plurality of answers in any two or three of:
Calculating the semantic similarity between the user questions and standard questions and/or expansion questions in a knowledge base, and determining a first answer from the knowledge base;
matching the user questions with knowledge in a knowledge graph and determining a second answer from the knowledge graph;
and inputting the user questions into a learning model, and determining the output of the learning model as a third answer.
5. The automatic interaction method of claim 1, wherein the user question is speech; the acquiring a user question comprises converting the user question into text; and the outputting the optimal answer obtained by screening comprises converting the obtained optimal answer into speech and then sending the speech to the user.
6. The automatic interaction method of claim 1, wherein the step of acquiring a user question is performed in response to received switching indication information.
7. A storage medium having stored thereon computer instructions which, when run, perform the steps of the automatic interaction method of any of claims 1 to 6.
8. A terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, wherein the processor, when executing the computer instructions, performs the steps of the automatic interaction method of any of claims 1 to 6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711420428.1A CN110019728B (en) | 2017-12-25 | 2017-12-25 | Automatic interaction method, storage medium and terminal |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711420428.1A CN110019728B (en) | 2017-12-25 | 2017-12-25 | Automatic interaction method, storage medium and terminal |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110019728A (en) | 2019-07-16 |
| CN110019728B (en) | 2024-07-26 |
Family
ID=67187004
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711420428.1A Active CN110019728B (en) | 2017-12-25 | 2017-12-25 | Automatic interaction method, storage medium and terminal |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110019728B (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110688849B (en) * | 2019-09-03 | 2023-09-15 | 平安科技(深圳)有限公司 | Progressive reading method, device, equipment and readable storage medium |
| WO2021168650A1 (en) * | 2020-02-25 | 2021-09-02 | 京东方科技集团股份有限公司 | Question query apparatus and method, device, and storage medium |
| CN112632239A (en) * | 2020-12-11 | 2021-04-09 | 南京三眼精灵信息技术有限公司 | Brain-like question-answering system based on artificial intelligence technology |
| CN113657075B (en) * | 2021-10-18 | 2022-02-08 | 腾讯科技(深圳)有限公司 | Answer generation method and device, electronic equipment and storage medium |
| CN113722465B (en) * | 2021-11-02 | 2022-01-21 | 北京卓建智菡科技有限公司 | Intention identification method and device |
| CN115577120B (en) * | 2022-10-09 | 2024-08-20 | 华院计算技术(上海)股份有限公司 | Digital human interaction method and device for continuous casting production and computing equipment |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104050256A (en) * | 2014-06-13 | 2014-09-17 | 西安蒜泥电子科技有限责任公司 | Initiative study-based questioning and answering method and questioning and answering system adopting initiative study-based questioning and answering method |
| CN106919655A (en) * | 2017-01-24 | 2017-07-04 | 网易(杭州)网络有限公司 | A kind of answer provides method and apparatus |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2622599B1 (en) * | 2010-09-28 | 2019-10-23 | International Business Machines Corporation | Evidence diffusion among candidate answers during question answering |
| US10565508B2 (en) * | 2014-12-12 | 2020-02-18 | International Business Machines Corporation | Inferred facts discovered through knowledge graph derived contextual overlays |
| US9875296B2 (en) * | 2015-03-25 | 2018-01-23 | Google Llc | Information extraction from question and answer websites |
| US10817790B2 (en) * | 2016-05-11 | 2020-10-27 | International Business Machines Corporation | Automated distractor generation by identifying relationships between reference keywords and concepts |
| CN107220380A (en) * | 2017-06-27 | 2017-09-29 | 北京百度网讯科技有限公司 | Question and answer based on artificial intelligence recommend method, device and computer equipment |
- 2017-12-25: Application CN201711420428.1A filed in China; granted as CN110019728B (status: Active)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104050256A (en) * | 2014-06-13 | 2014-09-17 | 西安蒜泥电子科技有限责任公司 | Initiative study-based questioning and answering method and questioning and answering system adopting initiative study-based questioning and answering method |
| CN106919655A (en) * | 2017-01-24 | 2017-07-04 | 网易(杭州)网络有限公司 | A kind of answer provides method and apparatus |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110019728A (en) | 2019-07-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107908803B (en) | Question-answer interaction response method and device, storage medium and terminal | |
| CN110019729B (en) | Intelligent question-answering method, storage medium and terminal | |
| CN110019838B (en) | Intelligent question-answering system and intelligent terminal | |
| CN110019728B (en) | Automatic interaction method, storage medium and terminal | |
| CN109145099B (en) | Question-answering method and device based on artificial intelligence | |
| US10649990B2 (en) | Linking ontologies to expand supported language | |
| KR102288249B1 (en) | Information processing method, terminal, and computer storage medium | |
| CN108595696A (en) | A kind of human-computer interaction intelligent answering method and system based on cloud platform | |
| CN109325040B (en) | FAQ question-answer library generalization method, device and equipment | |
| CN113609264B (en) | Data query method and device for power system nodes | |
| WO2018006727A1 (en) | Method and apparatus for transferring from robot customer service to human customer service | |
| CN112287085B (en) | Semantic matching method, system, equipment and storage medium | |
| CN109960722B (en) | Information processing method and device | |
| CN110795913A (en) | Text encoding method and device, storage medium and terminal | |
| KR102117287B1 (en) | Method and apparatus of dialog scenario database constructing for dialog system | |
| CN110019304B (en) | Method for expanding question-answering knowledge base, storage medium and terminal | |
| CN114840671A (en) | Dialogue generation method, model training method, device, equipment and medium | |
| CN113505198A (en) | Keyword-driven generative dialogue reply method, device and electronic device | |
| US20220198358A1 (en) | Method for generating user interest profile, electronic device and storage medium | |
| KR20190046062A (en) | Method and apparatus of dialog scenario database constructing for dialog system | |
| CN118227868B (en) | Text processing method, device, electronic equipment and storage medium | |
| CN110019730B (en) | Automatic interaction system and intelligent terminal | |
| CN118193699A (en) | Question and answer method, device, equipment and medium | |
| CN117312521A (en) | Processing method for intelligent customer service dialogue and related products | |
| US11475069B2 (en) | Corpus processing method, apparatus and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |