Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
As used in this disclosure, "module," "device," "system," and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. In particular, for example, an element may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. Also, an application or script running on a server, or a server itself, may be an element. One or more elements may reside within a process and/or thread of execution; an element may be localized on one computer and/or distributed between two or more computers, and may operate through various computer-readable media. The elements may also communicate by way of local and/or remote processes based on a signal having one or more data packets, e.g., a data packet from one element interacting with another element in a local system, in a distributed system, and/or across a network such as the internet with other systems by way of the signal.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As shown in Fig. 1, which is a flowchart of an embodiment of a human-computer conversation method according to the present invention, the method is applied to a smart terminal device with a screen. The smart terminal device with a screen may be a smart television, a smart phone, a tablet computer, a story machine with a display screen, a smart sound box with a display screen, and the like; the present invention is not limited in this respect. The human-computer conversation method comprises the following steps:
S10: after detecting the user's operation of opening a dialog, start the full-duplex dialogue mode. Illustratively, the user's operation of opening a dialog may be speaking a wake-up word or pressing a specific function key on a remote controller.
S20: performing voice recognition on the detected current user sentence, determining the reply content corresponding to the current user sentence according to the obtained voice recognition result, and presenting the reply content to the user;
S30: if a new user sentence is detected before the reply content corresponding to the current user sentence has been determined and presented to the user, determining new reply content responsive to the current user according to both the current user sentence and the new user sentence.
The embodiment of the invention adopts a full-duplex dialogue mode, so that a new user sentence can be detected in real time while the current user sentence is being answered. If a new user sentence is detected before the reply content corresponding to the current user sentence has been determined and presented to the user, new reply content responsive to the current user is determined from both the current user sentence and the new user sentence, so that the context is taken into account and the reply content is determined more accurately and efficiently.
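As a rough illustration, the interaction between S20 and S30 can be sketched as a small state machine. This is a minimal sketch; the class, method, and trigger names are assumptions for illustration, not part of the method itself:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FullDuplexSession:
    """Minimal sketch of steps S10-S30; all names are illustrative."""
    active: bool = False
    pending_sentence: Optional[str] = None

    def open(self, trigger: str) -> None:
        # S10: a wake-up word or a remote-controller key opens the
        # full-duplex dialogue mode.
        if trigger in ("wake_word", "remote_key"):
            self.active = True

    def on_sentence(self, text: str) -> Optional[str]:
        # S20/S30: if the reply to an earlier sentence is still pending
        # when a new sentence arrives, combine both into one reply;
        # otherwise remember this sentence as the one being answered.
        if not self.active:
            return None
        if self.pending_sentence is not None:
            combined = f"{self.pending_sentence} {text}"
            self.pending_sentence = None
            return f"reply-to: {combined}"
        self.pending_sentence = text
        return None
```

In this sketch, a second sentence arriving before the first reply is delivered yields a single combined reply rather than two sequential ones.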
In some embodiments, if the time interval between the detection of the new user sentence and the detection of the current user sentence does not exceed a preset duration threshold, determining new reply content responsive to the current user from the current user sentence and the new user sentence comprises: if the new user sentence is determined to be an associated sentence of the current user sentence, determining new reply content responsive to the current user according to the current user sentence and the new user sentence.
Illustratively, the preset duration threshold is the maximum waiting time that a typical user can tolerate during a human-machine conversation; it can be determined by collecting and statistically analyzing large amounts of human-machine conversation data. For example, the preset duration threshold may be 5 seconds, although the present invention does not limit its specific value: a person skilled in the art may adjust it according to actual needs, and a threshold derived from collected conversation data may also change over time.
If the new user sentence is detected within the preset duration threshold, this indicates that the reply content corresponding to the current user sentence, whether or not it has already been presented, is still within the range the user finds acceptable; otherwise, the user no longer cares about the reply content of the current user sentence.
Illustratively, an associated sentence means that the new user sentence further restricts the current user sentence. For example, the current user sentence is "I want to listen to songs" and the new user sentence may be "Liu De Hua"; alternatively, the current user sentence is "I want to watch a movie" and the new user sentence may be "domestic"; alternatively, the current sentence is "I want to watch XX shows" and the new user sentence is "last week's (or the most recent)", and so on.
In this embodiment, when the new user sentence is determined to be an associated sentence of the current user sentence, new reply content responsive to the current user is determined jointly from the current user sentence and the new user sentence. Both sentences are thus taken into account, and a single, comprehensively determined reply is presented to the user.
By contrast, if the reply content corresponding to the current user sentence and the reply content corresponding to the new user sentence were determined separately and then presented to the user in sequence (for example, a "movie interface" presented first and then refreshed into a "domestic movie interface"), the whole interaction would be redundant and tedious; in particular, a smart terminal device with a screen (for example, a smart television) would have to refresh the page at least twice, seriously degrading the user experience.
The method of this embodiment, by contrast, presents the final reply content to the user directly.
In some embodiments, if it is determined that the new user sentence is not an associated sentence of the current user sentence, first reply content responsive to the current user is determined from the current user sentence, and second reply content responsive to the current user is determined from the new user sentence.
Although in this embodiment the new user sentence is not an associated sentence of the current user sentence, it is still detected within the preset duration threshold; that is, the reply content of the current user sentence is still within the user's acceptable range. The determined first reply content and second reply content are therefore both presented to the user, satisfying the user's needs to the greatest extent.
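The two branches discussed above (merge the sentences when they are associated, answer each separately otherwise) can be sketched as follows. The function name `determine_replies` and the injected `is_associated` predicate are hypothetical, and the semantic association check itself is left abstract because the text does not specify how it is performed:

```python
def determine_replies(current_sentence: str, new_sentence: str, is_associated) -> list:
    """Within the preset threshold: one merged reply if the new sentence
    further restricts the current one (an "associated sentence"),
    otherwise two separate replies presented in turn."""
    if is_associated(current_sentence, new_sentence):
        # Associated: determine a single reply from both sentences jointly.
        return [f"reply({current_sentence} + {new_sentence})"]
    # Not associated, but still within the acceptable waiting range:
    # answer both sentences, each with its own reply content.
    return [f"reply({current_sentence})", f"reply({new_sentence})"]
```

For example, with an oracle that marks "domestic" as a restriction of "I want to watch a movie", the merged branch returns exactly one reply, avoiding the double page refresh described above.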
In some embodiments, if the time interval between the detection of the new user sentence and the detection of the current user sentence exceeds the preset duration threshold, determining new reply content responsive to the current user from the current user sentence and the new user sentence comprises: determining the new reply content responsive to the current user from the new user sentence alone.
In this embodiment, since the interval between the detection of the new user sentence and the detection of the current user sentence exceeds the preset duration threshold, the user's maximum tolerable waiting time has been exceeded and the user no longer cares about the reply content corresponding to the current user sentence. If the first reply content and the second reply content corresponding to the current user sentence and the new user sentence were both presented at this point, the user experience would be seriously degraded.
More intuitively: the user first asks question a; owing to a delay caused by poor network conditions or other possible reasons, the machine's answer does not arrive within the preset duration threshold, so the user goes on to ask question b. If the reply content corresponding to question a were presented at this point, the user would be left with the impression of an answer that does not match the question.
The method of this embodiment can thus achieve a cross-domain jump (from the domain to which question a belongs to the domain to which question b belongs), presenting to the user the reply content of the question the user actually cares about at that moment.
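The threshold rule of this embodiment reduces to a small selection function. This is a sketch; the function name is an assumption, and the 5-second default merely echoes the example value given earlier:

```python
def reply_basis(current_sentence: str, new_sentence: str,
                interval_s: float, threshold_s: float = 5.0) -> tuple:
    """Decide which sentences drive the reply.

    Past the threshold the user is assumed to no longer care about the
    earlier question, so only the new sentence is used (the cross-domain
    jump); within it, both sentences are still considered.
    """
    if interval_s > threshold_s:
        return (new_sentence,)
    return (current_sentence, new_sentence)
```

So a question b arriving 8 seconds after question a is answered on its own, while one arriving after 3 seconds is considered together with question a.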
Fig. 2 is a flow chart of another embodiment of the man-machine interaction method of the present invention, which includes the following steps:
1. start the process;
2. invoke a conversation via the remote controller or a wake-up word;
3. start picking up sound and send the audio to the cloud for recognition;
4. perform semantic analysis on the recognition result; if the user provides other valid semantic input at this point, interrupt the current stage and re-enter the recognition stage;
5. after the dialogue turn ends, perform dialogue management, record the semantic slot values of the current dialogue, and output the dialogue result; if the user provides other valid semantic input, interrupt the current stage and re-enter the recognition stage;
6. if the conversation service includes speech synthesis, synthesize the speech and deliver it to the client for broadcast; if the user provides other valid semantic input at this point, interrupt the current stage and re-enter the recognition stage;
7. when no valid semantic input is detected from the user for a long time, or the user triggers an exit action, the process ends.
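The interrupt-and-re-enter behavior of steps 4-6 can be sketched as follows. The stage names and the `process_turn` function are illustrative assumptions, not part of the method:

```python
# Pipeline stages of one dialogue turn, in order (illustrative names).
STAGES = ("recognize", "parse", "manage_dialog", "synthesize")

def process_turn(interrupt_at: str = None) -> list:
    """Walks one utterance through the Fig. 2 pipeline.

    A new valid semantic input arriving at any stage (step 4, 5, or 6)
    aborts the remaining stages and re-enters recognition.
    """
    completed = []
    for stage in STAGES:
        if interrupt_at == stage:
            completed.append(f"interrupted@{stage}")
            completed.append("re-enter:recognize")
            return completed
        completed.append(stage)
    # No interruption: the synthesized reply is broadcast to the client.
    completed.append("broadcast")
    return completed
```

An uninterrupted turn runs all four stages and broadcasts; a turn interrupted during dialogue management abandons synthesis and returns to recognition.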
Illustratively, the method can realize one wake-up followed by continuous conversation on a smart television or an OTT set-top box, making the interaction easier and smoother and bringing the voice assistant ever closer to human-to-human interaction.
The implementation principle of the scheme is as follows:
1. after the dialog is invoked, continuously pick up sound for recognition, and exit the dialog when the user explicitly exits or does not speak (no valid semantics) for a long time;
2. record key information during the conversation service; when new voice input arrives, associate it with the dialogue context recorded by the system so that the user's intention can be inferred accurately.
The scheme has the advantages that:
1. only one wake-up is needed for continuous interaction, so the user need not speak a wake-up word or press a voice key for every sentence;
2. sound is picked up continuously, so the conversation can be interrupted at any stage and the user is spared meaningless waiting;
3. the dialogue context is recorded and is no longer confined to a single skill, enabling context association across domains and skills;
4. audio is transmitted for recognition only after the conversation is invoked, reducing system resource consumption;
5. the interaction experience is optimized, bringing human-machine interaction closer to human-human interaction.
In some embodiments, the user's operation of opening a dialog is speaking a wake-up word, and starting the full-duplex dialogue mode after detecting the operation comprises:
determining user characteristic information of the current user from the detected wake-up word speech;
querying a user characteristic information base to determine the dialogue mode applicable to the current user, where the user characteristic information base stores characteristic information of the multiple users of the current smart terminal device with a screen and records the dialogue mode applicable to each of them; and
starting the full-duplex dialogue mode when the query result shows that the dialogue mode corresponding to the user characteristic information of the current user is the full-duplex mode.
In some embodiments, the half-duplex dialog mode is initiated when the query result indicates that the dialog mode corresponding to the user characteristic information of the current user is a half-duplex mode.
This embodiment realizes adaptive selection of the initial dialogue mode after the system wakes up, and can therefore adapt to users with different habits. For example, elderly users unfamiliar with voice control of the smart television, or first-time users, usually need the system to prompt them at each step and make their selection after the prompt finishes; for them the half-duplex dialogue mode is clearly the right choice. Younger users, or users familiar with the voice control flow of the smart television, issue voice commands directly based on what they see without listening to the system's prompt tone; for this group the system needs to be configured in full-duplex dialogue mode so that the conversation can be interrupted at any time.
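The mode lookup described above amounts to a keyed query against the user characteristic information base. This is a minimal sketch; `select_dialog_mode`, the voiceprint-style key, and the half-duplex fallback for unknown users are all assumptions for illustration:

```python
def select_dialog_mode(user_id: str, feature_base: dict,
                       default: str = "half-duplex") -> str:
    """Look up the dialogue mode recorded for the user identified from
    the wake-up word speech.

    Unknown users fall back to half-duplex, on the assumption that
    first-time users benefit from step-by-step prompts.
    """
    return feature_base.get(user_id, default)
```

A dict stands in here for the user characteristic information base; a real system would key it on characteristic information extracted from the wake-up word speech.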
It should be noted that for simplicity of explanation, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will appreciate that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention. In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As shown in Fig. 3, which is a schematic block diagram of an embodiment of a man-machine conversation system 300 of the present invention applied to a smart terminal device with a screen, the system 300 includes:
a conversation mode starting module 310, configured to start a full-duplex conversation mode after detecting an operation of starting a conversation by a user;
a voice recognition module 320, configured to perform voice recognition on the detected current user sentence, so as to determine, according to an obtained voice recognition result, a reply content corresponding to the current user sentence, and present the reply content to the user;
a reply content determining module 330, configured to determine a new reply content responsive to the current user according to the current user sentence and the new user sentence if a new user sentence is detected before determining the reply content corresponding to the current user sentence and presenting to the user.
In some embodiments, if the time interval between the detection of the new user sentence and the detection of the current user sentence does not exceed a preset time threshold,
determining new reply content responsive to the current user from the current user statement and the new user statement comprises: and if the new user statement is determined to be the associated statement of the current user statement, determining new reply content responding to the current user according to the current user statement and the new user statement.
In some embodiments, if it is determined that the new user sentence is not an associated sentence of the current user sentence, first reply content responsive to the current user is determined from the current user sentence, and second reply content responsive to the current user is determined from the new user sentence.
In some embodiments, if the time interval between the detection of the new user sentence and the detection of the current user sentence exceeds a preset time threshold,
determining new reply content responsive to the current user from the current user statement and the new user statement comprises: determining new reply content responsive to the current user from the new user statement.
The man-machine conversation system of the embodiment of the invention can be used for executing the man-machine conversation method of the embodiment of the invention, and accordingly achieves the technical effect achieved by the man-machine conversation method of the embodiment of the invention, and the details are not repeated here. In the embodiment of the present invention, the relevant functional module may be implemented by a hardware processor (hardware processor).
Fig. 4 is a schematic diagram of the hardware structure of an electronic device for performing a man-machine conversation method according to another embodiment of the present invention. As shown in Fig. 4, the electronic device includes:
one or more processors 410 and a memory 420; one processor 410 is taken as an example in Fig. 4.
The apparatus for performing the man-machine conversation method may further include: an input device 430 and an output device 440.
The processor 410, the memory 420, the input device 430, and the output device 440 may be connected by a bus or other means, such as the bus connection in fig. 4.
The memory 420, which is a non-volatile computer-readable storage medium, may be used for storing non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the man-machine interaction method in the embodiments of the present invention. The processor 410 executes various functional applications of the server and data processing by executing nonvolatile software programs, instructions and modules stored in the memory 420, namely, implements the man-machine interaction method of the above-mentioned method embodiment.
The memory 420 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the human-machine conversation apparatus, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 420 may optionally include memory located remotely from processor 410, which may be connected to the human dialog device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may receive input numeric or character information and generate signals related to user settings and function control of the human-machine interaction device. The output device 440 may include a display device such as a display screen.
The one or more modules are stored in the memory 420 and, when executed by the one or more processors 410, perform the human-machine dialog method of any of the method embodiments described above.
The product can execute the method provided by the embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: characterized by mobile communication capability and primarily targeted at providing voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID, and UMPC devices, e.g., iPads.
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (e.g., iPods), smart speakers, story machines, robots, handheld game consoles, e-book readers, smart toys, and portable car navigation devices.
(4) Servers: similar in architecture to general-purpose computers, but with higher requirements on processing capability, stability, reliability, security, scalability, manageability, and the like, because they must provide highly reliable services.
(5) Other electronic devices with data interaction functions.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the above technical solutions substantially or contributing to the related art may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.