
CN112989895A - Man-machine interaction method and system and self-moving equipment - Google Patents


Info

Publication number
CN112989895A
Authority
CN
China
Prior art keywords
target user
topic
current
interaction
feedback
Prior art date
Legal status
Pending
Application number
CN201911307523.XA
Other languages
Chinese (zh)
Inventor
郑思远
高倩
邵长东
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Ecovacs Commercial Robotics Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN201911307523.XA
Publication of CN112989895A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Manipulator (AREA)

Abstract

The present application discloses a human-computer interaction method, a human-computer interaction system, and a self-moving device. The method includes: determining a current interactive topic; acquiring behavior feature information of a target user through at least two acquisition channels; and, according to the feedback behavior feature information of the target user obtained through each acquisition channel, respectively judging the interaction intention of the target user toward the current interactive topic reflected by each piece of feedback behavior feature information. The application determines the current interactive topic from the current situation, acquires the target user's behavior feature information through at least two acquisition channels, and judges, from the feedback behavior feature information obtained by each channel, the interaction intention of the target user toward the current interactive topic reflected by each piece of feedback behavior feature information. Combining multiple acquisition channels improves the accuracy of this judgment and thereby improves the interactive experience the service robot offers the user.

Description

Man-machine interaction method and system and self-moving equipment
Technical Field
The application relates to the field of artificial intelligence, in particular to a human-computer interaction method, a human-computer interaction system and self-moving equipment.
Background
A robot is a machine that performs work automatically. It can accept human commands, run programs arranged in advance, and act according to rules established with artificial-intelligence techniques. With the development of society, robot technology has also developed rapidly, so robots are becoming more and more common and increasingly varied in design.
One type of robot, called a service robot, is generally used in the service industry to provide dedicated service to users, so the requirements on human-friendly behavior are high. A service robot generally acquires a user's interaction information using speech recognition and natural language processing technology, and replies to that information by playing audio or by TTS (Text To Speech) technology.
Disclosure of Invention
The application provides a human-computer interaction method that aims to overcome the defects in the prior art. The application also provides a human-computer interaction system and a self-moving device.
The application provides a man-machine interaction method, which comprises the following steps:
determining a current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode;
acquiring behavior characteristic information of a target user through at least two acquisition channels;
according to the feedback behavior feature information of the target user obtained by each acquisition channel, respectively judging the interaction intention of the target user to the current interaction topic reflected by each feedback behavior feature information; wherein the behavior feature information comprises the feedback behavior feature information;
and if at least one feedback behavior characteristic information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic.
Optionally, after the behavior feature information of the target user is acquired through at least two acquisition channels, the method further includes:
and judging whether the feedback behavior characteristic information of the target user contains a clear expression of the non-interactive intention of the current interactive topic, and if so, directly judging that the target user has no interactive intention of the current interactive topic.
Optionally, in the step of obtaining the behavior feature information of the target user through at least two obtaining channels, if the feedback behavior feature information of the target user cannot be obtained within a reasonable time range, it is directly determined that the target user has no interaction intention on the current interaction topic.
Optionally, the obtaining channel for obtaining the behavior feature information of the target user at least includes: a voice channel, a video channel and a screen trigger operation channel;
correspondingly, the feedback behavior feature information of the target user includes: feedback voice information, feedback image information, and feedback screen trigger operation information.
Optionally, the respectively determining, according to the feedback behavior feature information of the target user obtained by each acquisition channel, an interaction intention of the target user to the current interaction topic, which is reflected by each feedback behavior feature information, includes:
obtaining feedback voice information of the target user according to the voice channel;
judging whether the feedback voice information is related to the current interactive topic;
if at least one feedback behavior feature information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic, including:
and if the feedback voice information is associated with the current interactive topic, judging that the target user has the interactive intention on the current interactive topic.
Optionally, the respectively determining, according to the feedback behavior feature information of the target user obtained by each acquisition channel, an interaction intention of the target user to the current interaction topic, which is reflected by each feedback behavior feature information, includes:
obtaining feedback image information of the target user according to the video channel;
judging whether the feedback image information is related to the current interactive topic;
if at least one feedback behavior feature information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic, including:
and if the feedback image information is associated with the current interactive topic, judging that the target user has the interactive intention on the current interactive topic.
Optionally, the feedback image information includes: face image information of the target user;
the determining whether the feedback image information is associated with the current interactive topic includes:
and judging whether the face image information of the target user is in a first specified acquisition region or not aiming at the current interactive topic, and if so, judging that the face image information of the target user is associated with the current interactive topic.
Optionally, the determining, for the current interactive topic, whether the face image information of the target user is in a first specified acquisition area includes:
and judging whether the front face image information of the target user is in a first specified acquisition area or not aiming at the current interactive topic.
Optionally, the feedback image information includes: human body contour image information of the target user;
the determining whether the feedback image information is associated with the current interactive topic includes:
and judging whether the human body contour image information of the target user is in a second specified acquisition region or not aiming at the current interactive topic, and if so, judging that the human body contour image information of the target user is associated with the current interactive topic.
Optionally, the respectively determining, according to the feedback behavior feature information of the target user obtained by each acquisition channel, an interaction intention of the target user to the current interaction topic, which is reflected by each feedback behavior feature information, includes:
obtaining feedback screen trigger operation information of the target user according to the screen trigger operation channel;
judging whether the feedback screen trigger operation information is related to the current interactive topic;
if at least one feedback behavior feature information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic, including:
and if the feedback screen trigger operation information is associated with the current interactive topic, judging that the target user has the interactive intention on the current interactive topic.
The present application further provides a human-computer interaction system, including:
the current interactive topic determining unit is used for determining the current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode;
the behavior characteristic information acquisition unit is used for acquiring the behavior characteristic information of the target user through at least two acquisition channels;
the interaction intention judging unit is used for respectively judging the interaction intention of the target user to the current interaction topic, which is reflected by each piece of feedback behavior characteristic information, according to the feedback behavior characteristic information of the target user, which is obtained by each acquisition channel; wherein the behavior feature information comprises the feedback behavior feature information; and if at least one feedback behavior characteristic information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic.
The present application further provides a self-moving device, comprising: a body and a human-computer interaction system arranged on the body, wherein the human-computer interaction system includes:
the current interactive topic determining unit is used for determining the current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode;
the behavior characteristic information acquisition unit is used for acquiring the behavior characteristic information of the target user through at least two acquisition channels;
the interaction intention judging unit is used for respectively judging the interaction intention of the target user to the current interaction topic, which is reflected by each piece of feedback behavior characteristic information, according to the feedback behavior characteristic information of the target user, which is obtained by each acquisition channel; wherein the behavior feature information comprises the feedback behavior feature information; and if at least one feedback behavior characteristic information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic.
Compared with the prior art, the method has the following advantages:
the application provides a man-machine interaction method, which comprises the following steps: determining a current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode; acquiring behavior characteristic information of a target user through at least two acquisition channels; according to the feedback behavior feature information of the target user obtained by each acquisition channel, respectively judging the interaction intention of the target user to the current interaction topic reflected by each feedback behavior feature information; wherein the behavior feature information comprises the feedback behavior feature information; and if at least one feedback behavior characteristic information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic. The method and the device determine the current interactive topic through the current situation, acquire the behavior characteristic information of the target user through at least two acquisition channels, and acquire the feedback behavior characteristic information of the target user according to each acquisition channel, so that the target user can judge the interaction intention of the current interactive topic, and the accuracy of judgment can be improved by combining a plurality of acquisition channels, thereby improving the interaction experience of the service type robot to the user.
Drawings
Fig. 1 is a schematic flowchart of a human-computer interaction method according to a first embodiment of the present application;
FIG. 2 is a flowchart of a method for determining that a target user has no intention to interact with a current interaction topic according to a first embodiment of the present application;
FIG. 3 is a flowchart of a method for determining interaction intention of a target user with respect to a current interaction topic according to a first embodiment of the present application;
FIG. 4 is a block diagram of a human-computer interaction system provided in a second embodiment of the present application;
fig. 5 is a schematic structural diagram of a self-moving device according to a third embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments of the present application. However, the application can be implemented in many forms other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit and scope of the application; the application is therefore not limited to the specific embodiments disclosed below.
The first embodiment of the application provides a human-computer interaction method. The method may be executed by a self-moving device and is used, during interaction between the self-moving device and a user, to judge whether the user has an interaction intention toward the current interactive topic, thereby improving the interactive experience between the self-moving device and the user. In this embodiment, the self-moving device may be a public service robot that provides services in public places; such a service robot may be deployed in shopping malls, supermarkets, banks, hospitals, tourist attractions, and the like to provide interactive services for users. It may also be a home service robot in a home environment, such as a cleaning robot. As shown in fig. 1, fig. 1 is a flowchart of a human-computer interaction method according to the first embodiment of the present application. The method includes the following steps.
Step S101, determining the current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode.
The method of this embodiment is mainly described for a scenario in which the service robot provides interactive services for users in a shopping mall. The service robot determines a current interactive topic with the user for the current context. In this embodiment, the current context refers to the combination of the people, scenes, and objects around the service robot, as sensed by the service robot. For example, when the service robot detects through its camera that a target user is pushing a cart by hand, it determines that the current interactive topic is a shopping topic, which may include goods currently discounted in the mall and their floor positions, or descriptions of how the goods are used. As another example, when the service robot sees through the camera that the target user is holding goods and walking at a rapid pace, it determines that the current interactive topic is a navigation-and-departure topic, which may include actively asking about the user's departure needs and the best path to the parking area. The service robot may also determine the current interactive topic from the current context in many other ways, for example an interactive topic about commodity shopping guidance, about a target location, or about answering questions. This embodiment is intended to cover any current interactive topic determined by the service robot to facilitate serving the target user.
After determining the current interactive topic according to the current situation, the service robot outputs the current interactive topic in at least one of the following output modes. For example, the service robot may play the current interactive topic through a voice playing module, or the service robot may output the current interactive topic through a video display or playing mode, and the like. Of course, there are many ways for the service robot to output the current interactive topic, and any way that can output the current interactive topic is the scope to be protected by the present embodiment.
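To make the context-to-topic mapping described above concrete, the following is a minimal Python sketch; the `ContextObservation` fields, the topic labels, and the mapping rules are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ContextObservation:
    """Hypothetical summary of what the robot currently perceives."""
    pushing_cart: bool = False    # e.g. detected from the camera feed
    holding_goods: bool = False
    walking_pace: str = "normal"  # "slow" | "normal" | "fast"

def determine_current_topic(ctx: ContextObservation) -> str:
    """Map the perceived situation to an interactive topic (assumed rules)."""
    if ctx.pushing_cart:
        return "shopping"              # discounted goods, floor positions, usage info
    if ctx.holding_goods and ctx.walking_pace == "fast":
        return "navigation_departure"  # ask about leaving, best path to parking
    return "general_guide"

def output_topic(topic: str) -> None:
    """Output the topic in at least one mode, e.g. voice playback and screen display."""
    print(f"[voice]  playing prompt for topic: {topic}")
    print(f"[screen] displaying panel for topic: {topic}")
```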
Step S102, acquiring the behavior characteristic information of the target user through at least two acquisition channels.
After the current interactive topic is determined according to the current situation in step S101, behavior feature information of the target user is acquired through at least two acquisition channels. In this embodiment, the behavior feature information of the target user includes feedback behavior feature information and non-feedback behavior feature information. Feedback behavior feature information refers to feedback behaviors made by the target user in response to related interactive topics output by the service robot, from which the service robot obtains the corresponding feedback behavior feature information. The related interactive topics include the current interactive topic and other interactive topics; the target user may respond to the current interactive topic or to other topics. For example, the service robot outputs a shopping-guide interactive topic as the current interactive topic, and the target user asks about the price of related goods, whom the goods are suitable for, and so on; the service robot thereby obtains the target user's inquiries about goods directed at the shopping-guide interactive topic, and such feedback behavior feature information comes from feedback behaviors made toward the current interactive topic. Conversely, the target user may instead ask how to leave the shopping mall; that feedback is directed at a different topic, so the service robot obtains no information from the target user relating to the shopping-guide interactive topic.
Non-feedback behavior feature information refers to behavior feature information corresponding to behaviors of the target user that are unrelated to interaction with the service robot. For example, when the target user simply walks past the service robot, the robot captures image information of the target user; this information is not produced in response to anything the robot output, but is collected in the normal course of sensing. As long as the target user appears in the field of view of the robot's camera, such image information is acquired regardless of whether the target user is interacting with the robot. The service robot of this embodiment determines whether the target user has an intention toward the current interactive topic mainly from the feedback behavior feature information, so this embodiment is explained mainly with reference to the feedback behavior feature information.
In this embodiment, the behavior feature information of the target user is obtained through at least two obtaining channels, where the obtaining channels may include a voice channel, a video channel, and a screen trigger operation channel. Specifically, the service robot may obtain the behavioral characteristic information of the target user in the speech aspect through a speech channel, where the speech channel may specifically be a sound sensor that employs a speech recognition technology and a natural language processing technology. The service robot can obtain behavior characteristic information of the target user in the aspects of videos, images and the like through a video channel, wherein the video channel can be a video/image acquisition device such as a camera and an image sensor. The service robot can obtain the trigger operation information of the target user through the screen trigger operation channel.
After the service robot acquires the behavior feature information of the target user, it may acquire the feedback behavior feature information of the target user. In this embodiment, the feedback behavior feature information of the target user includes: feedback voice information, feedback image information, and feedback screen trigger operation information. Since the feedback behavior feature information of the target user may be feedback behavior feature information for the current interaction topic, and may also be feedback behavior feature information for other interaction topics, the feedback voice information, the feedback image information, and the feedback screen trigger operation information may also be for the current interaction topic or other interaction topics.
In this embodiment, after the service robot acquires the behavior feature information of the target user, if the feedback behavior feature information of the target user cannot be acquired within a reasonable time range, it is directly determined that the target user has no interaction intention toward the current interactive topic. That is, as long as at least one of the target user's feedback voice information, feedback image information, and feedback screen trigger operation information cannot be obtained within a reasonable time range, it is directly determined that the target user has no intention to interact with the current interactive topic. Of course, the feedback behavior feature information of the target user may also be other kinds of information; this embodiment is not particularly limited in this respect.
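A simple polling loop illustrates how feedback might be collected from several channels within a bounded time window; the channel names, the non-blocking `poll()` interface, and the 10-second window are assumptions made for this sketch rather than details from the disclosure.

```python
import time

def collect_feedback(channels: dict, timeout_s: float = 10.0) -> dict:
    """Poll each acquisition channel until feedback arrives or the window expires.

    `channels` is assumed to map a channel name ("voice", "video", "screen")
    to an object whose non-blocking poll() returns feedback data or None.
    """
    feedback = {}
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline and len(feedback) < len(channels):
        for name, channel in channels.items():
            if name not in feedback:
                data = channel.poll()
                if data is not None:
                    feedback[name] = data
        time.sleep(0.05)
    # Per the text, feedback that never arrives within a reasonable time range is
    # treated as the target user having no intention toward the current topic.
    return feedback
```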
In this embodiment, the interaction intention of the target user may also be determined by whether the feedback behavior feature information of the target user includes a clear expression of whether there is an interaction intention on the current interaction topic. If the feedback behavior characteristic information of the target user contains the clear expression of the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic; and if the feedback behavior characteristic information of the target user contains the express that the target user has no interaction intention on the current interaction topic, judging that the target user has no interaction intention on the current interaction topic.
The following mainly explains how it is judged whether the feedback behavior feature information of the target user contains a clear expression regarding the current interactive topic, so that the target user's interaction intention toward the current interactive topic can be judged directly.
Specifically, as shown in fig. 2, fig. 2 is a flowchart of a method for determining that a target user has no intention to interact with a current interaction topic according to this embodiment. The method comprises the following steps.
Step S201, determining whether the body contour image information of the target user is in the second designated acquisition area. The second specified acquisition area is a range which can be covered by a camera of the service robot after the current interactive topic is output. After the service robot outputs the current interactive topic, the service robot captures a human body outline image of the target user in a second specified acquisition area through a camera or an image sensor, and if the target user has an intention to the current interactive topic, the target user approaches the service robot, so that the target user appears in the second specified acquisition area. And corresponding to step S201, if the human body contour image information of the target user is in the second designated acquisition area, step S202 is executed. If the human body contour image information of the target user is not in the second specified acquisition region, it is determined that the human body contour image information of the target user is not associated with the current interactive topic, and step S205 is executed, where the target user has no interactive intention with respect to the current interactive topic.
Step S202, judging whether a screen presented by the current interactive topic is exited, if so, executing step S205; if not, go to step S203. It can be understood that if the target user has an intention on the current interactive topic, the trigger operation on the screen where the current interactive topic is located is further triggered, and otherwise, the target user exits the screen presented by the current interactive topic.
Step S203, judging whether the feedback screen trigger operation information is related to other interactive topics; if yes, go to step S205; if not, go to step S204. The service robot can judge whether the operation is an interactive topic different from the current interactive topic selected by the target user according to the feedback screen trigger operation information. If the target user selects other interactive topics, the service robot can naturally judge that the target user is not interested in the current interactive topic, namely that the target user has no interactive intention on the current interactive topic.
Step S204, judging whether the feedback voice information is related to other interactive topics; if yes, go to step S205; if not, step S201 is executed. Wherein the feedback voice information is feedback of the target user aiming at the voice information sent by the service robot. For example, the service robot outputs that the current interactive topic is a shopping guide article interactive topic, and obtains the feedback voice information of the target user through the sound sensor, wherein if the feedback voice information of the target user contains information on how to leave a shopping mall and the like, the feedback voice information is related to other interactive topics, and then it is determined that the target user has no interactive intention on the current interactive topic.
Through the above steps, it can be determined that the target user has no interaction intention toward the current interactive topic. It should be noted that fig. 2 shows only one flow for making this determination; there are many possible checks, and many possible orders of checks, for determining that the target user has no interaction intention toward the current interactive topic, and this embodiment is not limited to the flow shown in fig. 2.
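Expressed as code, the Fig. 2 flow reduces each check to a boolean test; the helper names on the hypothetical `robot` object and the pacing of the loop are assumptions of this sketch, not an implementation of the disclosure.

```python
import time

def has_no_intention(robot, current_topic) -> bool:
    """Sketch of the Fig. 2 checks; returns True once 'no intention' is established."""
    while True:
        # S201: is the user's body contour inside the second specified acquisition area?
        if not robot.body_contour_in_area(area="second"):
            return True  # S205: no interaction intention
        # S202: has the user exited the screen page presenting the current topic?
        if robot.topic_screen_exited(current_topic):
            return True
        # S203: does the screen-trigger feedback select a different topic?
        if robot.screen_feedback_on_other_topic(current_topic):
            return True
        # S204: does the voice feedback concern a different topic?
        if robot.voice_feedback_on_other_topic(current_topic):
            return True
        # Otherwise loop back to S201; a real implementation would bound this loop.
        time.sleep(0.1)
```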
Step S103, according to the feedback behavior feature information of the target user obtained by each acquisition channel, respectively judging the interaction intention of the target user to the current interactive topic reflected by each feedback behavior feature information.
The above contents already state that the obtaining channel for the service robot to obtain the behavior feature information of the target user at least includes: voice channels, video channels, and screen-triggered operation channels, among others. And the feedback behavior characteristic information of the target user comprises: feedback voice information, feedback image information, feedback screen trigger operation information, and the like. In the step, the feedback behavior characteristic information correspondingly acquired by each channel is respectively judged, and the interaction intention of the target user on the current interaction topic reflected by each feedback behavior characteristic information is specifically explained.
Taking the example that the obtaining channel for obtaining the behavior feature information of the target user is a voice channel, specifically, whether the feedback voice information of the target user is related to the current interactive topic is judged according to the feedback voice information obtained by the voice channel. For example, the service robot outputs that the current interactive topic is a shopping guide article interactive topic, and obtains the feedback voice information of the target user through the sound sensor, wherein if the feedback voice information of the target user contains information such as price for inquiring related articles, article applicable objects and the like, the feedback voice information is related to the current interactive topic. If the feedback voice information of the target user contains information such as how to leave a shopping mall and a parking lot, it is indicated that the feedback voice information is not related to the current interactive topic.
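A naive keyword-overlap test illustrates the kind of relevance check described for the voice channel; the topic vocabularies and matching rule below are assumptions, and a production system would more likely use intent classification over the recognized speech.

```python
# Hypothetical topic vocabularies; real systems would use NLU intent classification.
TOPIC_KEYWORDS = {
    "shopping_guide": {"price", "discount", "floor", "suitable", "brand"},
    "navigation_departure": {"exit", "leave", "parking"},
}

def voice_relates_to_topic(utterance: str, topic: str) -> bool:
    """True if the recognized utterance overlaps the current topic's vocabulary."""
    words = set(utterance.lower().split())
    return bool(words & TOPIC_KEYWORDS.get(topic, set()))

# Mirroring the example in the text: asking about price relates to the
# shopping-guide topic, while asking how to leave the mall does not.
assert voice_relates_to_topic("what is the price of this item", "shopping_guide")
assert not voice_relates_to_topic("how do I leave the mall", "shopping_guide")
```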
Taking the example that the obtaining channel for obtaining the behavior feature information of the target user is a video channel, specifically, whether the feedback image information is related to the current interactive topic is judged according to the feedback image information of the target user obtained by the video channel. In this embodiment, the determining whether the feedback image information is related to the current interactive topic includes: and judging whether the face image information of the target user is in the first specified acquisition region or not aiming at the current interactive topic, and if so, judging that the face image information of the target user is associated with the current interactive topic. The first designated acquisition area of this embodiment may be a range that can be covered by the camera after the camera is set at a predetermined position, and the range is mainly used for capturing a face of a target user.
It should be noted that, the service robot may capture front-face image information and non-front-face image information of the target user in the first designated acquisition area by the camera of the service robot, and then determine whether the face image information of the target user is in the first designated acquisition area, including: and judging whether the front face image information of the target user is in a first specified acquisition area or not according to the current interactive topic. And when the left angle and the right angle of the face are within a threshold range, and/or when the pitch angle and the elevation angle of the face are within the threshold range, determining that the face image information of the target user is the front face image information. For example, the service robot outputs that the current interactive topic is a shopping guide article interactive topic, and obtains feedback image information of the target user through the camera, wherein if the face image information in the feedback image information of the target user is front face image information, the feedback image information is associated with the current interactive topic; if the feedback image information of the target user contains the face image information which is non-frontal face image information, the fact that the feedback image information is not related to the current interactive topic is shown.
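The front-face condition can be sketched as a threshold test on head pose combined with a region test; the 20° and 15° limits are arbitrary illustrative values, since the text only requires the left/right and pitch/elevation angles to lie within a threshold range, and the `face` and `first_area` objects are assumptions of this sketch.

```python
def is_front_face(yaw_deg: float, pitch_deg: float,
                  yaw_limit: float = 20.0, pitch_limit: float = 15.0) -> bool:
    """Treat the face as a front face when left/right and up/down angles stay within limits."""
    return abs(yaw_deg) <= yaw_limit and abs(pitch_deg) <= pitch_limit

def face_feedback_relates_to_topic(face, first_area) -> bool:
    """Sketch: a front-face image detected inside the first specified acquisition area."""
    return first_area.contains(face.position) and is_front_face(face.yaw_deg, face.pitch_deg)
```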
In this embodiment, the feedback image information further includes human body contour image information of the target user, and then, determining whether the feedback image information is related to the current interactive topic includes: and judging whether the human body contour image information of the target user is in the second specified acquisition region or not aiming at the current interactive topic, and if so, judging that the human body contour image information of the target user is associated with the current interactive topic. The second specified acquisition area is a range which can be covered by a camera of the service robot after the current interactive topic is output. The camera is mainly used for capturing the human body outline of a target user. For example, the service robot outputs that the current interactive topic is a shopping guide article interactive topic, and obtains the human body contour information of the target user in the second specified acquisition area through the camera, so that the human body contour information is related to the current interactive topic; and if the camera cannot obtain the human body contour information of the target user in the second specified acquisition area, indicating that the feedback image information is not related to the current interactive topic.
Taking the example that the obtaining channel for obtaining the behavior feature information of the target user is a screen trigger operation channel, specifically, feedback screen trigger operation information of the target user is obtained according to the screen trigger operation channel, and whether the feedback screen trigger operation information is related to the current interactive topic is judged. For example, the service robot outputs that the current interactive topic is a shopping guide article interactive topic, displays the shopping guide article interactive topic through a screen floating window area, and if the feedback screen trigger operation information of the target user contains trigger operation information aiming at the shopping guide article interactive topic displayed in the screen floating window area, the feedback screen trigger operation information is related to the current interactive topic; if the feedback screen triggering operation information of the target user contains triggering operation information of the interactive topic showing how to go to the parking lot aiming at other screen floating window areas, the feedback screen triggering operation information is not related to the current interactive topic.
Of course, in this implementation, the obtaining channel for obtaining the behavior feature information of the target user may also be a keyboard input operation channel, and correspondingly, the feedback behavior feature information of the target user includes feedback character information. The feedback character information of the target user can be obtained according to the keyboard input operation channel, and whether the feedback character information is related to the current interactive topic is judged. For example, the service robot outputs that the current interactive topic is a shopping guide article interactive topic, and obtains the feedback character information of the target user through a keyboard input mode, wherein if the feedback character information of the target user contains information such as price of related articles, article applicable objects and the like, the feedback character information is related to the current interactive topic. If the feedback character information of the target user contains information such as how to leave a shopping mall and a parking lot, the feedback character information is not related to the current interactive topic.
Step S104, if at least one feedback behavior characteristic information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic; wherein the behavior feature information comprises feedback behavior feature information.
Step S103 has respectively judged, from each piece of feedback behavior feature information, the interaction intention of the target user toward the current interactive topic. If at least one piece of feedback behavior feature information reflects that the target user has an interaction intention toward the current interactive topic, it is judged that the target user has an interaction intention toward the current interactive topic; the behavior feature information includes the feedback behavior feature information.
Specifically, taking the feedback voice information as an example, the service robot outputs that the current interactive topic is a shopping guide article interactive topic, and obtains the feedback voice information of the target user through the sound sensor, wherein if the feedback voice information of the target user includes information for inquiring about the price of the related article, the article applicable object and the like, the feedback voice information is related to the current interactive topic, and then it is determined that the target user has an interactive intention on the current interactive topic. If the feedback voice information of the target user contains information on how to leave the shopping mall and the like, the feedback voice information is not related to the current interactive topic, and then the fact that the target user does not have the interactive intention on the current interactive topic is judged.
Taking the feedback image information as an example, if the service robot outputs that the current interactive topic is the shopping guide article interactive topic, and obtains the feedback image information of the target user through the camera, wherein if the face image information in the feedback image information of the target user is the front face image information, it is indicated that the feedback image information is associated with the current interactive topic, and then it is determined that the target user has an interaction intention on the current interactive topic. If the feedback image information of the target user contains the face image information which is non-front face image information, the fact that the feedback image information is not related to the current interactive topic is shown, and then it is judged that the target user does not have the interactive intention on the current interactive topic.
Taking the feedback screen trigger operation information as an example, the service robot outputs that the current interactive topic is the shopping guide article interactive topic, displays the shopping guide article interactive topic through the screen floating window area, if the feedback screen trigger operation information of the target user contains the trigger operation information aiming at the shopping guide article interactive topic displayed in the screen floating window area, the feedback screen trigger operation information is related to the current interactive topic, and then the target user is judged to have the interactive intention to the current interactive topic. If the feedback screen trigger operation information of the target user contains trigger operation information of the interactive topic showing how to go to the parking lot aiming at other screen floating window areas, the feedback screen trigger operation information is not associated with the current interactive topic, and then the target user is judged not to have the interactive intention of the current interactive topic.
The above explanation shows that if at least one piece of feedback behavior feature information reflects that the target user has an interaction intention toward the current interactive topic, it is judged that the target user has that interaction intention; and if at least one piece of feedback behavior feature information reflects that the target user has no interaction intention toward the current interactive topic, it is judged that the target user has no such intention. The service robot can therefore directly judge the target user's intention regarding the current interaction: it can better serve the target user within the topics the user is interested in, and it can also close topics the user is not interested in and, as far as possible, push topics the user is interested in, thereby improving the user's interactive experience.
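The combination rule, judging a positive intention as soon as any one channel's feedback relates to the current topic, reduces to a single `any()` over the per-channel results; the `per_channel_relates` mapping below is an assumption of this sketch.

```python
def has_interaction_intention(per_channel_relates: dict) -> bool:
    """per_channel_relates maps a channel name to whether its feedback relates
    to the current interactive topic, e.g.
    {"voice": True, "video": False, "screen": False} -> True."""
    return any(per_channel_relates.values())
```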
It should be noted that, in this embodiment, the multiple types of feedback behavior feature information may also be determined simultaneously, as shown in fig. 3, fig. 3 is a flowchart of a method for determining that the target user has an interaction intention with respect to the current interaction topic, provided by this embodiment. The method comprises the following steps.
Step S301, judging whether the feedback screen trigger operation information is related to the current interactive topic; if yes, go to step S304; if not, go to step S302.
Step S302, judging whether the feedback voice information is related to the current interactive topic; if yes, go to step S304; if not, go to step S303.
Step S303, judging whether the front face image information of the target user is in a first appointed acquisition area; if yes, go to step S304; if not, step S301 is executed.
Through the above steps, it can be determined that the target user has an interaction intention toward the current interactive topic. It should be noted that fig. 3 shows only one flow for making this determination; there are many possible checks, and many possible orders of checks, for determining that the target user has an interaction intention toward the current interactive topic, and this embodiment is not limited to the flow shown in fig. 3.
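The Fig. 3 order of checks (screen trigger, then voice, then front face) could be coded as a small polling loop; the helper names on the hypothetical `robot` object and the bounded number of rounds are assumptions of this sketch.

```python
import time

def wait_for_intention(robot, current_topic, max_rounds: int = 100) -> bool:
    """Sketch of the Fig. 3 flow: True once any check relates to the current topic."""
    for _ in range(max_rounds):
        # S301: does the screen-trigger feedback relate to the current topic?
        if robot.screen_feedback_relates(current_topic):
            return True  # S304: interaction intention confirmed
        # S302: does the voice feedback relate to the current topic?
        if robot.voice_feedback_relates(current_topic):
            return True
        # S303: is the user's front-face image inside the first acquisition area?
        if robot.front_face_in_first_area():
            return True
        time.sleep(0.1)  # otherwise loop back to S301
    return False  # bounded here for the sketch; the text simply keeps looping
```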
A first embodiment of the present application provides a human-computer interaction method, including: determining a current interactive topic according to the current situation and outputting it in at least one output mode; acquiring behavior feature information of a target user through at least two acquisition channels; respectively judging, according to the feedback behavior feature information of the target user obtained by each acquisition channel, the interaction intention of the target user toward the current interactive topic reflected by each piece of feedback behavior feature information, wherein the behavior feature information includes the feedback behavior feature information; and if at least one piece of feedback behavior feature information reflects that the target user has an interaction intention toward the current interactive topic, judging that the target user has an interaction intention toward the current interactive topic. The first embodiment thus determines the current interactive topic from the current situation, acquires the target user's behavior feature information through at least two acquisition channels, and judges, from the feedback behavior feature information obtained by each channel, the interaction intention reflected by each piece of feedback; combining multiple acquisition channels improves the accuracy of this judgment and thereby improves the interactive experience the service robot offers the user.
The second embodiment of the present application provides a human-computer interaction system, which is substantially similar to the method embodiment and therefore is described more simply, and the details of the related technical features are given by referring to the corresponding description of the method embodiment provided above, and the following description of the system embodiment is only illustrative.
Please refer to fig. 4 to understand the embodiment, fig. 4 is a block diagram of units of the system provided in the embodiment, and as shown in fig. 4, the human-computer interaction system includes:
a current interactive topic determining unit 401, configured to determine a current interactive topic according to a current context, and output the current interactive topic in at least one output manner;
a behavior feature information obtaining unit 402, configured to obtain behavior feature information of a target user through at least two obtaining channels;
the interaction intention determining unit 403 is configured to respectively determine, according to the feedback behavior feature information of the target user obtained by each acquisition channel, an interaction intention of the target user to the current interaction topic, which is reflected by each feedback behavior feature information; wherein the behavior feature information comprises feedback behavior feature information; and if at least one feedback behavior characteristic information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic.
The human-computer interaction system provided by the second embodiment of the application determines the current interaction topic through the current situation, acquires the behavior characteristic information of the target user through at least two acquisition channels, and respectively judges the interaction intention of the target user to the current interaction topic, which is reflected by the feedback behavior characteristic information, according to the feedback behavior characteristic information of the target user acquired by each acquisition channel, and the combination of the acquisition channels can improve the judgment accuracy, thereby improving the interaction experience of the service robot to the user.
In the above embodiments, a human-computer interaction method and a human-computer interaction system are provided, and in addition, a third embodiment of the present application further provides a self-moving device, where the self-moving device may be a public service robot providing services in public places, and the service robot may be applied to places such as shopping malls, supermarkets, banks, hospitals, tourist attractions, and the like, so as to provide interaction services for users; or a home service robot in a home environment, such as a cleaning robot. Since the self-moving apparatus embodiment is basically similar to the method embodiment, it is relatively simple to describe, and please refer to the corresponding description of the method embodiment provided above for the details of the related technical features, and the following description of the self-moving apparatus embodiment is only illustrative.
The self-moving device embodiment is as follows:
please refer to fig. 5 for understanding the present embodiment, fig. 5 is a schematic diagram of a self-moving device provided in the present embodiment.
The present embodiment provides a self-moving device 500, including: a body 501 and a human-computer interaction system arranged on the body 501, wherein the human-computer interaction system includes:
a current interactive topic determining unit 401, configured to determine a current interactive topic according to a current context, and output the current interactive topic in at least one output manner;
a behavior feature information obtaining unit 402, configured to obtain behavior feature information of a target user through at least two obtaining channels;
the interaction intention determining unit 403 is configured to respectively determine, according to the feedback behavior feature information of the target user obtained by each acquisition channel, an interaction intention of the target user to the current interaction topic, which is reflected by each feedback behavior feature information; wherein the behavior feature information comprises feedback behavior feature information; and if at least one feedback behavior characteristic information reflects that the target user has the interaction intention on the current interaction topic, judging that the target user has the interaction intention on the current interaction topic.
The self-moving device achieves a better user experience than existing self-moving devices in various scenarios; specific application scenarios are described below as examples.
Application scenario 1
The self-moving equipment is a service robot which is arranged in a shopping mall, when a target user passes through the service robot, the service robot detects that the target user pushes a trolley by hand through a camera, and the service robot can determine that the current interactive topic is a shopping interactive topic and broadcasts voice information about shopping through a sound sensor. And after hearing the voice information, the target user approaches the service robot, and at the moment, the service robot judges that the target user has an intention on the current interactive topic, otherwise, the target user has no interactive intention. Then, displaying the shopping interaction topic on a screen display page of the service robot, enabling the target user to watch the shopping interaction topic displayed on the screen display page in a front view mode and triggering the triggering operation of the current screen display page, wherein at the moment, the service robot judges that the target user has an intention on the current interaction topic, and otherwise, the service robot has no interaction intention; and the service robot enters the sub-content under the shopping interactive topic directory according to the trigger operation. Or, the target user sends out an inquiry request message, the inquiry request message contains information such as the price of related articles in the aspect of shopping and the applicable objects of the articles, at this time, the service robot judges that the target user has intention to the current interactive topic, the service robot enters the sub-content in the shopping interactive topic catalog according to the inquiry request message, otherwise, the service robot has no interactive intention. By adopting the method, the service robot can accurately judge whether the target user has the intention on the current interactive topic, and the accuracy of intention judgment is improved through the combined judgment of various sensors, so that the interactive experience of the user is improved.
Application scenario 2
The self-moving equipment is a service robot, the service robot is arranged in a bank, when a target user passes through the service robot, the service robot detects the target user through a camera, and the service robot can determine that the current interactive topic is a service handling interactive topic and broadcasts voice information of the service handling through a sound sensor. And after hearing the voice information, the target user approaches the service robot, and at the moment, the service robot judges that the target user has an intention on the current interactive topic, otherwise, the target user has no interactive intention. Then, the service robot displays the business transaction interactive topic on a screen display page of the service robot, the target user watches the business transaction interactive topic displayed on the screen display page and triggers the triggering operation of the current screen display page, at the moment, the service robot judges that the target user has intention on the current interactive topic, and otherwise, the service robot has no interaction intention. And the service robot enters the sub-content under the business handling interactive topic directory according to the triggering operation. Or, the target user sends out an inquiry request message, the inquiry request message contains information such as a bank card password changing and a short message reminding service, and at the moment, the service robot judges that the target user has an intention on the current interactive topic, otherwise, the service robot has no interactive intention. And the service robot enters the sub-content under the business transaction interactive topic directory according to the inquiry request message. By adopting the method, the service robot can accurately judge whether the target user has the intention on the current interactive topic, and the accuracy of intention judgment is improved through the combined judgment of various sensors, so that the interactive experience of the user is improved.
Application scenario 3
The self-moving equipment is a service robot arranged in a hospital. When a target user passes by the service robot, the service robot detects a pained expression of the target user through its camera, determines that the current interactive topic is a department diagnosis interactive topic, and broadcasts voice information about department diagnosis through its audio output. If the target user approaches the service robot after hearing the voice information, the service robot judges that the target user has an intention toward the current interactive topic; otherwise, it judges that the target user has no interaction intention. The service robot then displays the department diagnosis interactive topic on its screen display page; if the target user views the department diagnosis interactive topic displayed on the screen display page and triggers a trigger operation on the current screen display page, the service robot judges that the target user has an intention toward the current interactive topic, and otherwise that the target user has no interaction intention, and the service robot enters the sub-content under the department diagnosis interactive topic directory according to the trigger operation. Alternatively, the target user sends out an inquiry request message containing information such as the attending physicians of a department and the queuing conditions of the department's patients; in this case the service robot judges that the target user has an intention toward the current interactive topic and enters the sub-content under the department diagnosis interactive topic directory according to the inquiry request message; otherwise, it judges that the target user has no interaction intention. With this method, the service robot can accurately judge whether the target user has an intention toward the current interactive topic, and the accuracy of the intention judgment is improved through the combined judgment of multiple sensors, thereby improving the user's interactive experience.
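The negative branch mentioned in each of the above scenarios, in which the target user gives no feedback within a reasonable time range or explicitly declines to interact, can be sketched as follows; the refusal phrases and the time limit are merely illustrative assumptions:

# Illustrative sketch only; refusal phrases and the time limit are exemplary.
import time

REFUSAL_PHRASES = ["no thanks", "not interested", "don't need"]
REASONABLE_TIME_S = 10.0

def wait_for_feedback(poll_feedback, time_limit_s=REASONABLE_TIME_S):
    # poll_feedback() returns a feedback record or None; if nothing arrives
    # within the time limit, the target user is directly judged to have no
    # interaction intention on the current interactive topic.
    deadline = time.monotonic() + time_limit_s
    while time.monotonic() < deadline:
        feedback = poll_feedback()
        if feedback is not None:
            return feedback
        time.sleep(0.2)
    return None

def is_explicit_refusal(feedback_text):
    # A clear expression of non-interaction ends the judgment immediately.
    return any(phrase in feedback_text.lower() for phrase in REFUSAL_PHRASES)

print(is_explicit_refusal("No thanks, I am just passing by"))  # -> True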
Although the present application has been described with reference to the preferred embodiments, these embodiments are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the claims that follow.

Claims (12)

1. A human-computer interaction method, comprising:
determining a current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode;
acquiring behavior feature information of a target user through at least two acquisition channels;
according to the feedback behavior feature information of the target user obtained by each acquisition channel, respectively judging the interaction intention of the target user on the current interactive topic reflected by each piece of feedback behavior feature information; wherein the behavior feature information comprises the feedback behavior feature information;
and if at least one piece of feedback behavior feature information reflects that the target user has the interaction intention on the current interactive topic, judging that the target user has the interaction intention on the current interactive topic.
2. The human-computer interaction method according to claim 1, further comprising, after the acquiring of the behavior feature information of the target user through at least two acquisition channels:
judging whether the feedback behavior feature information of the target user contains a clear expression of having no interaction intention on the current interactive topic, and if so, directly judging that the target user has no interaction intention on the current interactive topic.
3. The human-computer interaction method according to claim 1, wherein, in the step of acquiring behavior feature information of the target user through at least two acquisition channels, if the feedback behavior feature information of the target user cannot be obtained within a reasonable time range, it is directly determined that the target user has no interaction intention on the current interactive topic.
4. The human-computer interaction method according to claim 1, wherein the channels for acquiring the behavior feature information of the target user at least comprise: a voice channel, a video channel, and a screen trigger operation channel;
correspondingly, the feedback behavior feature information of the target user includes: feedback voice information, feedback image information, and feedback screen trigger operation information.
5. The human-computer interaction method according to claim 4, wherein the step of respectively judging the interaction intention of the target user on the current interactive topic, which is reflected by each piece of feedback behavior feature information, according to the feedback behavior feature information of the target user, which is obtained by each acquisition channel, comprises:
obtaining feedback voice information of the target user through the voice channel;
judging whether the feedback voice information is associated with the current interactive topic;
and the step of, if at least one piece of feedback behavior feature information reflects that the target user has the interaction intention on the current interactive topic, judging that the target user has the interaction intention on the current interactive topic includes:
if the feedback voice information is associated with the current interactive topic, judging that the target user has the interaction intention on the current interactive topic.
6. The human-computer interaction method according to claim 4, wherein the step of respectively judging the interaction intention of the target user on the current interactive topic, which is reflected by each piece of feedback behavior feature information, according to the feedback behavior feature information of the target user, which is obtained by each acquisition channel, comprises:
obtaining feedback image information of the target user through the video channel;
judging whether the feedback image information is associated with the current interactive topic;
and the step of, if at least one piece of feedback behavior feature information reflects that the target user has the interaction intention on the current interactive topic, judging that the target user has the interaction intention on the current interactive topic includes:
if the feedback image information is associated with the current interactive topic, judging that the target user has the interaction intention on the current interactive topic.
7. The human-computer interaction method of claim 6, wherein the feedback image information comprises: face image information of the target user;
the determining whether the feedback image information is associated with the current interactive topic includes:
judging, for the current interactive topic, whether the face image information of the target user is within a first specified acquisition area, and if so, judging that the face image information of the target user is associated with the current interactive topic.
8. The human-computer interaction method according to claim 7, wherein the determining whether the face image information of the target user is within the first specified acquisition area for the current interactive topic comprises:
judging, for the current interactive topic, whether front face image information of the target user is within the first specified acquisition area.
9. The human-computer interaction method of claim 6, wherein the feedback image information comprises: human body contour image information of the target user;
the determining whether the feedback image information is associated with the current interactive topic includes:
judging, for the current interactive topic, whether the human body contour image information of the target user is within a second specified acquisition area, and if so, judging that the human body contour image information of the target user is associated with the current interactive topic.
10. The human-computer interaction method according to claim 4, wherein the step of respectively judging the interaction intention of the target user on the current interactive topic, which is reflected by each piece of feedback behavior feature information, according to the feedback behavior feature information of the target user, which is obtained by each acquisition channel, comprises:
obtaining feedback screen trigger operation information of the target user through the screen trigger operation channel;
judging whether the feedback screen trigger operation information is associated with the current interactive topic;
and the step of, if at least one piece of feedback behavior feature information reflects that the target user has the interaction intention on the current interactive topic, judging that the target user has the interaction intention on the current interactive topic includes:
if the feedback screen trigger operation information is associated with the current interactive topic, judging that the target user has the interaction intention on the current interactive topic.
11. A human-computer interaction system, comprising:
the current interactive topic determining unit is used for determining the current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode;
the behavior feature information acquisition unit is used for acquiring the behavior feature information of the target user through at least two acquisition channels;
the interaction intention judging unit is used for respectively judging the interaction intention of the target user on the current interactive topic, which is reflected by each piece of feedback behavior feature information, according to the feedback behavior feature information of the target user, which is obtained by each acquisition channel; wherein the behavior feature information comprises the feedback behavior feature information; and if at least one piece of feedback behavior feature information reflects that the target user has the interaction intention on the current interactive topic, judging that the target user has the interaction intention on the current interactive topic.
12. An autonomous mobile device, comprising: a body and a human-computer interaction system arranged on the body, wherein the human-computer interaction system comprises:
the current interactive topic determining unit is used for determining the current interactive topic according to the current situation and outputting the current interactive topic in at least one output mode;
the behavior feature information acquisition unit is used for acquiring the behavior feature information of the target user through at least two acquisition channels;
the interaction intention judging unit is used for respectively judging the interaction intention of the target user on the current interactive topic, which is reflected by each piece of feedback behavior feature information, according to the feedback behavior feature information of the target user, which is obtained by each acquisition channel; wherein the behavior feature information comprises the feedback behavior feature information; and if at least one piece of feedback behavior feature information reflects that the target user has the interaction intention on the current interactive topic, judging that the target user has the interaction intention on the current interactive topic.
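Purely as an illustration of the unit structure recited in claims 11 and 12, the human-computer interaction system can be sketched as the following classes; the class and method names are exemplary assumptions and are not part of the claims:

# Illustrative sketch only; class and method names are exemplary.
class CurrentInteractiveTopicDeterminingUnit:
    def determine_and_output(self, current_situation):
        # Determine the current interactive topic from the situation and
        # output it in at least one output mode (here, a voice broadcast).
        topic = "shopping" if current_situation == "pushing_trolley" else "general_service"
        print(f"voice broadcast about: {topic}")
        return topic

class BehaviorFeatureInformationAcquisitionUnit:
    def acquire(self, channels):
        # channels maps each acquisition channel name to a read function;
        # at least two channels are used, e.g. voice and video.
        return {name: read() for name, read in channels.items()}

class InteractionIntentionJudgingUnit:
    def judge(self, current_topic, feedbacks):
        # The target user has an interaction intention if at least one piece
        # of feedback behavior feature information matches the current topic.
        return any(f is not None and f.get("topic") == current_topic
                   for f in feedbacks.values())

class HumanComputerInteractionSystem:
    def __init__(self):
        self.topic_unit = CurrentInteractiveTopicDeterminingUnit()
        self.acquisition_unit = BehaviorFeatureInformationAcquisitionUnit()
        self.judging_unit = InteractionIntentionJudgingUnit()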
CN201911307523.XA 2019-12-17 2019-12-17 Man-machine interaction method and system and self-moving equipment Pending CN112989895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911307523.XA CN112989895A (en) 2019-12-17 2019-12-17 Man-machine interaction method and system and self-moving equipment

Publications (1)

Publication Number Publication Date
CN112989895A (en) 2021-06-18

Family

ID=76343764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911307523.XA Pending CN112989895A (en) 2019-12-17 2019-12-17 Man-machine interaction method and system and self-moving equipment

Country Status (1)

Country Link
CN (1) CN112989895A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658251A (en) * 2021-08-25 2021-11-16 北京市商汤科技开发有限公司 Distance measuring method, device, electronic equipment, storage medium and system
CN115762508A (en) * 2022-10-18 2023-03-07 上海自然智动网络科技有限公司 A voice interaction method and system for an intelligent robot based on image recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446491A (en) * 2015-12-16 2016-03-30 北京光年无限科技有限公司 Intelligent robot based interactive method and apparatus
CN107278302A (en) * 2017-03-02 2017-10-20 深圳前海达闼云端智能科技有限公司 A robot interaction method and an interactive robot
WO2018006375A1 (en) * 2016-07-07 2018-01-11 深圳狗尾草智能科技有限公司 Interaction method and system for virtual robot, and robot
CN107870994A (en) * 2017-10-31 2018-04-03 北京光年无限科技有限公司 Man-machine interaction method and system for intelligent robot
CN109858391A (en) * 2019-01-11 2019-06-07 北京光年无限科技有限公司 Man-machine interaction method and device for a drawing robot

Similar Documents

Publication Publication Date Title
Kuribayashi et al. Linechaser: a smartphone-based navigation system for blind people to stand in lines
JP2019197499A (en) Program, recording medium, augmented reality presentation device, and augmented reality presentation method
US20130145272A1 (en) System and method for providing an interactive data-bearing mirror interface
US20120234631A1 (en) Simple node transportation system and control method thereof
JP2017533106A (en) Customer service robot and related systems and methods
KR20210088601A (en) State recognition method, apparatus, electronic device and recording medium
Xie et al. Iterative design and prototyping of computer vision mediated remote sighted assistance
US11074040B2 (en) Presenting location related information and implementing a task based on gaze, gesture, and voice detection
JP2003050559A (en) Autonomous mobile robot
CN113536073B (en) Robot-based question-answering service method, device, intelligent device, and storage medium
WO2018154933A1 (en) Information processing device, information processing method and program
CN112312211A (en) Prompting method and device
US20200349937A1 (en) Presenting location related information and implementing a task based on gaze, gesture, and voice detection
CN112989895A (en) Man-machine interaction method and system and self-moving equipment
CN116520982B (en) Virtual character switching method and system based on multi-mode data
Lei et al. “I Shake The Package To Check If It’s Mine” A Study of Package Fetching Practices and Challenges of Blind and Low Vision People in China
CN109129460B (en) Robot Management System
CN115079688A (en) Control device, control method, and control system
CN113961133A (en) Display control method and device for electronic equipment, electronic equipment and storage medium
CN205354115U (en) Food and beverage service system
US20240193876A1 (en) Method involving digital avatar
WO2020195613A1 (en) Information processing device, management system, information processing method, and recording medium
US20200098012A1 (en) Recommendation Method and Reality Presenting Device
JP2024108887A (en) Lost property management system and program
CN114153310A (en) Robot guest greeting method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination