
WO2018120033A1 - Method and device for assisting a user in finding an object - Google Patents

Method and device for assisting a user in finding an object

Info

Publication number
WO2018120033A1
WO2018120033A1 (PCT/CN2016/113534)
Authority
WO
WIPO (PCT)
Prior art keywords
user
target object
virtual space
dimensional spatial
updated
Prior art date
Application number
PCT/CN2016/113534
Other languages
English (en)
Chinese (zh)
Inventor
南一冰
廉士国
李强
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201680007027.0A priority Critical patent/CN107278301B/zh
Priority to PCT/CN2016/113534 priority patent/WO2018120033A1/fr
Publication of WO2018120033A1 publication Critical patent/WO2018120033A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries

Definitions

  • the present application relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for assisting a user in searching for objects.
  • The prior art provides a vision-substitution method based on scene cognition and target recognition: it captures an image of the blind person's current scene, detects all objects appearing in that image, and provides the information about all of those objects to the blind person, who must then decide, on the basis of the obtained information, how to find the target object.
  • This method offers only limited help in the object-finding process: it cannot guide the blind person while the search is under way, so it does not help the blind person find the required object in a timely and effective manner.
  • The embodiments of the present application provide a method and an apparatus for assisting a user to find an object, mainly solving the problem that the prior art cannot effectively help a blind person find a desired object.
  • The present application provides a method for assisting a user in finding objects, including: determining a target object and an initial three-dimensional spatial position of the target object relative to the user, the target object being the object the user wants to find; generating an initial virtual space sound corresponding to the initial three-dimensional spatial position; and, during the user's search, updating the three-dimensional spatial position of the target object relative to the user in real time and generating a new virtual space sound corresponding to the updated three-dimensional spatial position.
  • The present application provides an apparatus for assisting a user in finding objects, including: a detecting unit, configured to determine a target object, the target object being the object the user wants to find; a position determining unit, configured to determine an initial three-dimensional spatial position, relative to the user, of the target object detected by the detecting unit; and a virtual space sound generating unit, configured to generate an initial virtual space sound corresponding to the initial three-dimensional spatial position determined by the position determining unit. The position determining unit is further configured to update the three-dimensional spatial position of the target object relative to the user in real time during the user's search, and the virtual space sound generating unit is further configured to generate a new virtual space sound corresponding to the updated three-dimensional spatial position determined by the position determining unit.
  • The present application provides an electronic device, including a memory, a communication interface, and a processor. The memory is configured to store computer executable code; the processor is configured to execute the computer executable code to control execution of the above method of assisting a user to find objects; and the communication interface is used for data transmission between the electronic device and an external device.
  • the present application provides a robot, including the above electronic device.
  • the present application provides a computer storage medium for storing computer software instructions, including program code designed to perform the above-described method of assisting a user to find objects.
  • the present application provides a computer program product that can be directly loaded into an internal memory of a computer and includes software code, and the software code can be loaded and executed by a computer to implement the above-described method for assisting a user to find objects.
  • With the method and apparatus for assisting a user to find objects provided above, the object the user wants to find (referred to in the present application as the target object) and its initial three-dimensional spatial position relative to the user are first determined, and an initial virtual space sound corresponding to that position is generated; then, during the user's search, the three-dimensional spatial position of the target object relative to the user's current position is updated in real time, and a new virtual space sound corresponding to the updated position is generated.
  • In the prior art, only the information about all objects in the scene is provided to the user at the initial moment, and the user must decide and search on that basis alone. By contrast, in the present application the position of the target object relative to the user is converted into a virtual space sound: the visual position information of the target object is turned into sound information, so that a blind person can judge the position of the target object from the virtual space sound. Moreover, as the blind person moves during the search, the three-dimensional position of the target object relative to the blind person keeps changing, and the virtual space sound is updated in real time on the basis of that changed position. This helps the blind person judge the position of the target object accurately and therefore find the object in time.
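The overall guidance loop described in the summary can be sketched as follows. This is only an illustrative sketch; the callables `detect_target`, `get_user_pose`, and `play_spatial_sound`, the update rate, and the arrival radius are all assumptions for the example, not details given in the application:

```python
import math
import time

def assist_user_find_object(detect_target, get_user_pose, play_spatial_sound,
                            update_rate_hz=10.0, arrival_radius=0.5):
    """Hypothetical sketch of the assisted object-finding loop.

    detect_target()      -> (x, y, z) of the target in world coordinates
    get_user_pose()      -> ((x, y, z), heading_radians) of the user
    play_spatial_sound() -> renders a virtual space sound at a relative position
    """
    target = detect_target()                      # steps 101/102: target + initial position
    while True:
        (ux, uy, uz), heading = get_user_pose()   # step 104: track the user in real time
        # Express the target position relative to the user's current position.
        dx, dy, dz = target[0] - ux, target[1] - uy, target[2] - uz
        # Rotate into the user's frame so "front" follows the user's heading.
        rel_x = dx * math.cos(-heading) - dy * math.sin(-heading)
        rel_y = dx * math.sin(-heading) + dy * math.cos(-heading)
        distance = math.sqrt(rel_x ** 2 + rel_y ** 2 + dz ** 2)
        if distance < arrival_radius:             # close enough: stop guiding
            return distance
        play_spatial_sound((rel_x, rel_y, dz))    # steps 103/105: regenerate the cue
        time.sleep(1.0 / update_rate_hz)
```

The key design point is that the cue is regenerated from the target's position relative to the user's *current* pose on every iteration, which is what makes the sound follow the user's movement.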
  • FIG. 1 is a schematic diagram of an auxiliary user object-seeking device according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for assisting a user in searching for objects according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart diagram of another method for assisting a user in searching for objects according to an embodiment of the present application
  • FIG. 4 is a schematic flowchart diagram of another method for assisting a user in searching for objects according to an embodiment of the present application
  • FIG. 5 is a schematic flowchart of still another method for assisting a user to find objects according to an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of an auxiliary user object-seeking device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of another auxiliary user object-seeking device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another auxiliary user object-seeking device according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • In the embodiments of the present application, the electronic device that assists the user in finding objects is referred to as the auxiliary user object-seeking device.
  • For example, the device may be built into a smart helmet, which the user wears to carry out the method of the embodiments of the present application.
  • The device can also be integrated into a mobile guide robot, which performs the method described in the embodiments to assist a blind user in finding objects.
  • The auxiliary user object-seeking device may also take the form of other wearable devices, which is not limited in the embodiments of the present application.
  • Any of the above devices can execute the method provided by the embodiments of the present application to assist the user in finding objects.
  • the embodiment of the present application provides a method for assisting a user in searching for objects. As shown in FIG. 2, the method includes:
  • Step 101: Determine a target object.
  • the target object is an object to be searched by a user.
  • The user may input, to the auxiliary user object-seeking device, prompt information related to the object to be found, such as the name or a description of that object.
  • The auxiliary user object-seeking device receives the prompt information input by the user and analyses it (for example, by keyword extraction) to determine what the user wants to find; it then performs target detection in a pre-stored panoramic image corresponding to the user's current scene to determine the target object.
  • The prompt information may be input by voice, or by keys: object categories may be predefined for certain key combinations, and the user selects the desired category by pressing the corresponding button.
  • The panoramic image can be obtained by an image capture device photographing the scene where the user is located, and is then sent by that device to the auxiliary user object-seeking device.
  • The image capture device may be a panoramic camera, which can detect not only objects in the user's field of view, such as objects in front of the user, but also objects outside it, such as objects behind the user.
  • The target detection (also known as object detection) technique can detect the two-dimensional position of objects of a given category contained in the panoramic image, including the two-dimensional coordinates and the width and height of each object.
  • Step 102: Determine an initial three-dimensional spatial position of the target object relative to the user.
  • The depth sensor can obtain depth information of the detected scene, i.e., the distance between each object in the scene and the sensor.
  • The depth sensor may be a stereo vision sensor, such as a binocular camera, or a laser scanning radar; for its specific implementation, reference may be made to the prior art, and details are not repeated here.
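Combining the 2D detection box from step 101 with the depth reading gives the 3D position used in step 102. A minimal sketch under a pinhole-camera assumption is shown below; the calibration parameters (`fx`, `fy`, `cx`, `cy`) are assumed known from camera calibration and are not specified in the application:

```python
def detection_to_3d(box, depth_m, fx, fy, cx, cy):
    """Back-project the center of a 2D detection box into a 3D position
    relative to the camera (and hence the user), using a pinhole model.

    box     : (x, y, w, h) detection in pixels, as produced by target detection
    depth_m : depth at the box center, read from the depth sensor (metres)
    fx, fy  : focal lengths in pixels; cx, cy : principal point in pixels
    """
    u = box[0] + box[2] / 2.0    # pixel coordinates of the box center
    v = box[1] + box[3] / 2.0
    x = (u - cx) * depth_m / fx  # right of the optical axis
    y = (v - cy) * depth_m / fy  # below the optical axis
    z = depth_m                  # straight ahead along the optical axis
    return (x, y, z)
```

For example, a box centered exactly on the principal point at 2 m depth back-projects to a point 2 m straight ahead of the camera.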
  • Step 103: Generate an initial virtual space sound corresponding to the initial three-dimensional spatial position.
  • Virtual space sound technology simulates the transfer functions between a sound source and the two ears, based on the perceptual characteristics of the human ear, to reconstruct a complex three-dimensional virtual sound field.
  • For the technology itself, reference may be made to the prior art; details are not repeated in the embodiments of the present application.
  • From the virtual space sound generated in this step, the user obtains a cue for the location of the target object.
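As a very rough stand-in for the HRTF-based rendering the paragraph refers to, the sketch below approximates only the interaural time difference (ITD, via the Woodworth model) and a crude interaural level difference for a source at a given relative position. A real system would convolve the cue sound with measured head-related transfer functions; the head radius and gain factors here are illustrative assumptions:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, a common average head radius (assumption)

def spatialize(rel_pos, freq_hz=440.0, sample_rate=44100, duration_s=0.2):
    """Render a sine cue as a stereo pair whose timing and level differences
    hint at the source direction. rel_pos = (x_right, y_front, z_up)."""
    x, y, z = rel_pos
    azimuth = math.atan2(x, y)   # 0 = straight ahead, positive = to the right
    # Woodworth ITD approximation: extra path length around the head.
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (azimuth + math.sin(azimuth))
    # Crude ILD: attenuate the ear facing away from the source.
    left_gain = 0.5 * (1.0 - 0.8 * math.sin(azimuth))
    right_gain = 0.5 * (1.0 + 0.8 * math.sin(azimuth))
    n = int(sample_rate * duration_s)
    left, right = [], []
    for i in range(n):
        t = i / sample_rate
        # Delay the ear farther from the source by |itd| seconds.
        left.append(left_gain * math.sin(2 * math.pi * freq_hz * (t - max(itd, 0.0))))
        right.append(right_gain * math.sin(2 * math.pi * freq_hz * (t + min(itd, 0.0))))
    return left, right
```

A source directly ahead yields identical channels; a source to the right yields a louder, earlier right channel, which is the directional cue the user hears.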
  • Step 104: Update the three-dimensional spatial position of the target object relative to the user in real time during the user's search.
  • Step 105: Generate a new virtual space sound corresponding to the updated three-dimensional spatial position.
  • During these steps, the auxiliary user object-seeking device keeps tracking the user's current position and keeps generating new virtual space sounds, so that the user always receives the latest cue.
  • In summary, the method for assisting a user to find objects provided by the present application first determines the object the user wants to find (referred to in the present application as the target object) and its initial three-dimensional spatial position relative to the user, and generates an initial virtual space sound corresponding to that position; then, during the user's search, the three-dimensional spatial position of the target object relative to the user's current position is updated in real time, and a new virtual space sound corresponding to the updated position is generated.
  • When determining the target object in step 101, if multiple candidate target objects are detected on the basis of the prompt information input by the user, prompt information is sent to the user, prompting the user to determine the final target object from among the detected candidates.
  • The prompt information may prompt the user to input more keywords.
  • Alternatively, information about each detected candidate target object may be provided to the user, so that the user inputs prompt information again to determine the final target object.
  • For example, the user wants to find a cup, but several cups are actually detected, such as a coffee cup and a red mug; a reminder is then issued prompting the user to input more detailed information, such as colour or function, and the final target object, e.g. the red mug, is determined from the user's response.
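The disambiguation step above can be sketched as a simple keyword match over candidate attributes. The attribute schema, the matching rule, and the example data are all assumptions for illustration:

```python
def narrow_candidates(candidates, answer):
    """Keep only the detected candidates whose attribute tags overlap the
    user's follow-up reply (e.g. a colour); return the single match if the
    answer is decisive, otherwise return the remaining list for another round."""
    keywords = set(answer.lower().split())
    remaining = [c for c in candidates if keywords & set(c["attributes"])]
    return remaining[0] if len(remaining) == 1 else remaining

# Hypothetical detections for the cup example in the text.
cups = [
    {"name": "coffee cup", "attributes": ["white", "coffee"]},
    {"name": "red mug", "attributes": ["red", "mug"]},
]
```

With these candidates, the reply "the red one" is decisive and selects the red mug, while a reply matching nothing leaves the user to be prompted again.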
  • The sound category of the virtual space sound may also be determined according to the type of the target object, so that the sound corresponds to that type.
  • For example, when the object the user wants to find is a water cup, the virtual space sound can be set as a flowing-water sound; when it is a car, as a whistle sound; when it is a mobile phone, the virtual space sound can be set correspondingly (for example, as a ringtone).
  • The virtual space sound may also be the name of the target object; for example, when the user is looking for a car, the virtual space sound can be set to repeat the pronunciation of "car" continuously.
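A minimal lookup from object category to cue sound, as the paragraphs above suggest, might look like this. The categories and file names are illustrative assumptions, as is the text-to-speech fallback encoding:

```python
# Hypothetical mapping from object category to a registered cue sound.
CUE_SOUNDS = {
    "water cup": "flowing_water.wav",
    "car": "horn.wav",
    "mobile phone": "ringtone.wav",
}

def cue_for(target_name):
    """Return the registered cue sound for a category; fall back to repeating
    the object's name via text-to-speech when no dedicated cue exists."""
    return CUE_SOUNDS.get(target_name, f"say:{target_name}")
```

The fallback implements the "repeat the object's name" variant for categories with no dedicated sound.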
  • Optionally, target tracking technology may be used to lock onto the target object once it has been determined. Therefore, after step 101 "determining the target object", as shown in FIG. 3, the method further includes:
  • Step 201: Track the target object. For the specific implementation, reference may be made to target tracking techniques in the prior art, such as tracking based on computer vision; details are not described here.
  • One available way to keep locating the target object is to perform target detection continuously in real time during the user's search, re-determining the target object each time.
  • However, this may bring the following disadvantage: during a given detection pass, the detected target object may differ from the initially detected one, or a new target object may be detected.
  • Tracking, by contrast, "locks" the target object and thus ensures its uniqueness throughout the search.
  • Accordingly, during the user's search, the real-time updating of the three-dimensional spatial position of the target object relative to the user is performed on the tracked target object.
  • A virtual space sound generally reflects the directional relationship but not the distance relationship: it can prompt the user that the target object is, say, in front, but it cannot convey how far away the object is.
  • Therefore, to better indicate the distance to the target object, as shown in FIG. 4, after step 104 "update the three-dimensional spatial position of the target object relative to the user's current position in real time during the user's search", the method further includes:
  • Step 301: Detect whether the updated three-dimensional spatial position is closer to the user than the three-dimensional spatial position before the update.
  • If so, step 302 may be performed as the implementation of step 105 "generating a new virtual space sound corresponding to the updated three-dimensional spatial position".
  • Step 302: Generate a new virtual space sound corresponding to the updated three-dimensional spatial position, the frequency of the new virtual space sound being higher than that of the virtual space sound corresponding to the position before the update.
  • Similarly, the user may be prompted that they are approaching the target object by increasing the volume of the virtual space sound, and that they are moving away from it by decreasing the volume.
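Steps 301/302 and the volume variant can be sketched as a small mapping from the change in distance to the cue's frequency and volume. The base values and scaling factors are arbitrary illustrative choices, not values from the application:

```python
def cue_parameters(distance_m, prev_distance_m,
                   base_freq=440.0, base_volume=0.5):
    """Raise the cue's frequency and volume when the user has moved closer
    since the last update (step 301), lower them when the user has moved
    away, and leave them unchanged otherwise."""
    if distance_m < prev_distance_m:        # closer than before the update
        freq, volume = base_freq * 1.25, min(1.0, base_volume * 1.2)
    elif distance_m > prev_distance_m:      # farther than before the update
        freq, volume = base_freq * 0.8, base_volume * 0.8
    else:
        freq, volume = base_freq, base_volume
    return freq, volume
```

The returned pair would parameterize the next virtual space sound generated in step 302.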
  • the method further includes:
  • Step 401: When the distance between the target object and the user is less than a preset threshold, prompt the user by voice to guide the user gradually toward the target object.
  • The preset threshold may be set according to actual needs.
  • For example, when the target object is very close to the user, e.g. located on the user's left side and within reach, the user can be told by voice that the object is on their left.
  • This prompting method does not require the comparatively complex process of generating a virtual space sound, and is simple and effective.
  • In practice, the object search may be interfered with or may fail: the target object may move out of the field of view of the image capture device, causing tracking to fail, or the target object may become occluded.
  • In such cases the user should promptly be given a corresponding reminder, for example telling the user to adjust their position, and the system automatically restarts the method from step 101. If the target object still cannot be detected after a preset time or after several adjustments, the user can be prompted that the search has ended.
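The failure handling above amounts to a bounded retry loop. In this sketch, `detect` and `prompt_user` are assumed callables standing in for re-running step 101 and for the voice reminder; the attempt limit and messages are illustrative:

```python
def reacquire_target(detect, prompt_user, max_attempts=3):
    """If tracking is lost (target left the field of view or is occluded),
    ask the user to adjust position and re-run detection from step 101;
    give up with a final prompt after max_attempts tries."""
    for attempt in range(max_attempts):
        prompt_user("Target lost, please adjust your position.")
        target = detect()            # restart from step 101
        if target is not None:
            return target            # reacquired: resume guidance
    prompt_user("The object could not be found; ending the search.")
    return None
```

Bounding the retries is what lets the system tell the user to stop searching rather than prompting indefinitely.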
  • The above method provided by the embodiments of the present application can assist any user in finding objects, for example assisting a blind user, or assisting an ordinary user who wears a helmet implementing the above method to play an object-finding game.
  • In order to implement the above functions, the auxiliary user object-seeking device includes corresponding hardware structures and/or software modules for executing each function.
  • present application can be implemented in a combination of hardware or hardware and computer software in combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
  • the embodiment of the present application may divide the function module of the auxiliary user object-seeking device or the like according to the above method example.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • In the case where each function module is divided according to its function, FIG. 6 shows a possible structure of the auxiliary user object-seeking device involved in the above embodiments. The device includes a detecting unit 501, a position determining unit 502, and a virtual space sound generating unit 503.
  • The detecting unit 501 is configured to support the device in performing step 101 in FIG. 2.
  • The position determining unit 502 is configured to support the device in performing steps 102, 104, 202, 203, 301, 302, and 401.
  • The virtual space sound generating unit 503 is configured to support the device in performing steps 103, 105, and 303.
  • As shown in FIG. 7, the auxiliary user object-seeking device involved in the above embodiments may further include a tracking unit 601, configured to support the device in performing step 201. For the content of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not repeated here.
  • FIG. 8 shows another possible structure of the auxiliary user object-seeking device involved in the above embodiments.
  • the auxiliary user object finding device includes a processing module 701 and a communication module 702.
  • the processing module 701 is configured to control and manage the actions of the auxiliary user object searching device.
  • For example, the processing module 701 is configured to support the auxiliary user object-seeking device in performing processes 101 to 105 in FIG. 2, processes 201 to 204 in FIG. 3, processes 301 to 303 in FIG. 4, process 401 in FIG. 5, and/or other processes for the techniques described herein.
  • The communication module 702 is configured to support communication between the auxiliary user object-seeking device and other network entities, such as the functional modules or network entities described above.
  • the auxiliary user object-seeking device may further include a storage module 703 for storing program codes and data of the auxiliary user-seeking device.
  • The processing module 701 can be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 702 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 703 can be a memory.
  • When the processing module 701 is a processor, the communication module 702 is a communication interface, and the storage module 703 is a memory, the auxiliary user object-seeking device involved in the embodiments of the present application may be the electronic device shown in FIG. 9.
  • the electronic device includes a processor 801, a communication interface 802, a memory 803, and a bus 804.
  • the processor 801, the communication interface 802, and the memory 803 are connected to each other through a bus 804.
  • The bus 804 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 9, but this does not mean that there is only one bus or one type of bus.
  • The steps of a method or algorithm described in connection with the present disclosure may be implemented directly in hardware or by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a method and device for assisting a user in finding an object, in the technical field of artificial intelligence, which mainly solve the problem in the existing technology that there is no method for effectively helping a blind person find a desired object. The method for assisting a user in finding an object consists of: determining a target object (101) and determining an initial three-dimensional spatial position of the target object relative to a user (102), the target object being an object to be found by the user; generating an initial virtual spatial sound corresponding to the initial three-dimensional spatial position (103); during the process in which the user searches for an object, updating a three-dimensional spatial position of the target object relative to the user in real time (104); and generating a new virtual spatial sound corresponding to the updated three-dimensional spatial position (105). The method is applied during a process in which a blind person searches for an object.
PCT/CN2016/113534 2016-12-30 2016-12-30 Procédé et dispositif destinés à aider un utilisateur à trouver un objet WO2018120033A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680007027.0A CN107278301B (zh) 2016-12-30 2016-12-30 一种辅助用户寻物的方法及装置
PCT/CN2016/113534 WO2018120033A1 (fr) 2016-12-30 2016-12-30 Procédé et dispositif destinés à aider un utilisateur à trouver un objet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113534 WO2018120033A1 (fr) 2016-12-30 2016-12-30 Procédé et dispositif destinés à aider un utilisateur à trouver un objet

Publications (1)

Publication Number Publication Date
WO2018120033A1 true WO2018120033A1 (fr) 2018-07-05

Family

ID=60052252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113534 WO2018120033A1 (fr) 2016-12-30 2016-12-30 Procédé et dispositif destinés à aider un utilisateur à trouver un objet

Country Status (2)

Country Link
CN (1) CN107278301B (fr)
WO (1) WO2018120033A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672057A (zh) * 2020-12-25 2021-04-16 维沃移动通信有限公司 拍摄方法及装置
CN116662465A (zh) * 2023-04-26 2023-08-29 奇瑞新能源汽车股份有限公司 一种辅助寻物系统、方法及汽车
CN118859959A (zh) * 2024-09-27 2024-10-29 上海傅利叶智能科技有限公司 基于人形机器人的快速寻物方法及相关装置

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN108426578B (zh) * 2017-12-29 2024-06-25 达闼科技(北京)有限公司 一种基于云端的导航方法、电子设备和可读存储介质
CN108366338B (zh) * 2018-01-31 2021-07-16 联想(北京)有限公司 用于查找电子设备的方法和装置
CN108427914B (zh) 2018-02-08 2020-08-18 阿里巴巴集团控股有限公司 入离场状态检测方法和装置
US10860165B2 (en) 2018-09-26 2020-12-08 NextVPU (Shanghai) Co., Ltd. Tracking method and apparatus for smart glasses, smart glasses and storage medium
KR102242719B1 (ko) * 2018-09-26 2021-04-23 넥스트브이피유 (상하이) 코포레이트 리미티드 스마트 안경 추적 방법과 장치, 및 스마트 안경과 저장 매체
CN110955043B (zh) * 2018-09-26 2024-06-18 上海肇观电子科技有限公司 一种智能眼镜焦点跟踪方法、装置及智能眼镜、存储介质
CN110559127A (zh) * 2019-08-27 2019-12-13 上海交通大学 基于听觉与触觉引导的智能助盲系统及方法
CN111121749B (zh) * 2019-12-26 2023-05-23 韩可 一种基于神经网络的3d音效增强现实盲人导航系统的导航方法
CN111443650B (zh) * 2020-06-15 2020-10-16 季华实验室 机器人导盲犬用的终端及其安全控制方法、电子设备
CN112546629B (zh) * 2020-12-10 2025-04-08 厦门盈趣科技股份有限公司 游戏交互方法、系统、移动终端及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101040808A (zh) * 2007-04-19 2007-09-26 上海交通大学 Method for assisting blind people in fetching objects by means of hearing
CN101040810A (zh) * 2007-04-19 2007-09-26 上海交通大学 Daily-life assistance device for blind people based on object recognition
CN204744865U (zh) * 2015-06-08 2015-11-11 深圳市中科微光医疗器械技术有限公司 Hearing-based device for conveying information about the surrounding environment to visually impaired people
CN105223551A (zh) * 2015-10-12 2016-01-06 吉林大学 Wearable sound-source localization and tracking system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7844462B2 (en) * 2007-02-01 2010-11-30 Sap Ag Spatial sound generation for screen navigation
CN101385677A (zh) * 2008-10-16 2009-03-18 上海交通大学 Blind-guiding method and device based on moving-object tracking
CN105761235A (zh) * 2014-12-19 2016-07-13 天津市巨海机电设备安装有限公司 Visual assistance method for converting visual information into auditory information

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672057A (zh) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Photographing method and apparatus
CN112672057B (zh) * 2020-12-25 2022-07-15 维沃移动通信有限公司 Photographing method and apparatus
CN116662465A (zh) * 2023-04-26 2023-08-29 奇瑞新能源汽车股份有限公司 Assisted object-finding system and method, and automobile
CN118859959A (zh) * 2024-09-27 2024-10-29 上海傅利叶智能科技有限公司 Rapid object-finding method based on a humanoid robot, and related apparatus
CN118859959B (zh) * 2024-09-27 2024-12-17 上海傅利叶智能科技有限公司 Rapid object-finding method based on a humanoid robot, and related apparatus

Also Published As

Publication number Publication date
CN107278301A (zh) 2017-10-20
CN107278301B (zh) 2020-12-08

Similar Documents

Publication Publication Date Title
WO2018120033A1 (fr) Method and device for assisting a user in finding an object
CN111340766B (zh) Target-object detection method, apparatus, device, and storage medium
EP2509070B1 (fr) Apparatus and method for determining the relevance of a voice input
CN110728717B (zh) Positioning method and apparatus, device, and storage medium
JP6348574B2 (ja) Monocular visual SLAM using general and panoramic camera movements
CN119339008B (zh) Dynamic modeling method for three-dimensional scene-space models based on multimodal data
JP6694233B2 (ja) Detection of visual inattention based on eye vergence
CN111105454B (zh) Method, apparatus, and medium for acquiring positioning information
US11741671B2 (en) Three-dimensional scene recreation using depth fusion
JP2021522564A (ja) System and method for detecting human gaze and gestures in unconstrained environments
WO2019179442A1 (fr) Method and apparatus for determining an interaction target for a smart device
WO2020000395A1 (fr) Systèmes et procédés pour auto-relocalisation solide dans une carte visuelle pré-construite
JP2022546201A (ja) Target detection method and apparatus, electronic device, and storage medium
WO2020020375A1 (fr) Voice processing method and apparatus, electronic device, and readable storage medium
WO2019144263A1 (fr) Control method and device for a mobile platform, and computer-readable storage medium
WO2023015938A1 (fr) Three-dimensional point detection method and apparatus, electronic device, and storage medium
WO2022193456A1 (fr) Target tracking method and apparatus, electronic device, and storage medium
US10583067B2 (en) Source-of-sound based navigation for a visually-impaired user
US9301722B1 (en) Guiding computational perception through a shared auditory space
JP7224592B2 (ja) Information processing device, information processing method, and program
CN106991376A (zh) Profile-face verification method and apparatus using depth information, and electronic apparatus
WO2020038111A1 (fr) Orientation detection method and device, electronic device, and storage medium
WO2020037553A1 (fr) Image processing method and device, and mobile device
CN105208283A (zh) Voice-controlled photographing method and device
CN113487537B (zh) Information processing method, apparatus, and storage medium for ultrasonic hyperechoic halo in breast cancer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16925353

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/10/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16925353

Country of ref document: EP

Kind code of ref document: A1