CN113256715B - Positioning method and device for robot
- Publication number: CN113256715B
- Application number: CN202010089211.2A
- Authority: CN (China)
- Prior art keywords: frame image, image, view mark, robot, target view
- Prior art date: 2020-02-12
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
Abstract
The embodiments of the present disclosure disclose a method and device for positioning a robot. One specific implementation of the method includes: in response to receiving a current frame image captured by the robot, acquiring, from a pre-established target map, a reference frame image matching the current frame image, where the reference frame image includes at least one view mark and identification information of the view mark, and the identification information includes edge information of the view mark and the reference pose at which the robot captured the reference frame image; determining a target view mark in the current frame image and performing edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark; and determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image. This implementation uses the edge image of a view mark in the image to determine the robot's current pose, improving the accuracy of robot positioning.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and more particularly to a method and device for positioning a robot.
Background
Visual SLAM is one of the current hot topics in SLAM research. A robot can use visual SLAM (Simultaneous Localization and Mapping) technology to build a map and localize itself concurrently. In visual SLAM, the robot can use visual loop-closure detection to compare the image of the current scene against all images in the established map and recognize places it has visited before. The robot is then localized through methods such as feature-point matching, thereby eliminating the accumulated error generated during localization and mapping.
In the related art, when the lighting is constant and the image-acquisition scene is unchanged or changes little, the map established by visual SLAM can be reused and the positioning result is accurate. However, under large lighting changes and in dynamic scenes, the accuracy of the robot's positioning results is poor.
Summary of the Invention
Embodiments of the present disclosure provide a method and device for positioning a robot.
In a first aspect, an embodiment of the present disclosure provides a robot positioning method, the method comprising: in response to receiving a current frame image captured by the robot, acquiring, from a pre-established target map, a reference frame image matching the current frame image, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and the reference pose at which the robot captured the reference frame image; determining a target view mark in the current frame image and performing edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image comprises the target view mark; and determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In some embodiments, acquiring, in response to receiving the current frame image captured by the robot, a reference frame image matching the current frame image from the pre-established target map includes: in response to receiving the current frame image captured by the robot, performing scene recognition on the current frame image; and, based on the scene recognition result, determining the reference frame image matching the current frame image from the pre-established target map.
In some embodiments, determining a target view mark in the current frame image includes: identifying at least one view mark in the current frame image; and, for each identified view mark, in response to determining that the reference frame image contains that view mark, determining that view mark as a target view mark.
In some embodiments, determining the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image includes: determining a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image; and fitting the extracted edge image of the target view mark in the current frame image to the determined theoretical edge image using nonlinear least squares, to obtain the current pose of the robot.
In some embodiments, the method further includes: determining the sum of squared Euclidean distances between the extracted edge image and the theoretical edge image of the target view mark in the current frame image as the objective function of the nonlinear least squares; and determining the minimum of the objective function, so as to fit the extracted edge image of the target view mark in the current frame image to the theoretical edge image.
In some embodiments, the target map is built through the following steps: acquiring environment images captured by the robot and the robot's poses; performing edge extraction on the environment images to obtain edge information of the view marks in the environment images, where the edge information includes pixel depth information and pixel coordinate information; processing the captured environment images with a SLAM algorithm to generate and save a real-time map; and setting identification information for the view marks in the real-time map to obtain the target map, where the identification information includes the edge information and the robot's pose.
In a second aspect, an embodiment of the present disclosure provides a robot positioning device, the device comprising: an acquisition unit configured to, in response to receiving a current frame image captured by the robot, acquire, from a pre-established target map, a reference frame image matching the current frame image, wherein the reference frame image comprises at least one view mark and identification information of the view mark, and the identification information comprises edge information of the view mark and the reference pose at which the robot captured the reference frame image; an edge extraction unit configured to determine a target view mark in the current frame image and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, wherein the reference frame image comprises the target view mark; and a determination unit configured to determine the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In some embodiments, the acquisition unit is further configured to: in response to receiving the current frame image captured by the robot, perform scene recognition on the current frame image; and, based on the scene recognition result, determine a reference frame image matching the current frame image from the pre-established target map.
In some embodiments, the edge extraction unit is further configured to identify at least one view mark in the current frame image and, for each identified view mark, in response to determining that the reference frame image contains that view mark, determine that view mark as a target view mark.
In some embodiments, the determination unit is further configured to: determine a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image; and fit the extracted edge image of the target view mark in the current frame image to the determined theoretical edge image using nonlinear least squares, to obtain the current pose of the robot.
In some embodiments, the determination unit is further configured to: determine the sum of squared Euclidean distances between the extracted edge image and the theoretical edge image of the target view mark in the current frame image as the objective function of the nonlinear least squares; and determine the minimum of the objective function, so as to fit the extracted edge image of the target view mark in the current frame image to the theoretical edge image.
In some embodiments, the target map is built through the following steps: acquiring environment images captured by the robot and the robot's poses; performing edge extraction on the environment images to obtain edge information of the view marks in the environment images, where the edge information includes pixel depth information and pixel coordinate information; processing the captured environment images with a SLAM algorithm to generate and save a real-time map; and setting identification information for the view marks in the real-time map to obtain the target map, where the identification information includes the edge information and the robot's pose.
With the robot positioning method and device provided by the embodiments of the present disclosure, in response to receiving a current frame image captured by the robot, a reference frame image matching the current frame image can be acquired from a pre-established target map; a target view mark is then determined in the current frame image, and edge extraction is performed on the view mark in the current frame image to obtain an extracted edge image of the target view mark; finally, based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image, the current pose of the robot can be determined. The robot's current pose is thus determined from the edge image of a view mark in the image, so that robot positioning is unaffected by illumination and scene changes, improving the accuracy of robot positioning.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present disclosure may be applied;
FIG. 2 is a flowchart of an embodiment of a robot positioning method according to the present disclosure;
FIG. 3 is a flowchart of another embodiment of a robot positioning method according to the present disclosure;
FIG. 4 is a schematic diagram of an application scenario of a robot positioning method according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an embodiment of a robot positioning device according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the relevant invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the accompanying drawings.
It should be noted that, provided there is no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which a robot positioning method or a robot positioning device according to embodiments of the present disclosure may be applied.
As shown in FIG. 1, the system architecture 100 may include a robot 101, a network 102, and a server 103. The network 102 serves as a medium for providing a communication link between the robot 101 and the server 103, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
The robot 101 may interact with the server 103 through the network 102 to receive or send messages. The robot 101 may be a robot used in any of various fields, such as a cleaning robot or an AGV, and may be provided with an image acquisition device through which it captures images while moving.
The server 103 may be a server that provides various services, for example a back-end server that analyzes and otherwise processes data such as the current frame image captured by the robot 101. The back-end server may also feed the processing result (for example, the determined current pose of the robot) back to the robot.
It should be noted that the robot positioning method provided by the embodiments of the present disclosure may be executed by the server 103. Accordingly, the robot positioning device may be disposed in the server 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of robots, networks, and servers in FIG. 1 are merely illustrative. There may be any number of robots, networks, and servers according to implementation needs.
It should be pointed out that the robot 101 may also localize itself directly. After obtaining the current frame image from its image acquisition device, the robot 101 may acquire, from the pre-established target map, a reference frame image matching the current frame image, determine a target view mark in the current frame image, perform edge extraction on the view mark in the current frame image to obtain an extracted edge image of the target view mark, and determine the robot's current pose based on the identification information of the target view mark in the reference frame image and the extracted edge image. In this case, the robot positioning method may be executed by the robot 101, and the robot positioning device may accordingly be disposed in the robot 101; the exemplary system architecture 100 may then omit the server 103 and the network 102.
Continuing to refer to FIG. 2, a flow 200 of an embodiment of a robot positioning method according to the present disclosure is shown. The robot positioning method includes the following steps:
Step 201: in response to receiving a current frame image captured by the robot, acquire a reference frame image matching the current frame image from a pre-established target map.
In this embodiment, the executing body of the robot positioning method (for example, the server shown in FIG. 1) may receive the current frame image sent by the robot through a wired or wireless connection. Here, the current frame image may be an image captured by the robot at its current position. After receiving the current frame image captured by the robot, the executing body may match the current frame image against the frame images in the pre-established target map in any of various ways, obtain the image that matches the current frame image, and take the obtained image as the reference frame image of the current frame image. As an example, the executing body may compute the distance between the current frame image and each frame image in the target map, and take the image in the target map with the smallest distance to the current frame image as the reference frame image.
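As a hedged illustration of the distance-based matching described above (a sketch, not part of the original disclosure), the following selects the map keyframe whose global image descriptor is closest to that of the current frame; the descriptor field and map layout are assumptions made for this example:

```python
import numpy as np

def find_reference_frame(current_desc, map_keyframes):
    """Return the map keyframe whose global descriptor has the smallest
    L2 distance to the current frame's descriptor."""
    best_frame, best_dist = None, np.inf
    # each frame is assumed to look like {"desc": ndarray, "image": ..., "marks": ...}
    for frame in map_keyframes:
        dist = np.linalg.norm(current_desc - frame["desc"])
        if dist < best_dist:
            best_frame, best_dist = frame, dist
    return best_frame
```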
The reference frame image may include at least one view mark and identification information of the view mark. Here, a view mark may be a landmark object contained in the image, such as a billboard or a traffic sign. The identification information of a view mark may include the edge information of the view mark and a reference pose. The edge information may include the image coordinates of the view mark's edge in the reference frame image and the depth of the edge. The reference pose may be the pose of the robot when it captured the reference frame image.
It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, and UWB (ultra-wideband) connections, as well as other wireless connection types now known or developed in the future.
In some optional implementations of this embodiment, the executing body may obtain the reference frame image matching the current frame image from the target map as follows: in response to receiving the current frame image captured by the robot, perform scene recognition on the current frame image; and, based on the scene recognition result, determine the reference frame image matching the current frame image from the pre-established target map. In this implementation, the executing body may perform scene recognition on the current frame image in any of various ways to determine the scene of the current frame image. As an example, the executing body may use a bag-of-words model to perform scene recognition on the current frame image. It should be understood that the executing body may also perform scene recognition by other methods; no unique limitation is made here. The executing body may then look up, in the pre-established target map, an image containing the recognized scene, and that image is the reference image. Determining the reference image of the current frame image through scene recognition can improve the accuracy of the obtained reference frame image.
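For illustration only, a minimal bag-of-words-style scene match under the implementation above might look as follows; the visual vocabulary, the local feature descriptors, and the per-frame histograms are assumed to exist and are not specified by the disclosure:

```python
import numpy as np

def bow_histogram(local_descriptors, vocabulary):
    """Quantize local feature descriptors against a visual vocabulary
    and return a normalized visual-word histogram."""
    dists = np.linalg.norm(
        local_descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / (hist.sum() + 1e-12)

def scene_score(hist_a, hist_b):
    """Cosine similarity between two word histograms; the map frame with
    the highest score would be taken as the reference frame."""
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b) + 1e-12
    return float(hist_a @ hist_b / denom)
```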
In some optional implementations of this embodiment, the robot uses SLAM technology to concurrently localize and build a map of the environment in which it operates; that map is the target map. Therefore, after the robot captures the current frame image at a previously visited place in the environment, a reference frame image can be matched in the target map. Specifically, the target map may be built as follows: obtain the environment images captured by the robot and the robot's poses; perform edge extraction on the environment images to obtain the edge information of the view marks in the environment images, where the edge information may include depth information and pixel coordinates; process the captured environment images with a SLAM algorithm to generate and save a real-time map; and finally set identification information for the view marks in the real-time map to obtain the target map, where the identification information includes the edge information and the robot's pose. It should be understood that the target map may also be a map built by other means; no unique limitation is made here.
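As a rough sketch of the map-building steps just listed (all field and function names here, including the SLAM interface, are hypothetical):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ViewMarkRecord:
    """Identification information stored for one view mark."""
    edge_pixels: np.ndarray   # (N, 2) pixel coordinates of the extracted edge
    edge_depths: np.ndarray   # (N,) depth of each edge pixel
    capture_pose: np.ndarray  # 4x4 robot pose at which the frame was captured

@dataclass
class MapKeyframe:
    image: np.ndarray
    marks: dict = field(default_factory=dict)  # mark id -> ViewMarkRecord

def build_target_map(frames_and_poses, slam, extract_mark_edges):
    """Run SLAM over the captured frames, then attach view-mark
    identification info to each keyframe of the resulting map."""
    target_map = []
    for image, pose in frames_and_poses:
        slam.process(image, pose)  # hypothetical SLAM interface
        kf = MapKeyframe(image=image)
        for mark_id, (pixels, depths) in extract_mark_edges(image).items():
            kf.marks[mark_id] = ViewMarkRecord(pixels, depths, pose)
        target_map.append(kf)
    return target_map
```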
Step 202: determine a target view mark in the current frame image, and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark.
In this embodiment, based on the current frame image received in step 201, the executing body (for example, the server shown in FIG. 1) may determine the target view mark in the current frame image in any of various ways. For example, the target view mark may be determined by designating a view mark among the view marks in the current frame image. It should be noted that the reference image also contains the target view mark. The executing body may then perform edge extraction on the target view mark in the current frame image to obtain the edge image of the target view mark in the current frame image, and take the obtained edge image as the extracted edge image. As an example, the executing body may perform edge detection on the current frame image using a Sobel operator or the like, thereby determining the boundary line between the target view mark and the background in the current frame image; the determined boundary line is the edge of the target view mark.
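One possible realization of the Sobel-based edge extraction, assuming OpenCV and a known bounding box for the view mark (the bounding box and the threshold are illustrative choices, not prescribed by the patent):

```python
import cv2
import numpy as np

def extract_mark_edges(image_bgr, mark_bbox):
    """Extract the edge pixels of a view mark inside its bounding box
    using Sobel gradients, as one possible edge detector."""
    x, y, w, h = mark_bbox
    roi = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(roi, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    edge_mask = magnitude > 0.5 * magnitude.max()  # simple threshold, for illustration
    v, u = np.nonzero(edge_mask)
    return np.stack([u + x, v + y], axis=1)  # (N, 2) pixel coordinates in the full image
```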
It should be understood that the executing body may also first perform edge extraction on all view marks in the current frame image, and then determine the edge of the target view mark among all the extracted view mark edges. No unique limitation is made here.
In some optional implementations of this embodiment, the executing body may determine the target view mark as follows: identify at least one view mark in the current frame image; and, for each identified view mark, in response to determining that the reference frame image contains that view mark, take that view mark as a target view mark. With this implementation, the executing body can obtain all view marks contained in both the current frame image and the reference image, and each of the determined view marks is a target view mark. This implementation can determine all target view marks in the current frame image (see the sketch below), and a larger number of target view marks can further improve the accuracy of the determined current pose of the robot.
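A minimal sketch of this intersection rule, assuming detected marks are keyed by some identifier (an assumption for the example; the disclosure does not fix how marks are identified):

```python
def target_view_marks(current_marks, reference_marks):
    """A view mark detected in the current frame counts as a target
    view mark only if the reference frame also contains it."""
    return {mark_id: info
            for mark_id, info in current_marks.items()
            if mark_id in reference_marks}
```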
Step 203: determine the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In this embodiment, based on the reference frame image obtained in step 201 and the extracted edge image of the target view mark in the current frame image obtained in step 202, the executing body may process, in any of various ways, the identification information of the target view mark in the reference frame image (including the edge information and the reference pose) together with the extracted edge image of the target view mark in the current frame image, thereby determining the current pose at which the robot captured the current frame image. As an example, the executing body may match the edge of the target view mark in the reference frame image against the extracted edge image of the target view mark in the current frame image, using the edge information in the identification information of the target view mark in the reference frame image to solve for the relative pose of the reference pose with respect to the current pose, and thereby determine the robot's current pose.
The method disclosed in this embodiment uses the edges of the target view mark to register the current frame image against the reference frame image and thereby determine the robot's current pose. The method does not rely on the assumptions of photometric consistency or a non-dynamically-changing scene, so the current pose obtained is more robust under changing illumination and dynamic scene changes.
With the method provided by the above embodiment of the present disclosure, in response to receiving the current frame image captured by the robot, a reference frame image matching the current frame image can be acquired from a pre-established target map; a target view mark is then determined in the current frame image, and edge extraction is performed on the view mark in the current frame image to obtain an extracted edge image of the target view mark; finally, based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image, the current pose of the robot can be determined. The robot's current pose is thus determined from the edge image of a view mark in the image, so that robot positioning is unaffected by illumination and scene changes, improving the accuracy of robot positioning.
Referring further to FIG. 3, a flow 300 of another embodiment of the robot positioning method is shown. The flow 300 of the robot positioning method includes the following steps:
Step 301: in response to receiving a current frame image captured by the robot, acquire a reference frame image matching the current frame image from a pre-established target map.
Step 302: determine a target view mark in the current frame image, and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark.
In this embodiment, the content of steps 301 to 302 is similar to the content of steps 201 to 202 in the foregoing embodiment, and is not repeated here.
Step 303: based on the identification information of the target view mark in the reference frame image, determine a theoretical edge image of the target view mark in the current frame image.
In this embodiment, based on the reference frame image and the current frame image obtained in step 301, the executing body may determine a theoretical edge image of the target view mark in the current frame image. Specifically, the executing body may use the identification information of the target view mark in the reference frame image (including the edge information of the target view mark and the reference pose) to transform and project the edge image of the target view mark in the reference frame image into the current frame image, thereby obtaining the theoretical edge image of the target view mark in the current frame image.
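Sketched under a pinhole-camera assumption with intrinsics K (the names and coordinate conventions are illustrative, not taken from the patent), the transform-and-project step might read:

```python
import numpy as np

def project_reference_edges(edge_pixels, edge_depths, K, T_ref_to_cur):
    """Back-project the reference frame's edge pixels to 3D using their
    depths, transform them by the relative pose, and project them into
    the current frame to obtain the theoretical edge image."""
    # pixels -> normalized camera rays -> 3D points in the reference camera frame
    ones = np.ones((edge_pixels.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([edge_pixels, ones]).T).T
    pts_ref = rays * edge_depths[:, None]
    # move the points into the current camera frame
    pts_cur = (T_ref_to_cur[:3, :3] @ pts_ref.T).T + T_ref_to_cur[:3, 3]
    # pinhole projection back to pixel coordinates
    proj = (K @ pts_cur.T).T
    return proj[:, :2] / proj[:, 2:3]
```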
Step 304: fit the extracted edge image of the target view mark in the current frame image to the determined theoretical edge image using nonlinear least squares, to obtain the current pose of the robot.
In this embodiment, based on the theoretical edge image, in the current frame image, of the target view mark of the reference frame image obtained in step 303, the executing body may fit the extracted edge image of the target view mark in the current frame image to the determined theoretical edge image using nonlinear least squares, thereby obtaining the robot's current pose. That is, nonlinear least squares is used to minimize the distance between the pixels of the theoretical edge image of the target view mark in the current frame image and the pixels of the extracted edge image, thereby fitting the extracted edge image of the target view mark in the current frame image to the determined theoretical edge image.
In some optional implementations of this embodiment, the executing body may take the sum of squared Euclidean distances between the extracted edge image and the theoretical edge image of the target view mark in the current frame image as the objective function of the nonlinear least squares. The executing body may then minimize the objective function, thereby fitting the extracted edge image of the target view mark in the current frame image to the theoretical edge image.
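Reusing `project_reference_edges` from the sketch above, the fitting could be prototyped with SciPy's nonlinear least-squares solver; the six-parameter pose encoding and the nearest-point residual are assumptions of this example, not the patent's prescribed formulation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fit_pose(edge_pixels_ref, edge_depths_ref, extracted_edges_cur, K):
    """Find the relative pose (rotation vector + translation) minimizing
    the sum of squared Euclidean distances between the projected
    (theoretical) edge points and the extracted edge points."""
    def residuals(x):
        T = np.eye(4)
        T[:3, :3] = Rotation.from_rotvec(x[:3]).as_matrix()
        T[:3, 3] = x[3:]
        theoretical = project_reference_edges(
            edge_pixels_ref, edge_depths_ref, K, T)
        # pair each theoretical point with its nearest extracted edge point;
        # a KD-tree would be the usual choice at scale
        d = theoretical[:, None, :] - extracted_edges_cur[None, :, :]
        return np.linalg.norm(d, axis=2).min(axis=1)

    sol = least_squares(residuals, x0=np.zeros(6))  # minimizes the sum of squared residuals
    return sol.x
```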
Continuing to refer to FIG. 4, FIG. 4 is a schematic diagram of an application scenario of the robot positioning method according to this embodiment. In the application scenario of FIG. 4, after the robot captures the current frame image, the back-end server may receive the current frame image c_n and acquire, from the pre-established target map, the reference frame image c_r matching the current frame image c_n. The back-end server may then determine the target view mark 401 in the current frame image, as shown in FIG. 4, and perform edge extraction on the determined target view mark 401 to obtain the extracted edge image 402 of the target view mark. Next, using the identification information of the view mark in the reference frame image c_r (including the edge information of the target view mark's edge image 403 and the reference pose), the back-end server may determine the theoretical edge image 404 of the target view mark in the current frame image c_n. Finally, the back-end server may fit the extracted edge image 402 of the target view mark in the current frame image c_n to the theoretical edge image 404 based on nonlinear least squares, obtaining the rotation matrix R_{c_r}^{c_n} and translation matrix P_{c_r}^{c_n} that minimize the nonlinear least-squares objective function, and thereby determine the robot's current pose. Here, the rotation matrix R_{c_r}^{c_n} and the translation matrix P_{c_r}^{c_n} are the rotation matrix and translation matrix of the reference frame image c_r relative to the current frame image c_n.
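Once the relative rotation R_{c_r}^{c_n} and translation P_{c_r}^{c_n} are recovered, the current pose follows by composition; the sketch below assumes 4x4 camera-to-world pose matrices, a convention chosen only for this example:

```python
import numpy as np

def compose_current_pose(T_world_ref, R_ref_to_cur, P_ref_to_cur):
    """Recover the current camera-to-world pose from the stored reference
    pose and the fitted transform of the reference frame w.r.t. the
    current frame."""
    T_ref_to_cur = np.eye(4)
    T_ref_to_cur[:3, :3] = R_ref_to_cur
    T_ref_to_cur[:3, 3] = P_ref_to_cur
    # chain: world <- reference <- current, so invert the fitted transform
    return T_world_ref @ np.linalg.inv(T_ref_to_cur)
```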
As can be seen from FIG. 3, compared with the embodiment corresponding to FIG. 2, the flow 300 of the robot positioning method in this embodiment estimates the robot's current pose by fitting, via nonlinear least squares, the extracted edge image of the target view mark in the current frame image to the determined theoretical edge image. This improves how well the extracted edge image is fitted to the theoretical edge image, further improving the accuracy of the obtained current pose of the robot.
Referring further to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a robot positioning device. This device embodiment corresponds to the method embodiment shown in FIG. 2, and the device may be applied to various electronic devices.
As shown in FIG. 5, the robot positioning device 500 of this embodiment includes an acquisition unit 501, an edge extraction unit 502, and a determination unit 503. The acquisition unit 501 is configured to, in response to receiving a current frame image captured by the robot, acquire, from a pre-established target map, a reference frame image matching the current frame image, where the reference frame image includes at least one view mark and identification information of the view mark, and the identification information includes edge information of the view mark and the reference pose at which the robot captured the reference frame image. The edge extraction unit 502 is configured to determine a target view mark in the current frame image and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, where the reference frame image includes the target view mark. The determination unit 503 is configured to determine the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
In some optional implementations of this embodiment, the acquisition unit 501 is further configured to: in response to receiving the current frame image captured by the robot, perform scene recognition on the current frame image; and, based on the scene recognition result, determine a reference frame image matching the current frame image from the pre-established target map.
In some optional implementations of this embodiment, the edge extraction unit 502 is further configured to identify at least one view mark in the current frame image and, for each identified view mark, in response to determining that the reference frame image contains that view mark, determine that view mark as a target view mark.
In some optional implementations of this embodiment, the determination unit 503 is further configured to: determine a theoretical edge image of the target view mark in the current frame image based on the identification information of the target view mark in the reference frame image; and fit the extracted edge image of the target view mark in the current frame image to the determined theoretical edge image using nonlinear least squares, to obtain the current pose of the robot.
In some optional implementations of this embodiment, the determination unit 503 is further configured to: take the sum of squared Euclidean distances between the extracted edge image and the theoretical edge image of the target view mark in the current frame image as the objective function of the nonlinear least squares; and determine the minimum of the objective function, so as to fit the extracted edge image of the target view mark in the current frame image to the theoretical edge image.
In some optional implementations of this embodiment, the target map is built through the following steps: acquiring environment images captured by the robot and the robot's poses; performing edge extraction on the environment images to obtain edge information of the view marks in the environment images, where the edge information includes pixel depth information and pixel coordinate information; processing the captured environment images with a SLAM algorithm to generate and save a real-time map; and setting identification information for the view marks in the real-time map to obtain the target map, where the identification information includes the edge information and the robot's pose.
The units described in the device 500 correspond to the respective steps in the method described with reference to FIG. 2. Therefore, the operations and features described above for the method also apply to the device 500 and the units contained therein, and are not repeated here.

Referring now to FIG. 6, a schematic structural diagram of an electronic device 600 (for example, the server or robot in FIG. 1) suitable for implementing embodiments of the present disclosure is shown. The server shown in FIG. 6 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing device (for example, a central processing unit, a graphics processing unit, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random-access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage devices 608 including, for example, magnetic tape and hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 6 may represent one device or, as needed, multiple devices.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above functions defined in the methods of the embodiments of the present disclosure are executed. It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device. In the embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), or any suitable combination of the foregoing.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a current frame image captured by the robot, acquire, from a pre-established target map, a reference frame image matching the current frame image, where the reference frame image includes at least one view mark and identification information of the view mark, and the identification information includes edge information of the view mark and the reference pose at which the robot captured the reference frame image; determine a target view mark in the current frame image and perform edge extraction on the determined target view mark to obtain an extracted edge image of the target view mark, where the reference frame image includes the target view mark; and determine the current pose of the robot based on the identification information of the target view mark in the reference frame image and the extracted edge image of the target view mark in the current frame image.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be disposed in a processor; for example, a processor may be described as including an acquisition unit, an edge extraction unit, and a determination unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquisition unit may also be described as "a unit that, in response to receiving a current frame image captured by the robot, acquires a reference frame image matching the current frame image from a pre-established target map".
The above description is merely of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010089211.2A CN113256715B (en) | 2020-02-12 | 2020-02-12 | Positioning method and device for robot |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113256715A CN113256715A (en) | 2021-08-13 |
| CN113256715B (en) | 2024-04-05 |
Family
ID=77220181
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010089211.2A Active CN113256715B (en) | 2020-02-12 | 2020-02-12 | Positioning method and device for robot |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113256715B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116147585A (en) * | 2023-01-06 | 2023-05-23 | 北京百度网讯科技有限公司 | Method, device, equipment and medium for obtaining real trajectory of robot |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104200496A (en) * | 2014-09-01 | 2014-12-10 | 西北工业大学 | High-precision detecting and locating method for rectangular identifiers on basis of least square vertical fitting of adjacent sides |
| GB201801399D0 (en) * | 2017-12-13 | 2018-03-14 | Xihua University | Positioning method and apparatus |
| CN108283021A (en) * | 2015-10-02 | 2018-07-13 | X开发有限责任公司 | Locating a robot in an environment using detected edges of a camera image from a camera of the robot and detected edges derived from a three-dimensional model of the environment |
| CN108665508A (en) * | 2018-04-26 | 2018-10-16 | 腾讯科技(深圳)有限公司 | A kind of positioning and map constructing method, device and storage medium immediately |
| CN108717710A (en) * | 2018-05-18 | 2018-10-30 | 京东方科技集团股份有限公司 | Localization method, apparatus and system under indoor environment |
| CN109410281A (en) * | 2018-11-05 | 2019-03-01 | 珠海格力电器股份有限公司 | Positioning control method and device, storage medium and logistics system |
| WO2019140745A1 (en) * | 2018-01-16 | 2019-07-25 | 广东省智能制造研究所 | Robot positioning method and device |
| WO2019223463A1 (en) * | 2018-05-22 | 2019-11-28 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, storage medium, and computer device |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8147503B2 (en) * | 2007-09-30 | 2012-04-03 | Intuitive Surgical Operations Inc. | Methods of locating and tracking robotic instruments in robotic surgical systems |
| US9098905B2 (en) * | 2010-03-12 | 2015-08-04 | Google Inc. | System and method for determining position of a device |
| JP6561511B2 (en) * | 2014-03-20 | 2019-08-21 | 株式会社リコー | Parallax value deriving device, moving body, robot, parallax value production deriving method, parallax value producing method and program |
- 2020-02-12: CN application CN202010089211.2A granted as patent CN113256715B (en), legal status: Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN113256715A (en) | 2021-08-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN115655262B (en) | Deep learning perception-based multi-level semantic map construction method and device | |
| US10360247B2 (en) | System and method for telecom inventory management | |
| CN109584276B (en) | Key point detection method, device, equipment and readable medium | |
| CN111292420A (en) | Method and apparatus for building a map | |
| CN110135323A (en) | Image annotation method, device, system and storage medium | |
| CN114612616B (en) | Mapping method, device, electronic device and storage medium | |
| CN112258647B (en) | Map reconstruction method and device, computer readable medium and electronic equipment | |
| CN118097157B (en) | Image segmentation method and system based on fuzzy clustering algorithm | |
| KR20210058768A (en) | Method and device for labeling objects | |
| CN115222920B (en) | Image-based digital twin space-time knowledge graph construction method and device | |
| CN110619666A (en) | Method and device for calibrating camera | |
| CN117896626B (en) | Method, device, equipment and storage medium for detecting motion trajectory with multiple cameras | |
| CN115900713A (en) | Assistant voice navigation method, device, electronic device and storage medium | |
| CN109635870A (en) | Data processing method and device | |
| WO2019198634A1 (en) | Learning data generation device, variable region detection method, and computer program | |
| CN115082515A (en) | Target tracking method, device, equipment and medium | |
| CN114140771A (en) | A method and system for automatic labeling of image depth datasets | |
| WO2023237065A1 (en) | Loop closure detection method and apparatus, and electronic device and medium | |
| CN113256715B (en) | Positioning method and device for robot | |
| CN112270242A (en) | Track display method and device, readable medium and electronic equipment | |
| CN113793349B (en) | Target detection method and device, computer readable storage medium and electronic equipment | |
| CN113168706A (en) | Object position determination in frames of video stream | |
| CN111445499B (en) | Method and device for identifying target information | |
| CN118628567A (en) | Robot positioning method, device and storage medium based on digital twin | |
| CN112880675B (en) | Pose smoothing method and device for visual positioning, terminal and mobile robot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||