CN112085790A - Point-line combined multi-camera visual SLAM method, equipment and storage medium - Google Patents
- Publication number: CN112085790A
- Application number: CN202010819166.1A
- Authority: CN (China)
- Prior art keywords: line, features, point, feature, camera
- Legal status: Pending
Classifications
- G06T7/73 — Image analysis: determining position or orientation of objects or cameras using feature-based methods
- G06T17/05 — Three-dimensional [3D] modelling: geographic models
- G06T5/80 — Image enhancement or restoration: geometric correction
- G06V10/443 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections, by matching or filtering
- G06T2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
Abstract
The present invention provides a point-line combined multi-camera visual SLAM method, device, and storage medium. Multi-angle image data of a target scene are collected; point features and line features in the multi-angle image data are extracted and matched, and the positions of the point features and line features in three-dimensional space are obtained; a preliminary camera pose is estimated for each frame of the multi-angle image data; a graph structure is constructed from the extracted and matched point features and line features together with the preliminary camera pose estimates; and a three-dimensional map is determined according to the extracted point features, line features, and the graph structure. This embodiment solves jointly with point and line features; because line features carry more information, tracking stability and accuracy are improved, and the sparse feature map built with line features gives a clearer, more intuitive abstraction of the scene.
Description
Technical Field
The present invention relates to the technical field of computer vision, and in particular to a point-line combined multi-camera visual SLAM method, device, and storage medium.
Background
SLAM algorithms are widely used for autonomous robot navigation and environment recognition in augmented reality, aerospace, underwater, and other scenarios, and have important theoretical and practical value. Among them, visual SLAM based on sequential image data has become a research hotspot owing to its low cost and portability. How to extract representative, trackable features from image data, how to describe them with a suitable algebraic representation, and how to make full use of this information to recover the camera pose and scene structure are the central concerns of SLAM algorithms.
The feature-based method is currently the mainstream approach in visual SLAM: abstract geometric features are extracted from the images, data association is used to solve the relative pose between different frames, the camera trajectory is recovered, and a sparse feature map is constructed. Common feature-based visual SLAM has the following shortcomings: (1) the cameras used in ordinary visual SLAM systems have a narrow field of view, so the information captured in a single frame is limited; (2) the point features used by most algorithms are low-dimensional and carry less scene information than higher-dimensional features, so they are easily lost when image quality is poor or the camera moves too fast, leading to unstable tracking results and maps that are not intuitive and poorly reflect the real scene. Feature-based visual SLAM in the prior art therefore cannot meet the requirements of stable, high-precision feature tracking.
Therefore, the prior art awaits further improvement.
Summary of the Invention
In view of the deficiencies of the prior art, the purpose of the present invention is to provide a point-line combined multi-camera visual SLAM method, device, and storage medium, so as to overcome the insufficient stability and accuracy of feature-based tracking in the prior art.
The technical scheme of the present invention is as follows:
In a first aspect, this embodiment discloses a point-line combined multi-camera visual SLAM method, comprising:
collecting multi-angle image data of a target scene;
extracting and matching point features and line features in the multi-angle image data, and obtaining the position information of the point features and line features in three-dimensional space;
performing a preliminary camera pose estimate for each frame of the multi-angle image data, and constructing a graph structure from the extracted and matched point features and line features together with the preliminary camera pose estimates;
determining a three-dimensional map according to the extracted point features, line features, and the graph structure.
Optionally, before the step of extracting the point features and line features in the multi-angle image data, the method further includes:
performing distortion correction on the acquired multi-angle image data to obtain distortion-corrected intermediate image data;
masking the intermediate image data with a preset camera mask to obtain preprocessed multi-angle image data.
Optionally, the step of extracting the point features and line features in the multi-angle image data includes:
extracting corner features in the multi-angle image data with a feature-point extraction algorithm, describing the corner features with the rBRIEF descriptor, and performing feature matching based on the corner descriptions to obtain the extracted point features;
extracting line-segment features in the images with a line-feature extraction algorithm, describing the extracted segments with the LBD descriptor, and performing line-feature matching based on the segment descriptions to obtain the extracted line features.
Optionally, the step of extracting line-segment features with a line-feature extraction algorithm, describing them with the LBD descriptor, and matching them based on the segment descriptions includes:
extracting structural line segments from the multi-angle image data with a line-feature extraction algorithm;
selecting a preset number of structural line segments, describing the image features of the selected segments with descriptors, and matching the structural line segments according to the image feature descriptions;
describing the straight lines obtained from the matched structural line segments with a four-dimensional orthonormal representation of their Plücker coordinates, obtaining the extracted line features.
Optionally, the step of matching the structural line segments according to the image feature descriptions includes:
matching structural line segments separately in multiple channels of the image, and marking segments that satisfy a preset matching condition in at least one channel as valid segments, obtaining a first valid-segment set;
re-matching each segment marked as a valid segment with the k-nearest-neighbor method, obtaining a second valid-segment set;
performing bidirectional matching on the valid segments in the second valid-segment set, obtaining the best matching pairs.
Optionally, the step of performing a preliminary camera pose estimate for each frame of the multi-angle image data and constructing the graph structure includes:
determining the relative pose of the initial frames with epipolar geometry;
estimating the pose of each frame to be solved relative to known frames through camera motion-state estimation and the EPnP method, obtaining the relative pose of each frame to be solved;
constructing a graph structure with the point features, line features, and camera positions as vertices, based on the projection relationships of the point features and line features.
Optionally, the step of constructing the graph structure with the point features, line features, and camera positions as vertices based on their projection relationships includes:
adding the line features and point features associated with the camera pose to be solved to the graph structure as vertices;
constructing multi-vertex edges in the G2O open-source library according to the visibility of the point features and line features in the frames to be solved, where each multi-vertex edge encodes the corresponding relative pose relationship between a point feature, a line feature, and a camera;
according to those relationships, computing in each image frame the Jacobian of the point-feature reprojection error with respect to the camera pose and the Jacobian of the line-feature reprojection error with respect to the camera pose;
iteratively solving the camera positions and feature coordinates with an optimization algorithm based on the computed reprojection errors.
Optionally, the step of determining the three-dimensional map according to the extracted point features, line features, and the graph structure further includes:
screening adjacent frames in the multi-angle image data based on the co-visibility relationship and the similarity between adjacent frames, obtaining a loop-closure candidate group;
detecting the matching regions corresponding to the loop-closure candidate group with bag-of-words loop detection, correcting the co-visibility of the current frame according to the detection result, updating the coordinate values of the feature points in the matching regions, and correcting the position of the loop in the world coordinate system;
updating the common view and connection relationships between past frames in the graph structure according to the detected loop;
combining the updated graph structure with the pre-saved endpoint coordinates of the line features on the multi-angle images to clip the extracted straight lines, and determining the three-dimensional map from the clipped segments.
In a second aspect, this embodiment discloses an information processing device, comprising a processor and a storage medium communicatively connected to the processor, the storage medium being adapted to store a plurality of instructions; the processor is adapted to call the instructions in the storage medium to execute the steps implementing the point-line combined multi-camera visual SLAM method described above.
In a third aspect, this embodiment discloses a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the point-line combined multi-camera visual SLAM method described above.
Beneficial effects: the present invention provides a point-line combined multi-camera visual SLAM method, system, and device. Multi-angle image data of a target scene are collected; point features and line features in the data are extracted and matched, and their positions in three-dimensional space are obtained; a preliminary camera pose is estimated for each frame; a graph structure is constructed from the extracted and matched point and line features together with the preliminary pose estimates; and a three-dimensional map is determined according to the extracted features and the graph structure. This embodiment solves jointly with point and line features; because line features carry more information, tracking stability and accuracy are improved, and the sparse feature map built with line features describes the scene more clearly and intuitively.
Brief Description of the Drawings
Fig. 1 is a flowchart of the steps of the point-line combined multi-camera visual SLAM method of the present invention;
Fig. 2 is a schematic diagram of collecting image data with a camera mask in an embodiment of the present invention;
Fig. 3(a) is a schematic diagram of the geometric meaning of parameter θ1 in the orthonormal line representation provided by an embodiment of the present invention;
Fig. 3(b) is a schematic diagram of the geometric meaning of parameter θ2 in the orthonormal line representation provided by an embodiment of the present invention;
Fig. 3(c) is a schematic diagram of the geometric meaning of parameter θ3 in the orthonormal line representation provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the sparse feature map obtained for a local scene in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the information processing device in an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the", and "said" used herein may also include the plural forms. It should further be understood that the word "comprising" used in this specification indicates the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present; in addition, "connected" or "coupled" as used herein may include wireless connection or coupling. The term "and/or" as used herein includes all or any unit of, and all combinations of, one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as herein, are not to be interpreted in an idealized or overly formal sense.
The inventors found that the cameras used in prior-art SLAM algorithms have a narrow field of view, so the information obtainable from a single frame is limited; and because the camera trajectory is recovered from point features alone, the tracking results are unstable and the constructed map does not reflect the real scene.
To overcome the above problems of the prior art, this embodiment discloses a point-line combined multi-camera visual SLAM method, device, and storage medium: point features and line features are extracted from multi-angle image data and their position descriptions obtained; a matching algorithm establishes the correspondence of the point and line features between frames; models are then used to mathematically describe the point and line features in the two-dimensional image plane and the three-dimensional world and to establish the projection relationships; and a graph structure is constructed from the point features, line features, and the preliminary camera pose estimates, realizing visualization of the camera trajectory and the three-dimensional feature map.
The method, system, and device provided by the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments.
Exemplary Method
In a first aspect, this embodiment discloses a point-line combined multi-camera visual SLAM method which, as shown in Fig. 1, comprises:
Step S1: collect multi-angle image data of the target scene.
First, the multi-angle image data in the target scene area are collected. To obtain panoramic data containing more information, a Ladybug 5+ multi-camera panoramic device is used in this step to collect the multi-angle image data in the target scene area.
The six cameras on the Ladybug panoramic device capture 360° scene data simultaneously. Compared with a system that collects data with a single ordinary camera, collecting data with multiple wide-angle cameras acquires more scene information in the same amount of time, and the trajectory solution integrates information from all viewing angles, improving the stability of the system.
To facilitate processing of the acquired multi-angle image data, the collected data are also preprocessed. The preprocessing steps include:
Step S01: perform distortion correction on the acquired multi-angle image data to obtain distortion-corrected intermediate image data.
When the device used in this step is the Ladybug 5+ multi-camera panoramic device, the Ladybug SDK toolkit can be used directly to correct the image distortion.
Step S02: mask the intermediate image data with a preset camera mask to obtain the preprocessed multi-angle image data.
During data collection, the outlines of the collector's head and of the other sensors mounted on the backpack appear in the images. To avoid extracting spurious features at the image/border transition or on these captured interfering objects, which would degrade subsequent feature tracking and trajectory solving, the images taken by each lens are masked separately after distortion correction with the Ladybug SDK tool; the masks corresponding to the corrected images are shown in Fig. 2. Because each lens occupies a different position on the data-collection backpack, the occluded part of the image differs per lens, so a separate fixed mask is made for each lens, and each lens's mask applies to all images captured by that lens.
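For illustration, the following is a minimal preprocessing sketch in Python with OpenCV; the intrinsics, distortion coefficients, mask contents, and file name are assumed placeholders, since the actual rectification in this embodiment is performed by the proprietary Ladybug SDK:

```python
import cv2
import numpy as np

def preprocess(img, K, dist_coeffs, mask):
    """Undistort one camera's frame, then black out the occluded mask region."""
    undistorted = cv2.undistort(img, K, dist_coeffs)
    return cv2.bitwise_and(undistorted, undistorted, mask=mask)

# Assumed per-lens calibration and mask (real values come from the Ladybug SDK).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
mask = np.full((960, 1280), 255, dtype=np.uint8)
mask[800:, :] = 0  # e.g., operator/backpack occlusion at the bottom of the frame
frame = preprocess(cv2.imread("cam0_000001.png"), K, dist, mask)
```

Each lens keeps its own fixed mask, so in practice one mask array per camera index would be loaded once and reused for every frame of that camera.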
Further, this step also includes: obtaining the intrinsic and extrinsic parameters of the Ladybug device with the Ladybug SDK toolkit, including the focal lengths of the six cameras carried by the Ladybug, the image center coordinates, and the rotation and translation parameters relative to the device center.
Step S2: extract and match the point features and line features in the multi-angle image data, and obtain the position information of the point features and line features in three-dimensional space.
Point features and line features are extracted from the multi-angle image data collected by the cameras, and the positions of the extracted features in three-dimensional space are obtained.
Specifically, this step includes: extracting and matching point and line features from the obtained sequential image data, and geometrically describing their positions in the images and in space with appropriate methods.
Further, extracting and matching the point and line features includes: extracting corner features in the multi-angle image data with a feature-point extraction algorithm, describing them with the rBRIEF descriptor, and matching features based on the corner descriptions to obtain the extracted point features; and extracting line-segment features in the images with a line-feature extraction algorithm, describing the extracted segments with the LBD descriptor, and matching line features based on the segment descriptions to obtain the extracted line features.
Specifically, this step includes the following steps:
Step S21: extract the point features in the sequence images, and perform matching and geometric description. The specific process is as follows:
Step S21.1: extract point features from the images with the ORB algorithm (the steps for extracting point features with other algorithms are similar). To ensure that the point features are distributed evenly over the image, the image is divided into a grid before extraction; in each cell, FAST corners are first extracted with an initial threshold, and if extraction fails, a lower minimum threshold is used instead; the thresholds are adjusted according to the results so that a similar number of point features can be extracted in each cell.
First, the FAST corner detector extracts a set of corners from the image, the Harris response of each corner is computed, and the N corners with the largest Harris responses are selected as the best features. An image pyramid is then built, typically with 8 levels: level 0 has scale 1, level 1 has scale 1.2, level 2 has scale 1.2², and so on. Corners are extracted on each of the 8 pyramid images, the pyramid level of each corner is recorded as the point's scale information, and image moments are used to determine the corner's orientation.
For a given corner, the intensity-weighted centroid within a circle of radius r centered at the corner is first computed, and the direction of the vector from the corner to this centroid is taken as the corner's orientation.
With I(x, y) denoting the intensity at pixel (x, y), the moments of the corner patch are

\[ m_{pq} = \sum_{x, y} x^p y^q\, I(x, y). \]

The centroid coordinates are \((x_c, y_c) = \left( \frac{m_{10}}{m_{00}},\; \frac{m_{01}}{m_{00}} \right)\).

The orientation of the corner is \(\theta = \operatorname{atan2}(y_c, x_c)\).
For each extracted point feature, a 5×5 neighborhood is selected as a decision window; the feature is judged valid only when all pixels in the window lie within the pass region of the mask.
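The grid-based extraction with a fallback threshold and the 5×5 mask-validity test can be sketched as follows (grid size and thresholds are assumed values; the Harris ranking, pyramid, and orientation steps described above are omitted here, since OpenCV's ORB handles them internally):

```python
import cv2
import numpy as np

def detect_grid_fast(gray, mask, grid=(8, 8), ini_th=20, min_th=7):
    """FAST corners per grid cell, with a lower fallback threshold and 5x5 mask check."""
    fast_hi = cv2.FastFeatureDetector_create(threshold=ini_th)
    fast_lo = cv2.FastFeatureDetector_create(threshold=min_th)
    h, w = gray.shape
    gh, gw = h // grid[0], w // grid[1]
    keypoints = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = gray[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            kps = fast_hi.detect(cell, None) or fast_lo.detect(cell, None)
            for kp in kps:
                u, v = int(kp.pt[0]) + c * gw, int(kp.pt[1]) + r * gh
                win = mask[max(v - 2, 0):v + 3, max(u - 2, 0):u + 3]
                if win.size == 25 and np.all(win > 0):  # whole 5x5 window in the pass region
                    kp.pt = (float(u), float(v))
                    keypoints.append(kp)
    return keypoints
```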
Step S21.2: generate 256-dimensional binary rBRIEF descriptors to describe the image features around each corner, and perform point-feature matching by computing the Hamming distance between descriptors to estimate the similarity of the corresponding feature points' image appearance.
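A sketch of the descriptor and matching stage (OpenCV's ORB implements the rBRIEF descriptor named above; img1_gray, img2_gray, and mask are assumed inputs):

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000, nlevels=8, scaleFactor=1.2)
kp1, des1 = orb.detectAndCompute(img1_gray, mask)
kp2, des2 = orb.detectAndCompute(img2_gray, mask)

# Hamming distance between 256-bit binary descriptors, with cross-checking
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```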
Step S21.3: describe the position of a point feature in the world coordinate system with its three-dimensional space coordinates \(P = (X, Y, Z)^T\) or homogeneous coordinates \(\tilde{P} = (X, Y, Z, 1)^T\). Homogeneous coordinates make it more convenient to transform points between different coordinate systems.
Further, the method steps of extracting the line features in the multi-angle image data include:
the steps of extracting the line-segment features in the images with a line-feature extraction algorithm, describing the extracted segments with the LBD descriptor, and performing line-feature matching based on the segment descriptions, namely:
extracting structural line segments from the multi-angle image data with a line-feature extraction algorithm;
selecting a preset number of structural line segments, describing the image features of the selected segments with descriptors, and matching the structural line segments according to the image feature descriptions;
describing the straight lines obtained from the matched structural line segments with the four-dimensional orthonormal representation of their Plücker coordinates, obtaining the extracted line features.
Step S22: extract the line features in the sequence images, match them, and geometrically describe their positions in three-dimensional space with a four-dimensional orthonormal expression based on Plücker coordinates. In contrast to general coordinate-transformation approaches, the conversion of line-feature coordinates between world space, camera space, and the image plane is described in detail. The specific process is as follows:
Step S22.1: extract structural line segments from the image with the LSD (Line Segment Detector) method. The segments are filtered by length; the two endpoints p_s, p_e and the midpoint p_m of each segment are taken, and for each of them it is judged whether all points in its 3×3 neighborhood lie within the pass region of the mask described in step S02; the extracted segment is judged a valid feature only when all three points pass.
Step S22.2: after a sufficient number of segments have been extracted (no fewer than 30 in texture-rich regions and no fewer than 10 in texture-poor regions), generate 256-dimensional binary descriptors with the LBD (Line Band Descriptor) based method to describe the image features of the segments extracted from different images, and match them by computing Hamming distances.
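A sketch of the segment extraction and endpoint/midpoint validity test (the minimum length is an assumed value, and an OpenCV build in which createLineSegmentDetector is available is assumed; the LBD description and Hamming matching would follow the same pattern as for point descriptors):

```python
import cv2
import numpy as np

def detect_valid_segments(gray, mask, min_len=30.0):
    """LSD segments filtered by length and the 3x3 mask test at p_s, p_e, p_m."""
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]
    valid = []
    if lines is None:
        return valid
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        if np.hypot(x2 - x1, y2 - y1) < min_len:
            continue
        pts = [(x1, y1), (x2, y2), ((x1 + x2) / 2, (y1 + y2) / 2)]
        ok = True
        for u, v in pts:
            u, v = int(round(u)), int(round(v))
            win = mask[max(v - 1, 0):v + 2, max(u - 1, 0):u + 2]
            if win.size < 9 or not np.all(win > 0):
                ok = False
                break
        if ok:
            valid.append((x1, y1, x2, y2))
    return valid
```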
Step S22.3: line-segment features in the image are described by the homogeneous coordinates of their endpoints, x1(u1, v1, 1) and x2(u2, v2, 1). Straight-line features in space are described with a four-dimensional orthonormal representation based on Plücker coordinates.
Given the homogeneous coordinates \(X_1 = (x_1^T, w_1)^T\) and \(X_2 = (x_2^T, w_2)^T\) of two points in space, the Plücker coordinates of the spatial line through them can be represented by the 6-dimensional vector \(L^T \sim (n^T, v^T)^T\), where

\[ n = x_1 \times x_2, \qquad v = w_1 x_2 - w_2 x_1 . \]

The orthonormal coordinates of the line are then obtained by decomposing the 3×2 matrix \([\,n \mid v\,]\): let

\[ [\,n \mid v\,] = U \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \\ 0 & 0 \end{bmatrix}, \qquad U = \begin{bmatrix} \frac{n}{\|n\|} & \frac{v}{\|v\|} & \frac{n \times v}{\|n \times v\|} \end{bmatrix}, \qquad W = \frac{1}{\sqrt{\sigma_1^2 + \sigma_2^2}} \begin{bmatrix} \sigma_1 & -\sigma_2 \\ \sigma_2 & \sigma_1 \end{bmatrix}, \]

with σ1 = ‖n‖ and σ2 = ‖v‖. Then U ∈ SO(3) and W ∈ SO(2), and the line parameters reduce to the four-dimensional combination \((\theta^T, \theta_W)^T\) of the logarithm-map vector θ = (θ1, θ2, θ3) of the matrix U and the angle θ_W corresponding to W. Each parameter has its own geometric meaning: the ratio σ1/σ2 contained in the matrix W represents the distance d from the coordinate origin O to the line, and the angle θ_W is related to d; the matrix U contains the three-dimensional orientation information of the line L. The geometric relationships corresponding to θ1, θ2, θ3 are:
1. θ1 describes the line rotating while remaining tangent to the circle of radius d centered at O in the OL plane;
2. θ2 describes the line rotating while remaining tangent to the circle of radius d centered at O that is perpendicular to the OL plane and intersects the line at point P;
3. θ3 describes the line rotating about the axis OP. The geometric meaning of each component of the orthonormal line coordinates is illustrated by the change in the line's position as that component varies, as shown in Figs. 3(a) to 3(c).
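A numerical sketch of this representation (pure numpy; the sign convention for v is one common choice, and w = 1 is assumed for both points):

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker line through Euclidean points p1, p2: n = p1 x p2, v = p2 - p1."""
    return np.cross(p1, p2), p2 - p1

def orthonormal_repr(n, v):
    """U in SO(3) and the SO(2) angle of W, from the decomposition above."""
    s1, s2 = np.linalg.norm(n), np.linalg.norm(v)
    w = np.cross(n, v)
    U = np.column_stack([n / s1, v / s2, w / np.linalg.norm(w)])
    theta_w = np.arctan2(s2, s1)  # encodes s1/s2; d = s1 / s2 is the origin-to-line distance
    return U, theta_w

n, v = plucker_from_points(np.array([1.0, 0.0, 2.0]), np.array([2.0, 1.0, 2.0]))
U, theta_w = orthonormal_repr(n, v)
```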
In the subsequent graph-optimization computation, the matrices U and W must be updated according to the increment δθ; the algorithm does this by computing exponential and logarithm maps, updating the matrices U and W as

\[ U \leftarrow U\,\exp\!\big([\delta\theta_{1:3}]_\times\big), \qquad W \leftarrow W \begin{bmatrix} \cos\delta\theta_W & -\sin\delta\theta_W \\ \sin\delta\theta_W & \cos\delta\theta_W \end{bmatrix}. \]
Step S22.4: let \(T_{cw}\) denote the world-to-camera coordinate transformation; the Plücker line \(L_w\) is transformed from the world coordinate system to the camera-frame line \(L_c\) according to

\[ L_c = \mathcal{T}_{cw} L_w, \qquad \mathcal{T}_{cw} = \begin{bmatrix} R_{cw} & [t_{cw}]_\times R_{cw} \\ 0 & R_{cw} \end{bmatrix}, \]

where \(R_{cw} \in SO(3)\) is the transformation rotation matrix, \(t_{cw}\) is the translation vector, \([\,\cdot\,]_\times\) denotes the skew-symmetric matrix of a vector, and \(\mathcal{T}_{cw}\) is the variant form of \(T_{cw}\) in Plücker coordinates.
Then, according to

\[ l' = \mathcal{K}\, n_c, \qquad \mathcal{K} = \begin{bmatrix} f_v & 0 & 0 \\ 0 & f_u & 0 \\ -f_v u_c & -f_u v_c & f_u f_v \end{bmatrix}, \]

the line is transformed from the camera coordinate system to the image-plane line \(l'\), where \(\mathcal{K}\) is the camera's built-in matrix in Plücker coordinates, \((u_c, v_c)\) are the coordinates of the camera's principal optical axis in the image coordinate system, \(f_u\) and \(f_v\) are the camera's focal lengths in the image u and v directions, and \(n_c\) is the n component of the line's Plücker coordinates in the camera coordinate system.
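A sketch of the world-to-camera line transform and the image-line projection, written out from the two formulas above:

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def transform_line(n_w, v_w, R_cw, t_cw):
    """Apply the 6x6 Plücker motion matrix blockwise: L_c = T_cw * L_w."""
    n_c = R_cw @ n_w + skew(t_cw) @ (R_cw @ v_w)
    v_c = R_cw @ v_w
    return n_c, v_c

def project_line(n_c, fu, fv, uc, vc):
    """Image line l' = K * n_c, as equation coefficients (l1, l2, l3)."""
    K_line = np.array([[fv, 0, 0],
                       [0, fu, 0],
                       [-fv * uc, -fu * vc, fu * fv]])
    return K_line @ n_c
```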
In this step, to achieve a better line-feature matching result, the step of matching the structural line segments according to the image feature descriptions includes:
Step 22.21: match structural line segments separately in multiple channels of the image, and mark segments that satisfy the preset matching condition in at least one channel as valid segments, obtaining the first valid-segment set.
For a multi-channel color image, segment matching is performed in each channel separately; a segment is marked as a valid match as long as the matching condition is satisfied in at least one channel. This step makes full use of the information of the color image data in each channel; after this matching, the first valid-segment set is obtained.
Step 22.22: re-match each segment marked as a valid segment with the k-nearest-neighbor method, obtaining the second valid-segment set.
For each match, the K-nearest-neighbor method is used to find the segment whose descriptor distance is second closest to that of the match; only when the difference between the two distances is greater than a set threshold is the match marked as valid, otherwise the match result is discarded. This screening reduces the occurrence of mismatches between similar segments and increases matching accuracy; after this matching, the second valid-segment set is obtained.
Step 22.23: perform bidirectional matching on the valid segments in the second valid-segment set, obtaining the best matching pairs.
The positions of the feature sequence of the frame to be matched and of the newly input frame are swapped, matching is performed in both directions, and only feature pairs that are the best match in both directions are kept. For example, suppose segment l1 exists in frame A, and computation and screening give l2 as the best match for l1 in frame B; after swapping the positions of A and B, the match result {l1, l2} is judged a valid matching pair only when the best match for l2 in frame A is still l1.
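The two screening steps (k-nearest-neighbor distance gap, then bidirectional consistency) can be sketched as follows for binary descriptors; the gap threshold is an assumed value:

```python
import cv2

def filter_matches(des_a, des_b, gap_th=15):
    """Keep only matches that pass the k-NN gap test and are mutually best."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    best_ab = {}
    for pair in bf.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[1].distance - pair[0].distance > gap_th:
            best_ab[pair[0].queryIdx] = pair[0].trainIdx
    # reverse direction: keep only pairs that are best in both directions
    best_ba = {m.queryIdx: m.trainIdx for m in bf.match(des_b, des_a)}
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]
```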
Step S3: perform a preliminary camera pose estimate for each frame of the multi-angle image data, and construct a graph structure from the extracted and matched point features and line features together with the preliminary camera pose estimates.
In this step, using the feature association information obtained by the preceding method, the system is initialized with epipolar geometry and the pose is preliminarily estimated with the EPnP method. Specifically, the relative pose of the initial frames is recovered with epipolar geometry and the system is initialized, mainly through the following steps:
Step S31: determine the relative pose of the initial frames with epipolar geometry.
Point features are extracted from the input image; when the number of point features is sufficient (greater than a threshold τd), the frame is taken as a candidate initial frame and the process continues to the next step, otherwise the frame is skipped and computation continues. After the candidate initial frame is created, features are extracted from the next frame of the image sequence and matched, and the matching results are screened with nearest-neighbor and bidirectional tests. If the number of matched features is also sufficient (greater than τm), the two frames can be regarded as a candidate initial frame pair: the fundamental matrix F is solved with epipolar geometry, the essential matrix E is solved further, and the relative pose of the two camera frames is preliminarily estimated.
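A sketch of this two-view initialization with OpenCV (RANSAC parameters are assumed; pts1 and pts2 are the matched, mask-validated image points):

```python
import cv2

def initialize_pose(pts1, pts2, K):
    """Essential-matrix initialization; t is recovered only up to scale."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, inliers
```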
Step S32: estimate the pose of each frame to be solved relative to known frames through camera motion-state estimation and the EPnP method, obtaining the relative pose of each frame to be solved.
The system is then initialized: the initial frame is added to the system as a keyframe, and the matched feature points and lines in the initial frame are transformed to the world coordinate system and added to the map. Global graph optimization is then performed on the camera poses, point features, and line features with the graph-optimization method to adjust the pose relationships. In actual operation with multi-camera sequence data, the initialization result of camera 0 is used as the reference: according to the camera calibration results obtained in preprocessing, the initialization pose is converted to the Ladybug device pose and mapped to each of the remaining cameras.
Step S33: construct a graph structure with the point features, line features, and camera positions as vertices, based on the projection relationships of the point features and line features.
The pose of a frame to be solved relative to the known frames is roughly estimated through camera motion-state estimation and the EPnP method, and serves as the initial value for graph optimization.
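A sketch of the EPnP initial estimate (OpenCV's SOLVEPNP_EPNP flag; pts3d are already-mapped world points, pts2d their observations in the frame to be solved):

```python
import cv2

def estimate_pose_epnp(pts3d, pts2d, K):
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None,
                                  flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)
    return ok, R, tvec  # world-to-camera rotation and translation
```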
The graph is constructed from the point-line feature matching relationships; it contains multiple point-feature vertices, multiple line-feature vertices, multiple camera vertices (for local or global estimation) or a single camera vertex (for single-frame estimation), and the feature-camera correspondence edges.
Step S4: determine the three-dimensional map according to the extracted point features, line features, and the graph structure.
In this step, camera trajectory tracking is realized and the constructed three-dimensional map is obtained from the point features and line features extracted in steps S2 and S3 above and the graph structure constructed there.
Further, this step also includes optimizing the graph constructed in step S3 above with the G2O tool: the graph-optimization algorithm optimizes the camera's position and attitude information at each captured frame and updates the constructed graph structure, thereby optimizing the three-dimensional world coordinates of the point features and line features in the graph structure and obtaining a more accurate three-dimensional map.
Specifically, the step of constructing the graph structure with the point features, line features, and camera positions as vertices, based on the point-feature and line-feature projection relationships, includes:
adding the line features and point features associated with the camera pose to be solved to the graph structure as vertices;
constructing multi-vertex edges in the G2O open-source library according to the visibility of the point features and line features in the frames to be solved, where each multi-vertex edge encodes the corresponding relative pose relationship between a point feature, a line feature, and a camera;
according to those relationships, computing in each image frame the Jacobian of the point-feature reprojection error with respect to the camera pose and the Jacobian of the line-feature reprojection error with respect to the camera pose;
iteratively solving the camera positions and feature coordinates with an optimization algorithm based on the computed reprojection errors.
The specific steps of pose estimation with the G2O graph structure are as follows:
Step S41: add the line features and point features associated with the camera pose to be solved to the graph as vertices. The vertex structure must contain the feature's geometric position description vector and a method for optimally updating the pose with the increment estimated from the Jacobian solution during optimization. The description vector of a point feature is its three-dimensional space coordinate, updated by vector addition; the description vector of a line feature is its four-dimensional orthonormal coordinate, updated by converting the increment to the special Euclidean group with the exponential map and converting back to the orthonormal description coordinates with the logarithm map.
Step S42: construct multi-vertex edges according to the visibility of the features in the frames to be solved: point feature - line feature - camera. This arrangement is for convenience of solving; since the pose relationships between features are mutually independent, in actual operation empty features are used as padding, i.e., an empty-point/line/camera edge describes a line-feature-camera correspondence, and a point/empty-line/camera edge describes a point-feature-camera correspondence. The edge structure must contain the solution methods for the reprojection error and the Jacobian matrix; the specific steps are as follows:
Step S42.1: in frame k, let the world coordinates of point feature i be \(P_i\), the corresponding image coordinates \(p_{k,i}\), the camera intrinsic matrix K, and the world-to-camera transformation matrix \(T_{k,cw}\); the reprojection error is then \(e_{p_{k,i}} = p_{k,i} - K\, T_{k,cw} P_i\).
The Jacobian of the point-feature reprojection error with respect to the camera pose δξ, in its standard form (with the perturbation ordered as translation before rotation, and \((X', Y', Z')^T = T_{k,cw} P_i\) the point in the camera frame), is:

\[ \frac{\partial e_p}{\partial \delta\xi} = -\begin{bmatrix} \frac{f_u}{Z'} & 0 & -\frac{f_u X'}{Z'^2} & -\frac{f_u X' Y'}{Z'^2} & f_u + \frac{f_u X'^2}{Z'^2} & -\frac{f_u Y'}{Z'} \\ 0 & \frac{f_v}{Z'} & -\frac{f_v Y'}{Z'^2} & -f_v - \frac{f_v Y'^2}{Z'^2} & \frac{f_v X' Y'}{Z'^2} & \frac{f_v X'}{Z'} \end{bmatrix}. \]
Step S42.2: in frame k, the world Plücker coordinates of line feature j are \(L_j\), and the corresponding measured segment \(l_{k,j}\) in that frame is represented by the homogeneous coordinates of its endpoints. The reprojection error of feature line j is

\[ e_{l_{k,j}} = \big( d(x_1, l'),\; d(x_2, l') \big)^T, \qquad d(x, l') = \frac{x^T l'}{\sqrt{l_1^2 + l_2^2}}, \]

where \([\,\cdot\,]_{1\text{-}3}\) denotes the first three dimensions of a vector and d(·) is the point-to-line distance function; \(l'(l_1, l_2, l_3)\) is the projection of the line, expressed by its equation coefficients, from the world coordinate system onto the image; and \(x_1(u_1, v_1, 1)\) and \(x_2(u_2, v_2, 1)\) are the homogeneous coordinates of the actual measured segment endpoints. By the chain rule, the Jacobian of the line-feature reprojection error with respect to the camera pose is

\[ \frac{\partial e_l}{\partial \delta\xi} = \frac{\partial e_l}{\partial l'} \frac{\partial l'}{\partial L_c} \frac{\partial L_c}{\partial \delta\xi}. \]
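The two error terms can be evaluated as follows (numpy sketch reusing transform_line and project_line from the sketch after step S22.4; x1 and x2 are homogeneous endpoint 3-vectors):

```python
import numpy as np

def point_reproj_error(p_obs, P_w, R_cw, t_cw, K):
    """e_p = p_obs - projection of the world point into the image."""
    P_c = R_cw @ P_w + t_cw
    proj = K @ P_c
    return p_obs - proj[:2] / proj[2]

def line_reproj_error(x1, x2, n_w, v_w, R_cw, t_cw, fu, fv, uc, vc):
    """e_l = signed distances of the measured endpoints to the projected line."""
    n_c, _ = transform_line(n_w, v_w, R_cw, t_cw)
    l = project_line(n_c, fu, fv, uc, vc)
    return np.array([x1 @ l, x2 @ l]) / np.hypot(l[0], l[1])
```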
Step S42.3: the robustness of the optimization process is strengthened in two ways: (1) a robust kernel function is set for each edge to reduce the influence of outliers; (2) the optimization is performed in repeated groups. After each group of iterations, the features corresponding to edges whose reprojection error is greater than the threshold th are marked as outliers and those edges are removed; in the next group of iterations only the edges whose error lies within the threshold (the inliers) are kept, so that a more stable pose estimate is obtained. According to the degrees of freedom of the chi-square distribution, the threshold is th_p = 5.991 for point features and th_l = 7.815 for line features. If, after the preliminary pose computation, the number of non-outlier features is sufficient and a certain interval has passed since the last local optimization, local graph optimization is performed to further stabilize the pose-feature relationships and reduce accumulated error.
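The grouped inlier filtering can be sketched as below; the edge objects and the solve callback are hypothetical stand-ins for g2o's edge/optimizer API, and a Huber kernel is assumed as the robust kernel:

```python
import numpy as np

def robust_optimize(edges, solve, n_groups=4, th=5.991):
    """Repeated optimization groups; drop edges whose chi-square error exceeds th."""
    delta = np.sqrt(th)
    def huber_weight(chi2):
        e = np.sqrt(chi2)
        return 1.0 if e <= delta else delta / e  # Huber down-weighting of outliers
    for _ in range(n_groups):
        weights = [huber_weight(e.chi2()) for e in edges]
        solve(edges, weights)                         # one group of iterations
        edges = [e for e in edges if e.chi2() <= th]  # keep only the inliers
    return edges
```

For line edges the same loop would be run with th = 7.815, per the thresholds quoted above.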
While local optimization is in progress, a map search finds past keyframes that share a common field of view with the current keyframe; vertices are constructed from the poses and features of these co-visible keyframes, the corresponding edges are constructed from the visibility relationships and added to the optimization graph, and all are optimized together. After a loop closure is detected, global optimization must also be performed.
Step S43: in the Ladybug multi-camera panoramic system, when estimating a single frame's pose, the pose of each camera relative to the initialized local coordinate system must be solved separately and finally unified into the Ladybug coordinate system; each local camera coordinate is transformed to the device center and the mean is taken, giving the coordinates of the Ladybug device in the scene.
Step S44: apart from the initial frame, which is necessarily a keyframe, the conditions subsequently used to judge whether an image frame is a keyframe are the following: (1) N ordinary frames have passed since the last keyframe was added to the map (N = 9 in this implementation); (2) the number of inlier point/line features obtained after solving lies within a certain range, e.g., less than nf_max (nf_max = 90) and greater than nf_min (nf_min = 70); (3) the ratio of the number of inlier point/line features to the number of matched features of the previous frame is less than r_kf (r_kf = 0.9), and the count is greater than 50. Conditions (2) and (3) indicate that the matched features are decreasing but the frame's solution is still stable. If a frame's solution satisfies condition (1) together with either of (2) or (3), the frame is added to the map as a keyframe. To reduce keyframes repeatedly providing the same feature information, overly dense keyframes are also culled: the other keyframes sharing a line-of-sight range with the current keyframe, and their feature information, are obtained through the co-visibility relationship; if 90% or more of the current keyframe's features can be observed in other keyframes, the frame is a redundant keyframe and is deleted from the map.
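The keyframe rule can be sketched directly from the constants above:

```python
def is_keyframe(frames_since_kf, n_inliers, n_prev_matches,
                N=9, nf_max=90, nf_min=70, r_kf=0.9):
    """Condition (1) together with at least one of (2), (3) from the description above."""
    cond1 = frames_since_kf >= N
    cond2 = nf_min < n_inliers < nf_max
    cond3 = (n_prev_matches > 0
             and n_inliers / n_prev_matches < r_kf
             and n_inliers > 50)
    return cond1 and (cond2 or cond3)
```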
Further, this step also includes: performing bag-of-words loop detection with a dictionary generated by DBoW2 to check whether the camera has returned to a previously visited place, and performing loop-closure correction, thereby imposing constraints on the trajectory to eliminate or reduce accumulated error. The optimized line segments are then clipped, and the sparse feature map of the scene is finally generated.
Step S44.1: based on the co-visibility relationship and the similarity between adjacent frames, screen the adjacent frames in the multi-angle image data to obtain the loop-closure candidate group.
Adjacent frames sharing a common field of view with the current frame are removed through co-visibility screening; among the remaining keyframes in the map, frames containing the same words as the current frame are listed as candidate frames. The number of words each candidate frame shares with the current frame is counted, and 80% of the maximum shared-word count is taken as the screening threshold: all keyframes whose shared-word count exceeds this threshold and whose similarity to the current frame exceeds the minimum similarity between the current frame and its aforementioned adjacent frames are kept. These keyframes are grouped with their 10 preceding adjacent keyframes, the sum of the similarities between each frame in a group and the current frame is computed as the group score, and 75% of the highest group score is taken as a threshold; from every group whose score exceeds the threshold, the keyframe with the highest similarity to the current frame is taken into the candidate set. All frames in the candidate set are subjected to a consistency check (whether they share a common adjacent frame with the current frame), screening out the final loop-closure candidate group.
Step S44.2: the matching regions corresponding to the loop-closure candidate group are examined by the bag-of-words loop-closure detection method; according to the detection result, the covisibility relationship of the current frame is corrected, the coordinate values of the feature points within the matching region are updated, and the position of the loop closure in the world coordinate system is corrected.
The candidate group is screened by the number of matched points, and a sim3 solution is computed from the remaining frames in the candidate group and 3 pairs of matched points in the current frame; that is, the rotation matrix, translation vector, and scale factor between matched points are obtained through the similarity transformation group. Using the resulting sim3 relationship, an approximate matching region is filtered out with the number of matched inliers (Inliers) as the criterion. After the camera poses and features of the matching region are corrected with the sim3 relationship, a search for further matches is performed in the nearby area. If more than 40 points are matched, a loop closure is judged to have been detected and the solving thread is notified to stop inserting new key frames and proceed to the next step; otherwise, the candidate group is cleared and the next detection is awaited.
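For reference, the similarity transformation between two sets of corresponding 3-D points can be estimated in closed form with Umeyama's method; the sketch below is a generic implementation of that standard method, not code taken from the patent.

```python
# Generic Umeyama similarity (sim3) estimation: returns s, R, t with  y ≈ s·R·x + t.
import numpy as np

def umeyama_sim3(x, y):
    """x, y: (N, 3) arrays of corresponding points, N >= 3 and non-degenerate."""
    mx, my = x.mean(axis=0), y.mean(axis=0)
    xc, yc = x - mx, y - my
    cov = yc.T @ xc / len(x)                      # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt                                # rotation matrix
    s = np.trace(np.diag(D) @ S) / xc.var(axis=0).sum()  # scale factor
    t = my - s * R @ mx                           # translation vector
    return s, R, t
```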
Step S44.3: according to the detected loop closure, the common field of view and the connection relationships between past frames in the graph structure are updated.
The covisibility relationship of the current frame is updated according to the loop-closure detection result, the sim3 relationships between the current frame and its adjacent frames are solved, the coordinate values of each feature point in the region are updated according to the loop-closure sim3, and the sim3 relationship is converted into T ∈ SE(3);
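One common way to carry out this conversion (as done, for example, in ORB-SLAM-style loop closing) is to keep the rotation and divide the translation by the scale; a minimal sketch, assuming the convention y = s·R·x + t:

```python
import numpy as np

def sim3_to_se3(s, R, t):
    """Drop the scale from a sim3 (s, R, t) to obtain T = [R | t/s] in SE(3)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t / s
    return T
```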
The position of the loop closure in the world coordinate system is corrected, and then, according to the newly found loop closure, the common field of view and the connection relationships between past key frames in the map are updated. Finally, the graph structure is rebuilt according to the updated relationships: first, the essential part of the graph (Essential Graph) that acquires new connections due to the added loop closure is optimized, and then a global optimization of all key frames and point/line features is performed.
Step S44.4: combining the updated graph structure, the extracted straight lines are clipped using the pre-saved endpoint coordinates of the line features on the multi-angle images, and the clipped line segments are used to determine the three-dimensional map.
When line features are used in graph optimization, in order to reduce the number of description parameters and thus further reduce the amount of computation, and to lessen the influence of line-segment endpoints drifting along their supporting line between frames (caused by image-frame truncation or by the line-segment detection operator), the orthonormal parameters used here describe the position of the straight line on which the segment lies after projection into the world coordinate system. When the sparse map of scene features is constructed, the straight lines are first clipped with the pre-saved endpoint coordinates of the line features on the images, and the clipped segments are used to build the sparse scene map. A schematic diagram of the structure of the local sparse feature map is shown in Figure 4.
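For background, the orthonormal representation referred to here is the standard four-degree-of-freedom parameterization of a 3-D line built from its Plücker coordinates (n, v), where n is the normal of the plane through the line and the origin and v is the line direction; the conversion sketched below is generic textbook material, not code from the patent.

```python
import numpy as np

def pluecker_to_orthonormal(n, v):
    """Map Plücker coordinates (n, v), with n ⊥ v, to the pair (U, W):
    U in SO(3) from a QR decomposition of [n | v], W in SO(2) from the norms."""
    A = np.column_stack([n, v])                  # 3 x 2 matrix [n | v]
    U, R = np.linalg.qr(A, mode='complete')      # U: 3x3 orthogonal, R: 3x2
    # Fix signs so the diagonal of R is non-negative and U stays a rotation.
    for i in range(2):
        if R[i, i] < 0:
            R[i, :] *= -1.0
            U[:, i] *= -1.0
    if np.linalg.det(U) < 0:                     # third column is free: enforce det = +1
        U[:, 2] *= -1.0
    w1, w2 = R[0, 0], R[1, 1]                    # ||n|| and ||v||
    W = np.array([[w1, -w2], [w2, w1]]) / np.hypot(w1, w2)
    return U, W
```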
Exemplary device
This embodiment discloses an information processing device, as shown in FIG. 5, comprising a processor and a storage medium communicatively connected to the processor, wherein the storage medium is adapted to store a plurality of instructions, and the processor is adapted to call the instructions in the storage medium to execute the steps implementing the point-line combined multi-camera visual SLAM method described above.
In addition, the logic instructions in the above-mentioned memory 22 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 22 may be configured to store software programs and computer-executable programs, such as the program instructions or modules corresponding to the methods in the embodiments of the present disclosure. By running the software programs, instructions, or modules stored in the memory 22, the processor 30 executes functional applications and data processing, that is, implements the methods in the above embodiments.
The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 22 may include high-speed random-access memory and may also include non-volatile memory, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code; it may also be a transitory storage medium.
On the other hand, this embodiment also provides a computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the described point-line combined multi-camera visual SLAM method.
Compared with existing feature-based visual SLAM, the method embodiments disclosed in the present invention make comprehensive use of point-line features and of multi-angle data captured by multiple cameras:
1) The six cameras mounted on a Ladybug panoramic device are used to collect 360° scene data simultaneously. Compared with a system that collects data with a single ordinary camera, collecting data with multiple wide-angle cameras acquires more scene information in the same amount of time, and the trajectory solution integrates information from every angle, improving the stability of the system.
2) On the basis of the LBD line-feature matching method, optimization strategies such as multi-channel matching, bidirectional verification, and KNN conditional constraints are added, improving the matching accuracy (a sketch of the bidirectional KNN verification is given after this list).
3) A joint point-and-line feature solution is adopted. Line features are of higher dimension than point features and contain more scene-structure information; adding point and line features to the optimization graph simultaneously for joint solving improves the stability and accuracy of tracking, and the sparse feature map built from line features gives a clearer and more intuitive abstract description of the scene.
Through the above three points, the embodiments of the present invention can obtain better trajectory-tracking and map-construction results.
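As an illustration of the bidirectional verification and KNN ratio constraint mentioned in point 2) above, a matching sketch for binary line descriptors such as LBD might look as follows; the ratio threshold and the use of OpenCV's brute-force Hamming matcher are assumptions, not values from the patent.

```python
# Hypothetical sketch: KNN ratio test plus bidirectional (cross) verification
# for binary descriptors (e.g. LBD); desc1, desc2 are uint8 NumPy arrays.
import cv2

def match_bidirectional(desc1, desc2, ratio=0.75):
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    def one_way(da, db):
        out = {}
        for pair in bf.knnMatch(da, db, k=2):
            # KNN ratio constraint: best match clearly better than second best.
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                out[pair[0].queryIdx] = pair[0].trainIdx
        return out
    fwd, bwd = one_way(desc1, desc2), one_way(desc2, desc1)
    # Keep a pair only if the match agrees in both directions.
    return [(q, t) for q, t in fwd.items() if bwd.get(t) == q]
```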
Given below are, respectively, the results of experiments on single-camera image sequences using the point-line combined feature algorithm compared against the ORB algorithm that uses only point features, the camera parameter settings for the experiments on multi-camera panoramic data, and a comparison of the accuracy of each single-camera trajectory solution in the multi-camera panoramic data experiments.
Table 1
Table 2
Table 3
Table 1 compares the results of experiments on single-camera image sequences using the point-line combined feature algorithm with those of the ORB algorithm that uses only point features. In the table, trajectory length is given in unit lengths, RPE RMSE is the root-mean-square error of the relative pose error (Relative Pose Error), and the percentage is the ratio of the error to the trajectory length, used to measure the accuracy of trajectory tracking.
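For reference, the relative pose error at step i over an interval Δ, and the RMSE of its translational part over m steps, are commonly defined as in the standard TUM benchmark (the patent itself does not spell out the formula):

$$E_i=\bigl(Q_i^{-1}Q_{i+\Delta}\bigr)^{-1}\bigl(P_i^{-1}P_{i+\Delta}\bigr),\qquad \mathrm{RMSE}\bigl(E_{1:m}\bigr)=\Bigl(\frac{1}{m}\sum_{i=1}^{m}\bigl\lVert\operatorname{trans}(E_i)\bigr\rVert^{2}\Bigr)^{1/2}$$

where the Q_i are ground-truth poses, the P_i are estimated poses, and trans(·) extracts the translational component.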
Table 2 shows the results of the experiments using multi-camera panoramic data, in which each camera captures images of size 1232×1024 and the image sequences were acquired with a Ladybug5+ device carried on a dedicated backpack. Table 3 shows the accuracy of each single-camera trajectory solution in the aforementioned multi-camera panoramic data experiments. Taking Tables 2 and 3 together, the accuracy of the trajectory tracked from multi-angle capture is clearly improved over that of a single camera.
In summary, when experiments are run on monocular fisheye camera data, both the number of tracked frames and the trajectory length after adding line features exceed the results of the ORB algorithm using only point features, and the accuracy is also improved.
The present invention proposes a point-line combined multi-camera visual SLAM method, system, and device: multi-angle image data of a target scene are collected; point features and line features in the multi-angle image data are extracted and matched, and the positions of the point and line features in three-dimensional space are obtained; a preliminary camera pose estimate is made for each image frame in the multi-angle image data; a graph structure is built by combining the extracted and matched point features and line features with the preliminary camera pose estimates; and a three-dimensional map is determined from the extracted point features, line features, and the graph structure. This embodiment adopts a joint point-and-line feature solution; since line features carry more information, the stability and accuracy of tracking can be improved, and the sparse feature map built from line features gives a clearer and more intuitive abstract description of the scene.
It can be understood that a person of ordinary skill in the art may make equivalent substitutions or changes according to the technical solutions of the present invention and its inventive concept, and all such changes or substitutions shall fall within the protection scope of the appended claims of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010819166.1A CN112085790A (en) | 2020-08-14 | 2020-08-14 | Point-line combined multi-camera visual SLAM method, equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010819166.1A CN112085790A (en) | 2020-08-14 | 2020-08-14 | Point-line combined multi-camera visual SLAM method, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN112085790A true CN112085790A (en) | 2020-12-15 |
Family
ID=73727925
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010819166.1A Pending CN112085790A (en) | 2020-08-14 | 2020-08-14 | Point-line combined multi-camera visual SLAM method, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112085790A (en) |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112902950A (en) * | 2021-01-21 | 2021-06-04 | 武汉大学 | Novel initial alignment method for MEMS-level IMU in low-speed motion carrier |
| CN113012233A (en) * | 2021-03-05 | 2021-06-22 | 南京翱翔信息物理融合创新研究院有限公司 | Monocular vision positioning method for AR indoor structured scene |
| CN113450412A (en) * | 2021-07-15 | 2021-09-28 | 北京理工大学 | Visual SLAM method based on linear features |
| CN113886402A (en) * | 2021-12-08 | 2022-01-04 | 成都飞机工业(集团)有限责任公司 | Aviation wire harness information integration method based on branches and readable storage medium |
| CN114170366A (en) * | 2022-02-08 | 2022-03-11 | 荣耀终端有限公司 | Three-dimensional reconstruction method based on dotted line feature fusion and electronic equipment |
| CN115311353A (en) * | 2022-08-29 | 2022-11-08 | 上海鱼微阿科技有限公司 | Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system |
| CN116681733A (en) * | 2023-08-03 | 2023-09-01 | 南京航空航天大学 | Near-distance real-time pose tracking method for space non-cooperative target |
| CN116778188A (en) * | 2023-06-28 | 2023-09-19 | 磅客策(上海)智能医疗科技有限公司 | A hair information identification method, system and storage medium |
| CN118823125A (en) * | 2024-09-18 | 2024-10-22 | 人工智能与数字经济广东省实验室(深圳) | An image quality-oriented multi-camera SLAM positioning method and system |
| CN118967819A (en) * | 2024-10-12 | 2024-11-15 | 北京集度科技有限公司 | A camera posture determination method, computer device and program product |
| CN119205921A (en) * | 2024-09-20 | 2024-12-27 | 电子科技大学 | A SLAM backend pose optimization method based on graph neural network |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106909877A (en) * | 2016-12-13 | 2017-06-30 | 浙江大学 | A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously |
| CN110490085A (en) * | 2019-07-24 | 2019-11-22 | 西北工业大学 | The quick pose algorithm for estimating of dotted line characteristic visual SLAM system |
- 2020-08-14 CN CN202010819166.1A patent/CN112085790A/en active Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106909877A (en) * | 2016-12-13 | 2017-06-30 | 浙江大学 | A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously |
| CN110490085A (en) * | 2019-07-24 | 2019-11-22 | 西北工业大学 | The quick pose algorithm for estimating of dotted line characteristic visual SLAM system |
Non-Patent Citations (4)
| Title |
|---|
| ALBERT PUMAROLA et al.: "PL-SLAM: Real-time monocular visual SLAM with points and lines", 2017 IEEE International Conference on Robotics and Automation (ICRA) * |
| LIU KANG: "Research on vision-based indoor real-time localization and mapping technology for mobile robots", China Master's Theses Full-text Database, Information Science and Technology * |
| QIN ZIJIE: "Research on a SLAM system based on a multi-lens combined panoramic camera", China Master's Theses Full-text Database, Basic Sciences * |
| XIE XIAOJIA: "A binocular visual SLAM method based on combined point-line features", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112902950A (en) * | 2021-01-21 | 2021-06-04 | 武汉大学 | Novel initial alignment method for MEMS-level IMU in low-speed motion carrier |
| CN112902950B (en) * | 2021-01-21 | 2022-10-21 | 武汉大学 | An Initial Alignment Method for MEMS-level IMUs in Low-Speed Motion Carriers |
| CN113012233A (en) * | 2021-03-05 | 2021-06-22 | 南京翱翔信息物理融合创新研究院有限公司 | Monocular vision positioning method for AR indoor structured scene |
| CN113450412A (en) * | 2021-07-15 | 2021-09-28 | 北京理工大学 | Visual SLAM method based on linear features |
| CN113886402A (en) * | 2021-12-08 | 2022-01-04 | 成都飞机工业(集团)有限责任公司 | Aviation wire harness information integration method based on branches and readable storage medium |
| CN113886402B (en) * | 2021-12-08 | 2022-03-15 | 成都飞机工业(集团)有限责任公司 | Aviation wire harness information integration method based on branches and readable storage medium |
| CN114170366A (en) * | 2022-02-08 | 2022-03-11 | 荣耀终端有限公司 | Three-dimensional reconstruction method based on dotted line feature fusion and electronic equipment |
| CN114170366B (en) * | 2022-02-08 | 2022-07-12 | 荣耀终端有限公司 | 3D reconstruction method and electronic device based on point-line feature fusion |
| CN115311353A (en) * | 2022-08-29 | 2022-11-08 | 上海鱼微阿科技有限公司 | Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system |
| CN115311353B (en) * | 2022-08-29 | 2023-10-10 | 玩出梦想(上海)科技有限公司 | A multi-sensor multi-handle controller graph optimized tightly coupled tracking method and system |
| CN116778188A (en) * | 2023-06-28 | 2023-09-19 | 磅客策(上海)智能医疗科技有限公司 | A hair information identification method, system and storage medium |
| CN116681733A (en) * | 2023-08-03 | 2023-09-01 | 南京航空航天大学 | Near-distance real-time pose tracking method for space non-cooperative target |
| CN116681733B (en) * | 2023-08-03 | 2023-11-07 | 南京航空航天大学 | A short-distance real-time pose tracking method for non-cooperative targets in space |
| CN118823125A (en) * | 2024-09-18 | 2024-10-22 | 人工智能与数字经济广东省实验室(深圳) | An image quality-oriented multi-camera SLAM positioning method and system |
| CN118823125B (en) * | 2024-09-18 | 2024-12-24 | 人工智能与数字经济广东省实验室(深圳) | Multi-camera SLAM positioning method and system for image quality guidance |
| CN119205921A (en) * | 2024-09-20 | 2024-12-27 | 电子科技大学 | A SLAM backend pose optimization method based on graph neural network |
| CN119205921B (en) * | 2024-09-20 | 2025-09-23 | 电子科技大学 | A SLAM backend pose optimization method based on graph neural network |
| CN118967819A (en) * | 2024-10-12 | 2024-11-15 | 北京集度科技有限公司 | A camera posture determination method, computer device and program product |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112085790A (en) | Point-line combined multi-camera visual SLAM method, equipment and storage medium | |
| CN111462135B (en) | Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation | |
| CN111258313B (en) | Multi-sensor fusion SLAM system and robot | |
| CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
| CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
| CN106683173B (en) | A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching | |
| CN110176032B (en) | Three-dimensional reconstruction method and device | |
| Mueggler et al. | Continuous-time trajectory estimation for event-based vision sensors | |
| CN103106688B (en) | Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering | |
| CN110782494A (en) | Visual SLAM method based on point-line fusion | |
| CN108615244B (en) | An Image Depth Estimation Method and System Based on CNN and Depth Filter | |
| CN111144349B (en) | Indoor visual relocation method and system | |
| CN110223348A (en) | Robot scene adaptive bit orientation estimation method based on RGB-D camera | |
| CN117197333A (en) | Space target reconstruction and pose estimation method and system based on multi-view vision | |
| CN107843251B (en) | Pose Estimation Methods for Mobile Robots | |
| CN108986037A (en) | Monocular vision odometer localization method and positioning system based on semi-direct method | |
| CN111127524A (en) | A method, system and device for trajectory tracking and three-dimensional reconstruction | |
| CN110097584A (en) | The method for registering images of combining target detection and semantic segmentation | |
| CN110807809A (en) | Light-weight monocular vision positioning method based on point-line characteristics and depth filter | |
| CN112767482B (en) | Indoor and outdoor positioning method and system with multi-sensor fusion | |
| WO2022156755A1 (en) | Indoor positioning method and apparatus, device, and computer-readable storage medium | |
| CN111325828B (en) | Three-dimensional face acquisition method and device based on three-dimensional camera | |
| Frohlich et al. | Absolute pose estimation of central cameras using planar regions | |
| CN112419497A (en) | Monocular vision-based SLAM method combining feature method and direct method | |
| CN112767546B (en) | Binocular image-based visual map generation method for mobile robot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201215 |