CN116772834A - A mapping and positioning method for automatic inspection robots in open outdoor scenes - Google Patents
- Publication number: CN116772834A
- Application number: CN202310617859.6A
- Authority: CN (China)
- Prior art keywords: data, point cloud, laser, robot, map
- Prior art date: 2023-05-29
- Legal status: Pending (the status is an assumption and is not a legal conclusion)
Description
Technical Field
The invention relates to the technical field of mobile robots, and in particular to a mapping and positioning method for an automatic inspection robot in open outdoor scenes.
Background Art
Traditional manual inspection is inefficient: it consumes considerable time and labor, is prone to omissions and misjudgments, and cannot guarantee the accuracy and completeness of inspections. Moreover, manual inspection of some hazardous areas carries serious safety risks. Developing autonomous inspection robots has therefore become a trend. The autonomous motion of a mobile robot depends on map information of the target environment, and to ensure the safety of autonomous navigation, certain requirements are placed on the robot's positioning accuracy within that environment.

The existing positioning sources for mobile robots mainly include the Global Navigation Satellite System (GNSS), Real-Time Kinematic (RTK) differential positioning, and the Inertial Navigation System (INS). However, satellite-based positioning is easily affected by the environment, and its accuracy degrades sharply in scenes such as high-rise buildings, forests, and mountains, while an inertial navigation system used alone cannot meet the demand for high-precision positioning.
Summary of the Invention
In view of this, an object of the embodiments of the present application is to provide a mapping and positioning method for an automatic inspection robot in open outdoor scenes, which mitigates the inaccurate positioning that a single positioning modality tends to produce.

To achieve the above technical objective, the present application adopts the following technical solution:

An embodiment of the present application provides a mapping and positioning method for an automatic inspection robot in an open outdoor scene, applied to a robot that performs automatic inspection. The method includes:

obtaining a first data set collected by the robot's multiple sensors in the open outdoor scene, the first data set including laser point cloud data, IMU data, and GPS data;

constructing a three-dimensional point cloud map of the open outdoor scene from the laser point cloud data, IMU data, and GPS data in the first data set by means of a preset LIO-SAM mapping algorithm, and using the three-dimensional point cloud map as a pcd global map;

obtaining a second data set collected by the multiple sensors when the robot travels through the open outdoor scene again, the second data set including laser point cloud data, IMU data, odometry data, and GPS data;

determining, based on the pcd global map and the second data set, target position information of the robot within the pcd global map.
In some optional implementations, obtaining the first data set collected by the robot's multiple sensors in the open outdoor scene includes:

obtaining the laser point cloud data collected in the open outdoor scene by the lidar among the multiple sensors, and converting the laser point cloud data into the velodyne point cloud format;

obtaining the IMU data collected in the open outdoor scene by the inertial measurement unit among the multiple sensors;

obtaining the GPS data collected in the open outdoor scene by the GPS module among the multiple sensors;

obtaining an extrinsic rotation matrix based on the laser point cloud data and the IMU data;

using the laser point cloud data in velodyne point cloud format, the IMU data, the GPS data, and the extrinsic rotation matrix as the first data set.
In some optional implementations, constructing the three-dimensional point cloud map of the open outdoor scene from the laser point cloud data, IMU data, and GPS data in the first data set by means of the preset LIO-SAM mapping algorithm includes:

performing de-distortion (motion compensation) on the laser point cloud data to obtain de-distorted laser point cloud data;
performing feature extraction on the de-distorted laser point cloud data, determining the curvature of each laser point according to a curvature formula, and taking laser points whose curvature is greater than a preset threshold as corner points of the environment point cloud and laser points whose curvature is less than the preset threshold as planar points. Reconstructed here in the standard LOAM-style form consistent with the symbol definitions below, the curvature formula is:

$$c = \frac{1}{|S| \cdot \left\| X^{L}_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X^{L}_{(k,i)} - X^{L}_{(k,j)} \right) \right\|$$

where c is the curvature; S is the set of consecutive points around i returned by the lidar in the multiple sensors in the same scan; i denotes a laser point; $X^{L}_{(k,i)}$ denotes the position of laser point i in the lidar coordinate system; L denotes the lidar coordinate system; k is the scan index; and j indexes laser points near i;
constructing a corresponding IMU pre-integration factor based on the IMU data in the first data set;

converting the GPS data in the first data set into GPS data in a Cartesian coordinate system, and performing linear interpolation at the timestamps of the lidar to obtain a GPS factor;

constructing, based on Euclidean distance and according to the corner points, the planar points, the IMU pre-integration factor, and the GPS factor, a loop-closure detection factor for loop detection, and obtaining a corresponding pcd file, the pcd file including the corner points, the planar points, the mapping trajectory, and the three-dimensional point cloud map.
In some optional implementations, determining the target position information of the robot in the pcd global map based on the pcd global map and the second data set includes:

performing point cloud registration between a target point cloud and a source point cloud using a preset normal distributions transform algorithm to obtain first position information of the robot, where the target point cloud is the laser point cloud data in the pcd global map and the source point cloud is the laser point cloud data in the second data set;

fusing, by means of a preset extended Kalman filter algorithm, the first position information with the IMU data, GPS data, and odometry data in the second data set, and outputting global positioning data;

determining the target position information of the robot in the open outdoor scene from the global positioning data and the odometry data published by the robot's chassis.

In some optional implementations, the source point cloud is obtained by voxel-downsampling the raw point cloud data collected by the lidar among the multiple sensors and filtering it under preset filtering conditions;

the target point cloud is obtained by voxel-downsampling the laser point cloud data in the pcd global map and filtering it under the same preset filtering conditions.
In some optional implementations, performing point cloud registration between the target point cloud and the source point cloud using the preset normal distributions transform algorithm to obtain the first position information of the robot includes:

rasterizing the target point cloud into cells of a specified size; for the k points $\{x_1, \dots, x_k\}$ falling in a given cell, computing the normal distribution parameters of the target point cloud, where the mean μ and covariance matrix Σ of a cell are computed (in the standard NDT formulation) as:

$$\mu = \frac{1}{k}\sum_{j=1}^{k} x_j \tag{1}$$

$$\Sigma = \frac{1}{k-1}\sum_{j=1}^{k} (x_j - \mu)(x_j - \mu)^{T} \tag{2}$$

computing the probability density of a single source point $x'$ falling into a cell:

$$p(x') \propto \exp\!\left( -\frac{(x'-\mu)^{T}\,\Sigma^{-1}\,(x'-\mu)}{2} \right) \tag{3}$$

denoting the transformation parameters between the two point cloud frames as P, computing for every point of the source point cloud the probability density of that point falling into its cell and summing, which yields the matching confidence score(P) between the target point cloud and the source point cloud:

$$\mathrm{score}(P) = \sum_{i=1}^{n} \exp\!\left( -\frac{(x'_i-\mu_i)^{T}\,\Sigma_i^{-1}\,(x'_i-\mu_i)}{2} \right) \tag{4}$$

where $x'_i$ is source point $x_i$ transformed by P, and $\mu_i$, $\Sigma_i$ are the parameters of the cell containing $x'_i$;

minimizing $-\mathrm{score}(P)$ (i.e., maximizing the matching confidence) by a preset Newton iteration to obtain the optimal value of the transformation parameters P; with $q_i = x'_i - \mu_i$ and $s_i = \exp(-q_i^{T}\Sigma_i^{-1}q_i/2)$, the Jacobian g and Hessian H of the Newton iteration take the standard NDT forms:

$$g_j = \sum_{i=1}^{n} s_i\, q_i^{T}\Sigma_i^{-1}\frac{\partial q_i}{\partial p_j} \tag{5}$$

$$H_{jk} = \sum_{i=1}^{n} s_i\!\left[ q_i^{T}\Sigma_i^{-1}\frac{\partial^2 q_i}{\partial p_j\,\partial p_k} + \left(\frac{\partial q_i}{\partial p_k}\right)^{\!T}\!\Sigma_i^{-1}\frac{\partial q_i}{\partial p_j} - \left(q_i^{T}\Sigma_i^{-1}\frac{\partial q_i}{\partial p_j}\right)\!\left(q_i^{T}\Sigma_i^{-1}\frac{\partial q_i}{\partial p_k}\right) \right] \tag{6}$$

updating the transformation parameters P according to the Jacobian g and the Hessian H:

$$\Delta P = -H^{-1}g \tag{7}$$

$$P \leftarrow P + \Delta P \tag{8}$$

iterating formulas (3) through (8) until the transformation parameters P converge to within a preset tolerance, so that the joint probability density of the source point cloud and the target point cloud is maximized, thereby obtaining the first position information.
In some optional implementations, fusing the first position information with the IMU data, GPS data, and odometry data in the second data set by means of the preset extended Kalman filter algorithm, and outputting the global positioning data, includes:

dividing the first position information and the IMU data, GPS data, and odometry data in the second data set into a first group of data representing Pose-type data and a second group of data representing Twist-type data;

predicting, from the first and second groups of data via the extended Kalman filter algorithm, the robot's current state estimate $\bar{x}_t$ and state covariance $\bar{\Sigma}_t$ (reconstructed here in the standard EKF form):

$$\bar{x}_t = f(x_{t-1}, u_t)$$

$$\bar{\Sigma}_t = F_t\,\Sigma_{t-1}\,F_t^{T} + Q_t$$

where f is the nonlinear function obtained from the motion and state equations, $F_t$ is the motion transition matrix, and $Q_t$ is the noise covariance; then updating the Kalman gain K and the posterior distribution and computing the final Kalman estimate $\hat{x}_t$:

$$K_t = \bar{\Sigma}_t G_t^{T}\left(G_t\,\bar{\Sigma}_t\,G_t^{T} + R_t\right)^{-1}$$

$$\hat{x}_t = \bar{x}_t + K_t\left(z_t - g(\bar{x}_t)\right)$$

$$\Sigma_t = \left(I - K_t G_t\right)\bar{\Sigma}_t$$

where $G_t$ is the observation matrix, $R_t$ is the measurement covariance matrix, $z_t$ is the measurement assembled from the two data groups, g is the nonlinear observation equation, and $\hat{x}_t$ is the output global positioning data.
In some optional implementations, the method further includes:

converting the three-dimensional point cloud map into a two-dimensional grid map, the two-dimensional grid map being used for path planning of the robot.

In some optional implementations, converting the three-dimensional point cloud map into the two-dimensional grid map includes:

performing pass-through filtering on the laser point cloud data in the three-dimensional point cloud map based on a preset height range to obtain filtered first laser point cloud data;

computing a neighbor count for each laser point in the first laser point cloud data, and filtering out the laser points whose neighbor count is less than a preset threshold to obtain second laser point cloud data, where the neighbor count of a laser point is the total number of laser points within a specified radius of that point, excluding the point itself;

converting the new three-dimensional point cloud map corresponding to the second laser point cloud data into a two-dimensional grid map, and erasing noise in the two-dimensional grid map with a denoising tool.
The invention adopting the above technical solution has the following advantages:

In the technical solution provided by the present application, when building the map the robot uses the LIO-SAM algorithm to combine laser point cloud data, IMU data, GPS data, and other data to create a three-dimensional point cloud map, which is conducive to obtaining a high-precision map. In addition, during subsequent positioning, the robot fuses the newly collected laser point cloud data, IMU data, odometry data, and GPS data against the high-precision three-dimensional point cloud map. This enriches the positioning modalities, improves the robot's positioning accuracy, and mitigates the inaccurate positioning that a single positioning modality tends to produce.
Description of Drawings

The present application can be further illustrated by the non-limiting examples given in the accompanying drawings. It should be understood that the following drawings only show certain embodiments of the present application and should therefore not be regarded as limiting its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
Figure 1 is a schematic flowchart of a mapping and positioning method for an automatic inspection robot in an open outdoor scene provided by an embodiment of the present application.

Figure 2 is a schematic diagram of a two-dimensional grid map obtained by mapping an open outdoor grassland scene, provided by an embodiment of the present application.
Detailed Description

The present application is described in detail below with reference to the drawings and specific embodiments. Note that in the drawings and the description, similar or identical parts use the same reference numerals, and implementations not shown or described in the drawings take forms known to persons of ordinary skill in the art. In the description of the present application, the terms "first", "second", and the like are used only to distinguish the descriptions and are not to be understood as indicating or implying relative importance.
Referring to Figure 1, the present application provides a mapping and positioning method for an automatic inspection robot in open outdoor scenes, referred to below simply as the mapping and positioning method. The method can be applied to a robot with an automatic inspection function, with the robot executing or implementing each step of the method.

The robot may include a processing module, a storage module, and multiple sensors. The multiple sensors may include a lidar, an inertial measurement unit (IMU), a Global Positioning System (GPS) module, and an odometer. The processing module may be used for mapping or fused positioning based on the data collected by the multiple sensors. The storage module may store the created maps and the data collected by the multiple sensors.
In this embodiment, the mapping and positioning method may include the following steps:

Step 110: obtain a first data set collected by the robot's multiple sensors in the open outdoor scene, the first data set including laser point cloud data, IMU data, and GPS data.

Step 120: construct a three-dimensional point cloud map of the open outdoor scene from the laser point cloud data, IMU data, and GPS data in the first data set by means of a preset LIO-SAM mapping algorithm, and use the three-dimensional point cloud map as a pcd global map.

Step 130: obtain a second data set collected by the multiple sensors when the robot travels through the open outdoor scene again, the second data set including laser point cloud data, IMU data, odometry data, and GPS data.

Step 140: determine, based on the pcd global map and the second data set, target position information of the robot within the pcd global map.
Each step of the mapping and positioning method is detailed below.

Before step 110 is executed, the robot's system and multiple sensors need to be deployed. The deployment may include:

deploying the ROS robot operating system on Ubuntu, and testing topics, services, and other related communication;

mounting a 16-line lidar on the robot body, connecting the power cable of a 12 V DC power box and the network cable for data transmission, and configuring the relevant network-segment information;

adding the inertial measurement unit, calibrating its intrinsic parameters, downloading the driver, and communicating with ROS over a serial port;

deploying a high-precision MEMS integrated navigation system to calibrate GPS data in an open, unobstructed outdoor environment;

initializing the robot chassis system, powering the robot's other modules (such as the motors and the processing module) from the chassis supply, connecting the chassis network cable, and testing whether chassis communication is normal. Once the robot has completed the deployment of the system and the sensors and has initialized the system, the operation of step 110 can be performed.

In step 110, the frequency at which each sensor collects the first data set, and the second data set described below, can be set flexibly according to the actual situation.

The robot can build and update the map from the first data set collected in real time. It can likewise perform fused positioning, and positioning updates, from the second data set collected in real time.
In this embodiment, step 110 of obtaining the first data set collected by the robot's multiple sensors in the open outdoor scene may include:

Step 111: obtain the laser point cloud data collected in the open outdoor scene by the lidar among the multiple sensors, and convert the laser point cloud data into the velodyne point cloud format.

Step 112: obtain the IMU data collected in the open outdoor scene by the inertial measurement unit among the multiple sensors.

Step 113: obtain the GPS data collected in the open outdoor scene by the GPS module among the multiple sensors.

Step 114: obtain an extrinsic rotation matrix based on the laser point cloud data and the IMU data.

Step 115: use the laser point cloud data in velodyne point cloud format, the IMU data, the GPS data, and the extrinsic rotation matrix as the first data set.
In step 112, the raw IMU data collected by the inertial measurement unit needs a coordinate transformation into the lidar coordinate system. The IMU and lidar frames typically differ by 90° about the Z axis, so the transformation can be achieved simply by rotating the IMU's yaw by 90°. Both the IMU data and the laser point cloud data here are calibrated data.
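A minimal sketch of this frame alignment, assuming the 90° yaw offset applies to the specific hardware; the function name is illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Assumed extrinsic: the IMU and lidar frames differ by a 90-degree yaw.
R_lidar_imu = Rotation.from_euler('z', 90, degrees=True)

def imu_to_lidar(accel_imu, gyro_imu):
    """Rotate raw IMU acceleration and angular-rate vectors into the lidar frame."""
    return R_lidar_imu.apply(accel_imu), R_lidar_imu.apply(gyro_imu)
```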
In step 113, after downloading the GPS data-reading source code, the GPS serial port number can be fixed and the port number and baud rate in the code modified. The baud rate should be chosen to preserve data integrity while maintaining throughput: if it is set too high, data may be lost or arrive incomplete during transmission; if it is set too low, transmission slows down and efficiency suffers.
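For example, reading the receiver with pyserial might look like the following sketch; the device path and baud rate are assumptions that must match the actual hardware:

```python
import serial  # pyserial

# Hypothetical device path; the baud rate must match the receiver's configured
# output rate, otherwise NMEA sentences arrive truncated or garbled.
gps = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1.0)
line = gps.readline().decode('ascii', errors='replace')  # e.g. one $GNGGA sentence
```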
In step 114, the extrinsic rotation matrix can be created from the calibrated laser point cloud data and IMU data.
In this embodiment, step 120 of constructing the three-dimensional point cloud map of the open outdoor scene from the laser point cloud data, IMU data, and GPS data in the first data set by means of the preset LIO-SAM mapping algorithm may include:

Step 121: perform de-distortion on the laser point cloud data to obtain de-distorted laser point cloud data.

Step 122: perform feature extraction on the de-distorted laser point cloud data, determine the curvature of each laser point according to the curvature formula, and take laser points whose curvature is greater than a preset threshold as corner points of the environment point cloud and laser points whose curvature is less than the preset threshold as planar points, the curvature formula being (as in the Summary above):

$$c = \frac{1}{|S| \cdot \left\| X^{L}_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X^{L}_{(k,i)} - X^{L}_{(k,j)} \right) \right\|$$

where c is the curvature; S is the set of consecutive points around i returned by the lidar in the same scan; i denotes a laser point; $X^{L}_{(k,i)}$ denotes the position of laser point i in the lidar coordinate system; L denotes the lidar coordinate system; k is the scan index; and j indexes laser points near i.
Step 123: construct the corresponding IMU pre-integration factor based on the IMU data in the first data set, the IMU pre-integration factor being used to provide prior pose information for the lidar factor.

Step 124: convert the GPS data in the first data set into GPS data in a Cartesian coordinate system, and perform linear interpolation at the timestamps of the lidar to obtain the GPS factor (a coordinate-conversion sketch follows this step).
Step 125: based on Euclidean distance and according to the corner points, the planar points, the IMU pre-integration factor, and the GPS factor, construct a loop-closure detection factor for loop detection, and obtain the corresponding pcd file (pcd is a file format for storing point cloud data), the pcd file including the corner points, the planar points, the mapping trajectory, and the three-dimensional point cloud map (i.e., the pcd global map).
Understandably, during mapping in an open, unobstructed outdoor environment, engineers can remotely drive the robot across open grassland to test its operation in that environment. During the test, each sensor of the robot can collect data in real time, and the positioning data collected in each frame forms the first data set.
In step 121, the robot can compute the motion of the lidar during acquisition and compensate each laser point in every frame with a rotation and translation according to the point's relative time, thereby de-distorting the laser point cloud data.
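A minimal sketch of such motion compensation, assuming start/end poses of the sweep are available (e.g. from IMU integration); the interpolation scheme and names are illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_sweep(points, point_times, R_start, R_end, t_start, t_end):
    """Undistort one lidar sweep into the frame at the start of the sweep.

    points: (N, 3); point_times: (N,) relative times in [0, 1] within the sweep;
    R_start/R_end (scipy Rotations) and t_start/t_end ((3,) arrays) are the
    sensor poses at sweep begin/end.
    """
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([R_start, R_end]))
    R_i = slerp(point_times)                                 # per-point rotation
    t_i = t_start + np.outer(point_times, t_end - t_start)   # per-point translation
    world = R_i.apply(points) + t_i                          # pose at capture time
    return R_start.inv().apply(world - t_start)              # back to sweep start
```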
In step 122, feature extraction can be understood as extracting corner points and planar points from the laser points: laser points whose curvature exceeds the preset threshold become corner points of the environment point cloud, and those below it become planar points. The curvature formula can be read as: curvature = the average, over nearby points, of (the distance difference between the current laser point and a nearby laser point / the range of the current laser point).
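As a minimal illustration of this formula (not the patent's implementation), the per-point curvature of one ordered scan line can be computed with numpy; the window size and threshold are assumed values:

```python
import numpy as np

def scan_curvature(points, half_window=5):
    """LOAM-style curvature per point over one ordered scan line.

    points: (N, 3) laser points in the lidar frame, ordered by scan angle.
    Border points get +inf so they are never selected as features.
    """
    n = len(points)
    ranges = np.linalg.norm(points, axis=1)              # ||X^L_(k,i)||
    c = np.full(n, np.inf)
    for i in range(half_window, n - half_window):
        nbrs = np.r_[points[i - half_window:i], points[i + 1:i + 1 + half_window]]
        diff = (points[i] - nbrs).sum(axis=0)            # sum over j in S
        c[i] = np.linalg.norm(diff) / (len(nbrs) * ranges[i])
    return c

# Points with c above a threshold become corner features, the rest planar ones:
# corners = points[scan_curvature(points) > 0.1]
```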
In this embodiment, the various factors are the processed data of the respective sensors. Their functions are as follows (a factor-graph sketch follows the list):

IMU pre-integration factor: associates the IMU measurements with the robot's motion state to improve the accuracy of state estimation. It includes the robot's acceleration and angular-velocity measurements together with timestamp information.

Lidar factor: associates the lidar measurements with the robot's motion state for mapping. It includes the lidar point cloud data and the pose information related to the robot's position state.

GPS factor: associates the GPS module's measurements with the robot's motion state to provide global positioning information. It includes the GPS position and timestamp information, and the pose information related to the robot's position state.

Loop-closure detection factor: detects loop closures in the map and optimizes the map accordingly. It includes the relative transformation between two poses and the covariance between them.
In step 125, mapping is accomplished with the LIO-SAM mapping algorithm, in which loop-closure detection is realized through scan-to-map matching. The specific steps may be as follows (a sketch of the ICP refinement step follows the list):

based on the point cloud data collected at different times in the first data set, match the point cloud of the current frame against the existing map point cloud to obtain an initial pose estimate;

refine the initial pose with the ICP algorithm to obtain a more accurate pose estimate;

determine which earlier frames form a loop with the current frame; if the loop-closure conditions are satisfied, mark those frames and the current frame as a loop closure and add the loop-closure constraint to the optimization problem;

jointly optimize all constraints with an optimization algorithm to obtain the optimal map and robot trajectory, thereby producing the three-dimensional point cloud map.
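The ICP refinement step could, for example, use Open3D's registration routine; the correspondence distance is an assumed value, and the target cloud needs normals for the point-to-plane variant:

```python
import open3d as o3d

# source/target: o3d.geometry.PointCloud; T_init: 4x4 initial guess from the
# scan-to-map match. target.estimate_normals() must have been called first.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=1.0, init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
T_refined = result.transformation  # accept as a loop closure if result.fitness is high
```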
In step 130, the second data set is the collection of data gathered by the robot's multiple sensors when, after the robot has already stored the three-dimensional point cloud map of the open outdoor scene, it travels through that scene again. The second data set provides the data support for the robot's fused positioning.
In this embodiment, step 140 of determining the target position information of the robot in the pcd global map based on the pcd global map and the second data set may include:

Step 141: perform point cloud registration between the target point cloud and the source point cloud using a preset normal distributions transform (NDT) algorithm to obtain the first position information of the robot, where the target point cloud is the laser point cloud data in the pcd global map and the source point cloud is the laser point cloud data in the second data set.

Step 142: fuse, by means of a preset extended Kalman filter (EKF) algorithm, the first position information with the IMU data, GPS data, and odometry data in the second data set, and output global positioning data.

Step 143: determine the target position information of the robot in the open outdoor scene from the global positioning data and the odometry data published by the robot's chassis.
Understandably, in step 141 the robot can register point clouds with the normal distributions transform: it preloads the previously built pcd global map, takes the point cloud data of the pcd global map as the target point cloud, matches it frame by frame against the source point cloud input from the lidar to obtain the optimal transformation, and finally outputs the robot's position as the first position information.

In step 141, both the target point cloud and the source point cloud are preprocessed data. The preprocessing is voxel downsampling of the raw point cloud collected by the lidar and of the laser point cloud in the pcd global map, i.e., setting a specified sampling density, followed by filtering of both clouds. This preprocessing reduces point cloud density while retaining most features of the data, which significantly improves processing speed.

Understandably, the source point cloud is obtained by voxel-downsampling the raw point cloud collected by the lidar among the multiple sensors and filtering it under preset filtering conditions, and the target point cloud is obtained by voxel-downsampling the laser point cloud of the pcd global map and filtering it under the same conditions. The preset filtering conditions can be set flexibly according to the actual situation.
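One way to realize this preprocessing with Open3D; the voxel size and the radius-outlier parameters stand in for the unspecified "preset filtering conditions":

```python
import open3d as o3d

def preprocess(cloud, voxel=0.5, radius=1.0, min_neighbors=5):
    """Voxel downsampling followed by radius-outlier filtering."""
    down = cloud.voxel_down_sample(voxel_size=voxel)
    filtered, _ = down.remove_radius_outlier(nb_points=min_neighbors, radius=radius)
    return filtered

# source = preprocess(live_scan); target = preprocess(pcd_global_map)
```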
In this embodiment, step 141 may include:

rasterizing the target point cloud into cells of a specified size; for the k points $\{x_1, \dots, x_k\}$ falling in a given cell, computing the normal distribution parameters of the target point cloud, where the target point cloud and the source point cloud serve as the two point cloud frames S1 and S2 respectively, and the mean μ and covariance matrix Σ of a cell are computed as:

$$\mu = \frac{1}{k}\sum_{j=1}^{k} x_j \tag{1}$$

$$\Sigma = \frac{1}{k-1}\sum_{j=1}^{k} (x_j - \mu)(x_j - \mu)^{T} \tag{2}$$

computing the probability density of a single source point $x'$ falling into each cell:

$$p(x') \propto \exp\!\left( -\frac{(x'-\mu)^{T}\,\Sigma^{-1}\,(x'-\mu)}{2} \right) \tag{3}$$

denoting the transformation parameters between the two frames as P, computing for every point of the source point cloud the probability density of that point falling into its cell and summing, which yields the matching confidence score(P) between the target point cloud and the source point cloud:

$$\mathrm{score}(P) = \sum_{i=1}^{n} \exp\!\left( -\frac{(x'_i-\mu_i)^{T}\,\Sigma_i^{-1}\,(x'_i-\mu_i)}{2} \right) \tag{4}$$

where n is the total number of points in the source point cloud and i is an integer from 1 to n;

minimizing $-\mathrm{score}(P)$ (i.e., maximizing the matching confidence) by the preset Newton iteration to obtain the optimal value of the transformation parameters P, with the Jacobian g and Hessian H given by formulas (5) and (6) above;

updating the transformation parameters P according to the Jacobian g and the Hessian H:

$$\Delta P = -H^{-1}g \tag{7}$$

$$P \leftarrow P + \Delta P \tag{8}$$

iterating formulas (3) through (8) over every laser point of the source point cloud until the updated P converges, which solves for the optimal transformation between the two frames; after the iteration the joint probability density of the source and target point clouds is maximized and the best match between the two clouds is achieved, yielding the first position information.
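A compact numpy sketch of this registration loop; to keep it short, only the three translation parameters are optimized (full NDT also optimizes the rotation), and the cell size, point threshold, and regularization constants are assumptions:

```python
import numpy as np

def ndt_cells(target, cell=2.0):
    """Per-cell mean and inverse covariance of the target cloud (formulas 1-2)."""
    cells = {}
    for key, p in zip(map(tuple, np.floor(target / cell).astype(int)), target):
        cells.setdefault(key, []).append(p)
    out = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:  # need enough points for a stable covariance
            cov = np.cov(pts.T) + 1e-3 * np.eye(3)  # regularized
            out[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return out

def newton_step(P, source, cells, cell=2.0):
    """One update ΔP = -H⁻¹g (formulas 5-8) for translation-only P."""
    g, H = np.zeros(3), 1e-6 * np.eye(3)  # tiny H term keeps the solve stable
    for x in source + P:
        c = cells.get(tuple(np.floor(x / cell).astype(int)))
        if c is None:
            continue
        mu, Lam = c
        q = x - mu
        s = np.exp(-0.5 * q @ Lam @ q)       # per-point density (formula 3)
        Lq = Lam @ q                         # dq/dP = I for pure translation
        g += s * Lq                          # gradient of -score
        H += s * (Lam - np.outer(Lq, Lq))    # Hessian of -score
    return P - np.linalg.solve(H, g)

cells = ndt_cells(target_points)             # target_points: (M, 3) map cloud
P = np.zeros(3)                              # initial guess
for _ in range(30):                          # iterate until P converges
    P = newton_step(P, source_points, cells)
```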
In step 142, the robot fuses data with the extended Kalman filter algorithm in a loosely coupled fashion. The input pose data used for fusion comprise the point cloud registration result, the IMU data, the GPS data, and the odometry data. The fusion first classifies each input into Pose-type data and Twist-type data, placing them in two different structures as two groups of measurements. After this classification preprocessing, the data are processed by the extended Kalman filter. As an example, step 142 may include:

dividing the first position information and the IMU data, GPS data, and odometry data in the second data set into a first group of data representing Pose-type data and a second group of data representing Twist-type data;

predicting, from the first and second groups of data via the extended Kalman filter algorithm, the robot's current state estimate $\bar{x}_t$ and state covariance $\bar{\Sigma}_t$ (in the standard EKF form):

$$\bar{x}_t = f(x_{t-1}, u_t)$$

$$\bar{\Sigma}_t = F_t\,\Sigma_{t-1}\,F_t^{T} + Q_t$$

where f is the nonlinear function obtained from the motion and state equations, $F_t$ is the motion transition matrix, and $Q_t$ is the noise covariance; then updating the Kalman gain K and the posterior distribution and computing the final Kalman estimate $\hat{x}_t$:

$$K_t = \bar{\Sigma}_t G_t^{T}\left(G_t\,\bar{\Sigma}_t\,G_t^{T} + R_t\right)^{-1}$$

$$\hat{x}_t = \bar{x}_t + K_t\left(z_t - g(\bar{x}_t)\right)$$

$$\Sigma_t = \left(I - K_t G_t\right)\bar{\Sigma}_t$$

where $G_t$ is the observation matrix, $R_t$ is the measurement covariance matrix, $z_t$ is the measurement assembled from the two data groups, g is the nonlinear observation equation, and $\hat{x}_t$ is the fused global positioning data that is output.
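A minimal numpy sketch of these two stages; each measurement group (Pose-type or Twist-type) would call the update with its own observation model g, G and covariance R, and all names are illustrative:

```python
import numpy as np

def ekf_predict(x, S, f, F, Q):
    """Prediction: propagate the state and covariance through the motion model."""
    return f(x), F @ S @ F.T + Q

def ekf_update(x_pred, S_pred, z, g, G, R):
    """Correction: Kalman gain, then state and covariance update."""
    K = S_pred @ G.T @ np.linalg.inv(G @ S_pred @ G.T + R)
    x = x_pred + K @ (z - g(x_pred))
    S = (np.eye(len(x)) - K @ G) @ S_pred
    return x, S
```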
In step 143, precise positioning of the inspection robot is completed from the global positioning data combined with the local positioning data output by the robot chassis (i.e., the odometry data). Here the odometer output serves as local positioning data used to estimate the robot's initial pose after startup. In step 142, the odometry data corrects the point cloud registration during fusion, preventing inaccurate positioning when the registration drifts; in the earlier mapping stage, odometry data is not needed.
In this embodiment, the method may further include:

Step 150: convert the three-dimensional point cloud map into a two-dimensional grid map, the two-dimensional grid map being used for path planning of the robot.

Referring to Figure 2, as an example, after mapping an open outdoor grassland scene to obtain a three-dimensional point cloud map and converting it into a two-dimensional grid map, the resulting grid map may be as shown in Figure 2.
Further, step 150 of converting the three-dimensional point cloud map into the two-dimensional grid map may include (a sketch of these three steps follows the list):

Step 151: based on a preset height range, perform pass-through filtering on the laser point cloud data in the three-dimensional point cloud map to obtain filtered first laser point cloud data, thereby filtering out points outside the height range.

Step 152: compute the neighbor count of each laser point in the first laser point cloud data, and filter out the laser points whose neighbor count is less than a preset threshold to obtain second laser point cloud data, thereby removing isolated points; the neighbor count of a laser point is the total number of laser points within a specified radius of that point, excluding the point itself.

Step 153: convert the new three-dimensional point cloud map corresponding to the second laser point cloud data into a two-dimensional grid map, and erase noise in the grid map with a denoising tool (such as the ArcGIS software tools).
In this embodiment, the preset height range, the preset neighbor-count threshold, and the specified radius can all be set flexibly according to the actual situation.
Understandably, the robot reads the multi-sensor data, extracts keyframes from the lidar scans (the laser point cloud data), fuses the IMU pre-integration data for pose optimization through a tightly coupled IMU-and-lidar-odometry structure, combines GPS data for precise positioning and mapping, and finally converts the resulting point cloud map into a two-dimensional grid map through filtering. The resulting grid map supports higher-precision path planning and navigation when the robot later plans its routes.

In this embodiment, LIO-SAM is a three-dimensional mapping algorithm based on lidar, IMU, GPS, and other sensors that can build high-precision maps of large outdoor scenes by constructing factor-graph constraints. For large outdoor grassland with few feature points, using multiple sensors combined with an advanced mapping algorithm helps improve positioning accuracy, build high-precision maps, and effectively advance autonomous inspection technology for mobile robots.

In this embodiment, mapping of large outdoor scenes mainly uses 3D lidar fused with multiple other sensors. Compared with a camera, the multi-line lidar adopted by the robot offers long range, a wide scanning field, high accuracy, and immunity to lighting conditions.

Based on the above design, the robot combines multiple sensors such as radar, IMU, GPS, odometer, and MEMS, accomplishes simultaneous localization and mapping in large outdoor scenes with the LIO-SAM algorithm, and generates a clear two-dimensional grid map that facilitates autonomous inspection and navigation of the mobile robot. In addition, the extended Kalman filter algorithm fuses the multi-sensor data for positioning, which helps ensure positioning accuracy and stability.

It should be noted that when the robot performs mapping and positioning with the method provided by the present application, the method applies not only to open outdoor scenes but also to scenes such as high-rise buildings, forests, and mountains; fusing multiple kinds of data for mapping and positioning helps improve both the mapping accuracy and the positioning accuracy of the robot in complex environments.
In this embodiment, the processing module in the robot may be an integrated circuit chip with signal processing capability, such as a general-purpose processor: for example, a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component that can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application.

The storage module may be, but is not limited to, random-access memory, read-only memory, programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, and the like. In this embodiment, the storage module may be used to store the pcd file, the first data set, the second data set, and so on. The storage module may also store a program that the processing module executes upon receiving an execution instruction.

From the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented in hardware, or in software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application can be embodied as a software product stored on a non-volatile storage medium (a CD-ROM, USB flash drive, removable hard disk, etc.) that includes instructions causing a computer device (a personal computer, server, network device, etc.) to execute the methods described in the implementation scenarios of the present application.

In the embodiments provided in the present application, it should be understood that the disclosed robot and method can also be implemented in other ways. The robot and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show possible architectures, functions, and operations of systems, methods, and computer program products according to embodiments of the present application. Each block in a flowchart or block diagram may represent a module, program segment, or portion of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that each block, and combinations of blocks, in the block diagrams and/or flowcharts can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by combinations of dedicated hardware and computer instructions. In addition, the functional modules in the embodiments of the present application may be integrated to form an independent part, may each exist separately, or two or more of them may be integrated into an independent part.
The above descriptions are merely embodiments of the present application and are not intended to limit its scope of protection. Various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its scope of protection.
Claims (9)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310617859.6A | 2023-05-29 | 2023-05-29 | A mapping and positioning method for automatic inspection robots in open outdoor scenes |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116772834A | 2023-09-19 |
Family ID: 87995531

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310617859.6A | A mapping and positioning method for automatic inspection robots in open outdoor scenes | 2023-05-29 | 2023-05-29 |
Cited By (3)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN119763102A (zh) | 2024-11-27 | 2025-04-04 | Dragon fruit growth monitoring method, device, equipment and storage medium |
| CN119879917A (zh) | 2025-03-27 | 2025-04-25 | Indoor and outdoor positioning and mapping method for multi-sensor fusion |
| CN119902522A (zh) | 2024-12-31 | 2025-04-29 | Robot outdoor protection method, robot, storage medium and program product |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |