CN115127547B - A tunnel inspection vehicle positioning method based on strapdown inertial navigation system and image positioning
- Publication number
- CN115127547B (application CN202210736964.7A)
- Authority
- CN
- China
- Prior art keywords
- inspection vehicle
- tunnel
- tunnel inspection
- coordinate system
- earth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1656—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a positioning method for a tunnel inspection vehicle based on a strapdown inertial navigation system and image positioning. First, the latitude and longitude of the tunnel inspection vehicle on the earth's surface are obtained. The monocular area-array cameras at the front and rear ends of the vehicle successively photograph the intermittent dashed line between the lanes in the tunnel, the two shots covering the same dashed segment; taking the marking-line endpoint and the bottom center point of the camera as feature points, the yaw angle of the vehicle is solved. The yaw angle is fused with the position output of the encoder and combined with the vehicle's position at the initial moment of entering the tunnel to obtain the vehicle's motion trajectory in the tunnel in the earth coordinate system. The latitude is converted into a value in the earth coordinate system and recorded as M; M is compared with X, and the absolute value of the difference is recorded as O. The longitude is converted into a value in the earth coordinate system and recorded as N; N is compared with Y, and the absolute value of the difference is recorded as P. When O and P are both within the set error value, M and N are used to locate the tunnel inspection vehicle; otherwise M and N are fused with X and Y by a Kalman filter, and the fused values are used to locate the tunnel inspection vehicle.
Description
Technical Field

The invention belongs to the field of tunnel inspection and positioning, and in particular relates to a tunnel inspection vehicle positioning method based on a strapdown inertial navigation system and image positioning.

Background Art

With the continuous development of the transportation industry, the number and length of tunnels keep increasing. Regular inspection of tunnels is an important guarantee for the normal operation of highways and for personal safety. Traditional tunnel inspection is usually performed manually; it takes a long time, often requires closing the tunnel, and suffers from low efficiency, high labor intensity, and increased traffic pressure.

In the prior art, tunnel inspection vehicles equipped with monocular area-array cameras, lasers, visible-light sources, and other devices are used to inspect the tunnel lining. However, in longer tunnels the satellite signal is weak or almost absent, so relying on satellites to determine the position of the tunnel inspection vehicle is difficult to realize in practical tunnel inspection.
Summary of the Invention

In view of the problems existing in the prior art, the present invention provides a tunnel inspection vehicle positioning method based on a strapdown inertial navigation system and image positioning, which can accurately reflect the position and heading of the tunnel inspection vehicle and enables rapid inspection inside the tunnel.

The present invention is realized through the following technical solution:

A tunnel inspection vehicle positioning method based on a strapdown inertial navigation system and image positioning comprises the following steps:

S1: The position update differential equation of the tunnel inspection vehicle is first obtained with the accelerometer, and the equation is then solved to obtain the latitude and longitude of the tunnel inspection vehicle on the earth's surface.

The front monocular area-array camera of the tunnel inspection vehicle first photographs the intermittent dashed line between the lanes in the tunnel, and the rear monocular area-array camera then photographs the same dashed segment. Taking the marking-line endpoint and the bottom center point of each camera as feature points, the yaw angle of the vehicle is solved; the obtained yaw angle is fused with the position output of the encoder and combined with the vehicle's position at the initial moment of entering the tunnel to obtain the motion trajectory of the vehicle in the tunnel in the earth coordinate system. The trajectory is a set of points (X, Y), where X corresponds to the vehicle's coordinate in the x direction of the earth coordinate system and Y to its coordinate in the y direction.

S2: The latitude of the vehicle on the earth's surface is converted into a value in the earth coordinate system and recorded as M; M is compared with X in the trajectory of S1, and the absolute value of the difference is recorded as O. The longitude is converted into a value in the earth coordinate system and recorded as N; N is compared with Y in the trajectory of S1, and the absolute value of the difference is recorded as P. When O and P are both within the set error value, M and N are used to locate the vehicle; when at least one of O and P exceeds the set error value, M and N together with X and Y of the trajectory of S1 are fused in a Kalman filter, and the fused values are used to locate the vehicle.
Preferably, the position update differential equation in S1 is as follows:

where L is the latitude of the tunnel inspection vehicle on the earth's surface, v_y is the velocity component of the vehicle in the y1 direction of coordinate system b, R_M is the radius of the earth's meridian circle at the point where the vehicle is located, and h is the altitude; λ is the longitude of the vehicle on the earth's surface, v_x is the velocity component of the vehicle in the x1 direction of coordinate system b, and R_N is the radius of curvature of the prime vertical circle at the point where the vehicle is located.
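The equation itself appears only as an image in the source; based on the variable definitions above, the standard strapdown position-update form consistent with them would be (a reconstruction, not the patent's verbatim notation):

$$\dot{L}=\frac{v_y}{R_M+h},\qquad \dot{\lambda}=\frac{v_x}{(R_N+h)\cos L}$$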
In coordinate system b, the origin is the center of mass of the tunnel inspection vehicle, the positive x1 axis points to the right along the vehicle axis, and the positive y1 axis points forward perpendicular to the x1 axis.

Preferably, in S1 the front and rear monocular area-array cameras extract the feature point sets of the two identical images, recorded as N1 and N2 respectively. For any feature point a in N1, the feature points b and c in N2 with the smallest and second-smallest Euclidean distances to a are found; the Euclidean distances from a to b and from a to c are d1 and d2 respectively. If the ratio of d1 to d2 is less than a threshold, a and b are judged to be a matching point pair, recorded as (a, b). The set of matching point pairs of the two images is obtained by the same process, mismatches in the set are then eliminated, and the marking-line endpoints are selected as feature points.

Further, matching point pairs in which any feature point intersects more than 3 other matched feature points are found in the matching point pair set, judged to be wrong matches, and eliminated. The Euclidean distance ratios r_i of the remaining matching point pairs are then computed, and the pairs are sorted by increasing r_i to form a new matching point pair set, which is divided evenly into four groups. Four matching point pairs are selected arbitrarily from the first group; if none of the four pairs are inliers, the first group is discarded and its remaining pairs are not judged; if two of the pairs in the first group are inliers, the remaining pairs of the first group are judged for inliers. The same operation is performed on the remaining groups, the group with the most inliers is selected as the optimal model, and the elimination of mismatches from the matching point pair set is completed.
Preferably, in S1 the marking-line endpoint extracted by the front monocular area-array camera and the bottom center point of the front camera are transformed into the world coordinate system and recorded as (X1, Y1) and (X2, Y2) respectively; the marking-line endpoint extracted by the rear monocular area-array camera and the bottom center point of the rear camera are transformed into the world coordinate system and recorded as (X1′, Y1′) and (X2′, Y2′) respectively. The distance L1 from the front camera to the marking-line endpoint and the distance L2 from the rear camera to the marking-line endpoint are obtained by the following formulas:
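The formulas are images in the source; with the world coordinates defined above, the distances are presumably the planar Euclidean distances:

$$L_1=\sqrt{(X_1-X_2)^2+(Y_1-Y_2)^2},\qquad L_2=\sqrt{(X_1'-X_2')^2+(Y_1'-Y_2')^2}$$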
The yaw angle of the front monocular area-array camera and the yaw angle of the rear monocular area-array camera are expressed in turn by the following formulas:

where x1 and x2 are the vertical distances from the front and rear monocular area-array cameras to the marking line, respectively; the yaw angle of the tunnel inspection vehicle is expressed by the following formula:
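These formulas are likewise images in the source. One geometric reading consistent with FIG. 4 and the variables above (an assumption, since the exact trigonometric form is not reproduced in the text) is

$$\varphi_1=\arcsin\frac{x_1}{L_1},\qquad \varphi_2=\arcsin\frac{x_2}{L_2},\qquad \varphi=\varphi_1-\varphi_2,$$

where the vehicle yaw angle is the difference of the two camera yaw angles, as stated in the detailed description below.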
Preferably, in S1 the obtained yaw angle and the position output of the encoder are fused by a recursive algorithm, with the following formula:

where τ ∈ (t_k, t_{k+1}); v(τ) is the speed of the tunnel inspection vehicle in the driving direction output by the encoder at time τ; v_x and v_y are the velocity components of the vehicle along the x and y directions of the earth coordinate system; x(t_{k+1}) and y(t_{k+1}) are the position of the vehicle at time k+1; the yaw angle of the vehicle at time τ also enters the formula. The set of coordinate points of the vehicle at the different times is the motion trajectory of the vehicle in the tunnel, expressed as: s = {(x(t_0), y(t_0)), (x(t_1), y(t_1)), ..., (x(t_k), y(t_k)), (x(t_{k+1}), y(t_{k+1}))}.
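The recursive formula is an image in the source; given the variables above, a dead-reckoning form consistent with them would be (a reconstruction, with the sine/cosine assignment assumed):

$$x(t_{k+1})=x(t_k)+\int_{t_k}^{t_{k+1}} v(\tau)\sin\varphi(\tau)\,d\tau,\qquad y(t_{k+1})=y(t_k)+\int_{t_k}^{t_{k+1}} v(\tau)\cos\varphi(\tau)\,d\tau$$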
Further, when k = 0, x(t_0) and y(t_0) are the position of the tunnel inspection vehicle at the initial moment of entering the tunnel, and this initial position is provided by GPS.

Preferably, in S2 the latitude and longitude of the tunnel inspection vehicle on the earth's surface are converted into values in the earth coordinate system with the following formula:

where the earth coordinate system is denoted coordinate system e;

where θ, γ, and ψ are the pitch angle, roll angle, and yaw angle of the tunnel inspection vehicle, respectively, and λ and L are the longitude and latitude of the tunnel inspection vehicle, respectively.
Further, the pitch angle, roll angle, and yaw angle of the tunnel inspection vehicle are obtained as follows:

Coordinate system n is first defined: with the center of mass of the tunnel inspection vehicle as the origin, east is the positive x2 axis, north is the positive y2 axis, and the vertically upward direction is the positive z2 axis. A vector r^b = xi + yj + zk in coordinate system b is then defined; converted into coordinate system n it is recorded as r^n, and the quaternion is defined. Using the quaternion operation rules, r^n can be expressed as follows:

From this, the following is further obtained:

The attitude matrix from coordinate system b to coordinate system n is expressed as follows:

q0, q1, q2, and q3 are all real numbers, i, j, and k are direction vectors, and the conjugate of the quaternion appears in the expression above.
The relationship between the quaternion and the angular velocity is expressed in matrix form as follows:

where the angular velocity of the tunnel inspection vehicle in coordinate system b measured by the MEMS gyroscope has as its components the angular velocities about the x1, y1, and z1 axes, the positive z1 axis pointing upward perpendicular to the x1 axis; the remaining quantities are, respectively, the angular momentum of the tunnel inspection vehicle output by the MEMS gyroscope, the rotational angular velocity of the earth, the angular rate of the geographic coordinate system relative to the earth, and the transpose of the attitude matrix from coordinate system b to coordinate system n.

This relationship is computed with the quaternion Picard algorithm, giving the following result:

where: a further Taylor series expansion gives the attitude matrix shown below:

The attitude matrix is then written with elements T carrying different double subscripts, recorded as:

Finally, the following are obtained:
Preferably, when S2 fuses M and N with X and Y of the trajectory of S1 in the Kalman filter, the following process is used:

The noise variance of M and N and the noise variance of X and Y in the trajectory of S1 are solved first. The relationship between the current position Pos(k) of the tunnel inspection vehicle, the position Pos1(k) corresponding to M and N, and the position Pos2(k) corresponding to X and Y in the trajectory of S1 is as follows:

The fused values are obtained from the current position Pos(k) of the tunnel inspection vehicle.
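The relation itself is an image in the source. A minimal variance-weighted form consistent with the quantities named above (an assumption about the exact weighting) is

$$\mathrm{Pos}(k)=\frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,\mathrm{Pos}_1(k)+\frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,\mathrm{Pos}_2(k),$$

where σ1² denotes the noise variance of the inertial solution (M, N) and σ2² that of the image-derived trajectory (X, Y).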
Compared with the prior art, the present invention has the following beneficial technical effects:

In the tunnel, GPS receives almost no signal, so the tunnel inspection vehicle cannot be positioned by GPS. The accelerometer is therefore used to obtain the position update differential equation of the vehicle, which is solved to obtain the latitude and longitude of the vehicle on the earth's surface and so track its position. The front and rear monocular area-array cameras photograph the same intermittent dashed segment between lanes one after the other; taking the marking-line endpoint and the camera bottom center point as feature points, the yaw angle of the vehicle is solved, the yaw angle is fused with the position output of the encoder, and, combined with the vehicle's position at the initial moment of entering the tunnel, the vehicle's motion trajectory in the tunnel in the earth coordinate system is obtained. The trajectory is a set of points (X, Y), where X corresponds to the vehicle's coordinate in the x direction of the earth coordinate system and Y to its coordinate in the y direction, which realizes real-time positioning. In theory the two approaches should yield exactly coincident trajectories, but the former accumulates errors rather seriously, while the latter loses accuracy because the monocular area-array camera is easily affected by illumination and by the vehicle's own vibration. The latitude and longitude of the vehicle on the earth's surface are therefore converted into values in the earth coordinate system and compared with X and Y of the camera-derived trajectory. When both differences are within the set error value, the positioning error is small and the converted latitude and longitude are used to locate the vehicle; otherwise the converted latitude and longitude and the X and Y of the camera-derived trajectory are fused by a Kalman filter, and the fused values are used to locate the vehicle, finally achieving high-precision integrated positioning of the tunnel inspection vehicle during defect inspection in the tunnel.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the installation positions of the monocular area-array cameras and LED lamps on the tunnel inspection vehicle according to the present invention.

FIG. 2 is a schematic diagram of the installation position of the strapdown inertial navigation system on the chassis of the tunnel inspection vehicle according to the present invention.

FIG. 3 is a schematic diagram of the two monocular area-array cameras of the present invention photographing the same road section.

FIG. 4 is a schematic diagram of the yaw angle calculation of the tunnel inspection vehicle of the present invention.

FIG. 5 is the overall flow chart of the present invention.

FIG. 6 is the inertial navigation positioning flow chart of the present invention.

FIG. 7 is the image positioning flow chart of the present invention.

FIG. 8 is the fusion positioning flow chart of the present invention.

In the figures: tunnel inspection vehicle 1, front monocular area-array camera 2, front LED lamp 3, rear monocular area-array camera 4, rear LED lamp 5, and strapdown inertial navigation system 6.
Detailed Description

The present invention is further described in detail below in conjunction with specific embodiments, which are intended to explain the present invention rather than to limit it.

The present invention is a tunnel inspection vehicle positioning method based on a strapdown inertial navigation system (abbreviated SINS) and image positioning. As shown in FIG. 1, the equipment comprises a front monocular area-array camera 2, a front LED lamp 3, a rear monocular area-array camera 4, and a rear LED lamp 5 installed on top of the tunnel inspection vehicle 1, and, as shown in FIG. 2, a strapdown inertial navigation system 6 installed on the middle crossbeam of the chassis of the tunnel inspection vehicle 1. The tunnel inspection vehicle 1 is converted from a passenger-car chassis and keeps a steady speed during defect inspection in the tunnel. The front monocular area-array camera 2 and the rear monocular area-array camera 4 are used to acquire images of the road marking lines. The front LED lamp 3 and the rear LED lamp 5 provide high-brightness illumination and guarantee the lighting needed for image acquisition in the tunnel. The strapdown inertial navigation system 6 comprises a MEMS gyroscope, an accelerometer, and a microcomputer; the position update differential equation obtained from the accelerometer is solved to obtain the position of the tunnel inspection vehicle 1 and realize its positioning in the tunnel. The MEMS gyroscope updates the attitude of the vehicle in real time, giving the corresponding attitude angles, namely the pitch, roll, and yaw angles mentioned below. As shown in FIG. 5, before the tunnel inspection vehicle 1 enters the tunnel it is positioned by the on-board GPS; during image acquisition in the tunnel the vehicle keeps a steady speed; after entering the tunnel, the monocular area-array cameras acquire the marking lines in the tunnel to realize positioning. By comparing, in the trajectory diagram, the image-based positioning trajectory obtained from the tunnel marking lines with the positioning trajectory based on the strapdown inertial navigation system 6, it is judged whether the two need to be fused by the Kalman filter, and the positioning result of the tunnel inspection vehicle 1 is output. The whole positioning system uses the fusion of image and inertial navigation to achieve high-precision positioning.
The algorithm of the present invention is described in detail below. It comprises three parts: strapdown inertial navigation positioning, image positioning, and fusion positioning.

First, as shown in FIG. 6, the attitude and position solution is used to realize strapdown inertial navigation positioning.

Two coordinate systems are first established: coordinate system b and coordinate system n. With the center of mass of the tunnel inspection vehicle 1 as the origin o, the positive x1 axis points to the right along the axis of the vehicle, the positive y1 axis points forward perpendicular to the x1 axis, and the positive z1 axis points upward perpendicular to the x1 axis; the coordinate system so defined is coordinate system b. With the center of mass of the tunnel inspection vehicle 1 as the origin o, the positive x2 axis points east, the positive y2 axis points north, and the positive z2 axis points vertically skyward; this coordinate system is defined as coordinate system n.

The attitude and position solution determines the change of the vehicle's current coordinate system b relative to coordinate system n. There exists an attitude matrix from coordinate system b to coordinate system n such that, when a vector in coordinate system b is left-multiplied by this matrix, the corresponding vector in coordinate system n is obtained. The goal of the attitude and position solution is to compute this matrix in real time from the data output by the MEMS gyroscope on the vehicle. The present invention uses the quaternion update algorithm for the attitude and position solution of the tunnel inspection vehicle 1; the quaternion update algorithm is the attitude algorithm commonly used in current strapdown inertial navigation systems, alongside the Euler-angle, direction-cosine, and equivalent-rotation-vector update algorithms.

When the attitude of the tunnel inspection vehicle 1 changes, the positions of the origins of coordinate systems b and n and their corresponding coordinates change accordingly. The MEMS gyroscope installed on the vehicle senses, at intervals, the angular momentum of the vehicle in coordinate system b, from which the angular velocity of the vehicle in coordinate system b is obtained; the accelerometer senses the linear velocity of the vehicle in coordinate system b. The sensed data of both are fed back in real time to the microcomputer, which uses the quaternion Picard algorithm to solve the attitude matrix from coordinate system b to coordinate system n and thereby obtain the three attitude angles of the vehicle: taking clockwise as positive in coordinate system b, the pitch angle θ about the x1 axis, the roll angle γ about the y1 axis, and the yaw angle ψ about the z1 axis. The MEMS gyroscope generally outputs angular increments over the sampling interval; to avoid differential amplification of noise, the quaternion should be determined directly from the angular increments. The quaternion Picard algorithm is the common algorithm for the quaternion update computed from angular increments.
A vector r^b = xi + yj + zk in coordinate system b is defined; converted into coordinate system n it is recorded as r^n. The quaternion is defined, and using the quaternion operation rules r^n can be expressed and expanded as follows:

The attitude matrix can then be obtained as:

q0, q1, q2, and q3 are all real numbers, i, j, and k are direction vectors, and the conjugate of the quaternion appears in the expression above.

To obtain the attitude matrix, the quaternion must be found first. The relationship between the quaternion and the angular velocity is expressed in matrix form as equation (1):

where the components are the angular velocities of the tunnel inspection vehicle 1 about the x1, y1, and z1 axes measured by the MEMS gyroscope; the remaining quantities are, respectively, the angular momentum of the vehicle output by the MEMS gyroscope, the rotational angular velocity of the earth, the angular rate of the geographic coordinate system relative to the earth, and the transpose of the attitude matrix. All of the above can be obtained and are then substituted into equation (1).
Equation (1) is solved with the quaternion Picard algorithm, giving the result:

where:

A Taylor series expansion of this result gives the quaternion, from which the real-time attitude matrix is determined.

Denoting the matrix in the second row of equation (2) by T with different double subscripts, equation (2) can be written as:

Since the 9 matrix values in equation (3) can all be obtained, the following are further obtained:

In equation (4), θ, γ, and ψ are the pitch, roll, and yaw angles of the tunnel inspection vehicle 1 mentioned earlier, called the three attitude angles of the vehicle; they are used later, in the fusion part, for the conversion between the earth coordinate system and coordinate system b.
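As an illustration of the quaternion update described above, the sketch below integrates gyroscope angle increments into a quaternion and extracts Euler angles. It is a minimal reconstruction under common SINS conventions; the patent's exact matrix-element indices and sign conventions in equations (2) to (4) are not reproduced in the text, so the extraction formulas here are assumptions.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def update_quaternion(q, delta_theta):
    """One Picard-style update from a gyro angle-increment vector (rad)."""
    dtheta = np.linalg.norm(delta_theta)
    if dtheta < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        half = 0.5 * dtheta
        dq = np.concatenate(([np.cos(half)],
                             np.sin(half) / dtheta * delta_theta))
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)

def dcm_from_quaternion(q):
    """Direction cosine matrix built from the unit quaternion."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2+q1**2-q2**2-q3**2, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0**2-q1**2+q2**2-q3**2, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0**2-q1**2-q2**2+q3**2],
    ])

def euler_from_dcm(T):
    """Pitch/roll/yaw from the DCM (convention assumed, not the patent's verbatim eq. (4))."""
    pitch = np.arcsin(np.clip(T[2, 1], -1.0, 1.0))
    roll = np.arctan2(-T[2, 0], T[2, 2])
    yaw = np.arctan2(T[0, 1], T[1, 1])
    return pitch, roll, yaw

# Example: integrate one 10 ms gyro sample of 0.5 deg/s about the z1 axis.
q = np.array([1.0, 0.0, 0.0, 0.0])
q = update_quaternion(q, np.radians([0.0, 0.0, 0.5]) * 0.01)
print(euler_from_dcm(dcm_from_quaternion(q)))
```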
The position update differential equation of the tunnel inspection vehicle 1 is:

In equation (5), v_x, v_y, and v_z are the velocities along the three axes of coordinate system b; the accelerations along the three axes measured by the accelerometer are a = [a_x, a_y, a_z]^T, and integrating the accelerations once gives v_x, v_y, and v_z. R_M and R_N are, respectively, the radius of the earth's meridian circle and the radius of curvature of the prime vertical circle at the point where the vehicle is located, L is the latitude of the vehicle on the earth's surface, λ is its longitude, and h is the altitude. Integrating equation (5) once gives the latitude, longitude, and altitude of the position of the tunnel inspection vehicle 1.
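A minimal numerical sketch of this dead-reckoning step is given below: accelerations are integrated once to velocities, and the (assumed) position-rate equations L̇ = v_y/(R_M + h), λ̇ = v_x/((R_N + h)·cos L), ḣ = v_z are integrated once more. The WGS-84 curvature-radius formulas and the simple Euler integration are implementation assumptions, not taken from the patent text.

```python
import numpy as np

A = 6378137.0          # WGS-84 semi-major axis (m) -- assumed ellipsoid
E2 = 6.6943799901e-3   # WGS-84 first eccentricity squared

def earth_radii(lat):
    """Meridian radius R_M and prime-vertical radius R_N at latitude lat (rad)."""
    s = np.sin(lat) ** 2
    r_n = A / np.sqrt(1.0 - E2 * s)
    r_m = r_n * (1.0 - E2) / (1.0 - E2 * s)
    return r_m, r_n

def position_update(lat, lon, h, v, acc, dt):
    """One Euler step of the position-update differential equation (eq. (5) style)."""
    v = v + acc * dt                                   # integrate accelerations to velocities
    vx, vy, vz = v
    r_m, r_n = earth_radii(lat)
    lat = lat + vy / (r_m + h) * dt                    # latitude rate
    lon = lon + vx / ((r_n + h) * np.cos(lat)) * dt    # longitude rate
    h = h + vz * dt                                    # altitude rate
    return lat, lon, h, v

# Example: 1 s of driving north at roughly constant speed.
lat, lon, h = np.radians(34.0), np.radians(108.0), 400.0
v = np.array([0.0, 10.0, 0.0])                         # m/s in coordinate system b
for _ in range(100):
    lat, lon, h, v = position_update(lat, lon, h, v, np.zeros(3), 0.01)
print(np.degrees(lat), np.degrees(lon), h)
```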
Second, image positioning, as shown in FIG. 7.

Image positioning is based on the tunnel road marking lines. Images of the marking lines are acquired by the front monocular area-array camera 2 and the rear monocular area-array camera 4. In FIG. 3, the two curved solid lines represent the road boundaries and the dashed-dotted line in the middle is the road marking line; FIG. 3 draws the lines as curved schematically, whereas in an actual tunnel the curvature is small. v indicates the forward direction of the tunnel inspection vehicle 1, and t_k denotes time t_k; the subsequent algorithm locates the vehicle by integrating over time intervals. The acquired images are processed, and the coordinates of the extracted feature points are solved with the pinhole imaging model. Specifically, the front camera 2 and the rear camera 4 project spatial points onto two-dimensional photographs; the projection can be represented by the pinhole imaging model of the corresponding camera, which also gives the relationship between image coordinates and spatial coordinates. The pinhole imaging model is the camera imaging model in general use today. The attitude angles of the front camera 2 and the rear camera 4 are then used to solve the yaw angle of the tunnel inspection vehicle 1; this yaw angle is different from the yaw angle obtained earlier, since the strapdown inertial navigation positioning and the image positioning each compute their own yaw angle. Finally, the yaw angle of the vehicle is fused with the position output of the encoder to solve the alignment positioning of the vehicle based on the tunnel road marking lines.
(1) Image preprocessing

The monocular area-array cameras shoot outdoors on the road, and the outdoor environment disturbs the imaging, so the images must be filtered. Median filtering is used here: an odd number of data values is taken from the sampling window, and the median of these values is used as the gray value of the current pixel:

The filtered image is then binarized with a threshold T: when an image pixel value is greater than the threshold T, it is taken as white; conversely, when it is less than the threshold T, it is taken as black.

The Roberts operator is used to extract the image edges; edges are detected by approximating the gradient magnitude with the difference between diagonally adjacent pixels, with the following formula:

where f(x, y) is the input image, g(x, y) is the output image, and x and y are the pixel coordinates of the image.
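A compact sketch of this preprocessing chain (median filter, fixed-threshold binarization, Roberts cross edge magnitude) is shown below. The window size, threshold value, and exact Roberts formulation are assumptions, since formulas (6) and (7) are not reproduced in the text.

```python
import numpy as np

def median_filter(img, k=3):
    """Median filter with a k x k window (k odd)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

def binarize(img, t=128):
    """Pixels above threshold t become white (255), others black (0)."""
    return np.where(img > t, 255, 0).astype(np.uint8)

def roberts_edges(img):
    """Roberts cross: gradient magnitude approximated by diagonal differences."""
    f = img.astype(float)
    gx = f[:-1, :-1] - f[1:, 1:]
    gy = f[1:, :-1] - f[:-1, 1:]
    return np.abs(gx) + np.abs(gy)

# Example on a synthetic 64 x 64 frame with a bright vertical stripe.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[:, 30:34] = 200
edges = roberts_edges(binarize(median_filter(frame)))
print(edges.max())
```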
(2) Extraction and matching of image feature points

The lines between the lanes in the tunnel are white intermittent dashed lines, so feature points can be selected based on these dashed lines. The image positioning of the present invention relies on the front monocular area-array camera 2 and the rear monocular area-array camera 4 on top of the tunnel inspection vehicle 1 photographing the same marking-line image segment and then deducing the position of the vehicle, so the same image segments must be stereo matched.

First, the Hessian matrix is computed for a point in the image at scale σ, with the expression:

where Lxx(x, σ), Lxy(x, σ), Lyx(x, σ), and Lyy(x, σ) denote the convolutions of the image point with the corresponding Gaussian second-order derivatives.
The Gaussian second-order derivatives are approximated by a box-filter structure, and the Hessian values of the candidate feature points and their surrounding points are computed; combined with the integral image this can be done quickly. The discriminant is:

Det(Hessian(x, σ)) = Dxx × Dyy − (0.9 Dxy)²    (9)

where Det(Hessian(x, σ)) denotes the determinant of the Hessian matrix, and Dxx, Dyy, and Dxy are Lxx(x, σ), Lyy(x, σ), and Lxy(x, σ) obtained with the box filter. If the determinant of the Hessian matrix is positive and the pixel has the largest response among the nearby pixels, it is taken as a feature point. The weighting coefficient 0.9 balances the approximation error of the box filter.
The feature point sets of the two images extracted by the front monocular area-array camera 2 and the rear monocular area-array camera 4 are recorded as N1 and N2. For a feature point a in N1, the feature points b and c in N2 with the smallest and second-smallest Euclidean distances to a are found; the Euclidean distances from a to b and from a to c are d1 and d2 respectively. If their ratio is less than a threshold, a and b are judged to be a matching point pair, recorded as (a, b), with the judgment formula:

The matching point pair set of the two images is obtained by the same process.
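A minimal sketch of the nearest/second-nearest ratio test over descriptor sets is given below; the descriptor dimensionality and the ratio threshold (0.7) are assumptions, since formula (10) only fixes the form d1/d2 < threshold.

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.7):
    """Match rows of desc1 to rows of desc2 with the d1/d2 ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)      # Euclidean distances to all candidates
        order = np.argsort(dists)
        b, c = order[0], order[1]                      # nearest and second nearest
        if dists[b] / (dists[c] + 1e-12) < ratio:      # d1 / d2 < threshold
            matches.append((i, b))
    return matches

# Example with random 64-dimensional descriptors; the first 10 rows of n2 mirror n1.
rng = np.random.default_rng(0)
n1 = rng.normal(size=(20, 64))
n2 = np.vstack([n1[:10] + 0.01 * rng.normal(size=(10, 64)),
                rng.normal(size=(15, 64))])
print(ratio_match(n1, n2)[:5])
```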
(3) Improved RANSAC algorithm to eliminate mismatches

First, using the existing improved RANSAC algorithm, the matching point pairs in which a feature point intersects more than 3 other matched feature points are found in the matching point pair set; these pairs are judged to be wrong matches and are eliminated. The Euclidean distance ratios r_i of the remaining matching point pairs are computed, and the pairs are sorted by increasing r_i to form a new matching point pair set. This set is divided evenly into four groups, which guarantees no fewer than five matching point pairs in the first group. Four matching point pairs are selected arbitrarily from the first group for the judgment of one candidate model. If none of the four selected pairs are inliers, the probability that this group yields the optimal model is very small, so the group is discarded and its other pairs are not judged; if two of the pairs in the first group are found to be inliers, the remaining pairs of this group are judged for inliers. The same operation is performed on the remaining groups, and the group with the most inliers is selected as the optimal model. A sketch of this grouped sampling strategy is given below.
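The sketch below illustrates the grouped sampling described above on a deliberately simple model (a pure 2-D translation between matched points); the inlier threshold, the fitted model, and the early-skip rule on the sampled pairs are implementation assumptions made only to make the grouping logic concrete.

```python
import numpy as np

def fit_translation(src, dst):
    """Toy model: a pure 2-D translation between matched point sets."""
    return np.mean(dst - src, axis=0)

def count_inliers(src, dst, t, tol=2.0):
    return int(np.sum(np.linalg.norm(dst - (src + t), axis=1) < tol))

def grouped_ransac(src, dst, ratios, tol=2.0):
    """Sort pairs by ratio, split into 4 groups, and keep the group with most inliers."""
    order = np.argsort(ratios)
    groups = np.array_split(order, 4)
    best_t, best_inliers = None, -1
    rng = np.random.default_rng(0)
    for g in groups:
        if len(g) < 4:
            continue
        sample = rng.choice(g, size=4, replace=False)
        t = fit_translation(src[sample], dst[sample])
        # If fewer than 2 of the sampled pairs support the model, skip the group early.
        if count_inliers(src[sample], dst[sample], t, tol) < 2:
            continue
        inliers = count_inliers(src[g], dst[g], t, tol)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Example: 40 matches, mostly a (5, -3) shift, with a few gross mismatches injected.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(40, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.5, size=(40, 2))
dst[::10] += 30.0
ratios = rng.uniform(0.3, 0.7, size=40)
print(grouped_ransac(src, dst, ratios))
```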
(4) Image positioning

The monocular area-array cameras are calibrated with Zhang Zhengyou's calibration method to obtain the internal and external parameters of the cameras, and thus the conversion relationship between the pixel coordinate system and the world coordinate system of a feature point:

where fx = f/dx and fy = f/dy are the scale factors of the horizontal and vertical image axes, cx and cy are the coordinates of the origin of the image coordinate system in the pixel coordinate system, Zc is a constant, and u and v are the horizontal and vertical coordinates of the feature point in the pixel coordinate system. R and t are the external parameters, Xw, Yw, and Zw are the three-dimensional coordinates in the world coordinate system, and M is the projection matrix. In M1, fx, fy, cx, and cy depend only on the internal structure of the monocular area-array camera and are called its internal parameters; M2 is determined only by the orientation of the camera with respect to the world coordinate system and is called its external parameters. M1 and M2 are determined by the internal and external parameters obtained from camera calibration, M is the projection matrix, and (Xw, Yw, Zw) are the coordinates of the feature point in the world coordinate system.
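Formula (11) is an image in the source; the standard pinhole projection consistent with the parameter definitions above is

$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_x&0&c_x&0\\0&f_y&c_y&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&t\\\mathbf{0}^T&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}=M_1M_2\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}=M\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}.$$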
The marking-line endpoint and the bottom center point of the monocular camera are selected as feature points. After coordinate transformation, the world coordinates of the marking-line endpoint and of the camera bottom center point extracted by the front monocular area-array camera 2 are (X1, Y1) and (X2, Y2) respectively, and those extracted by the rear monocular area-array camera 4 are (X1′, Y1′) and (X2′, Y2′) respectively. The distance from the tunnel inspection vehicle 1 to the marking line photographed by the front camera 2 can then be obtained and recorded as L1; similarly, the distance photographed by the rear camera 4 is recorded as L2.

For two-dimensional positioning of the tunnel inspection vehicle 1, the front monocular area-array camera 2 installed at the front of the roof photographs the marking line and the yaw angle of the front camera 2 is solved. The vehicle drives forward, the rear monocular area-array camera 4 installed at the rear of the roof photographs the same road section again, and the yaw angle of the rear camera 4 is solved, as shown in FIG. 4.

where x1 and x2 are the vertical distances from the front monocular area-array camera 2 and the rear monocular area-array camera 4 to the marking line, respectively. If the tunnel inspection vehicle 1 does not deflect on this section, the yaw angles of its front and rear ends are equal; if it does deflect, the difference between the yaw angles of the front and rear ends is the yaw angle of the vehicle body on this section.

where the two quantities are the yaw angles of the front and rear ends of the tunnel inspection vehicle 1, respectively.
The encoder installed on the wheel output shaft of the tunnel inspection vehicle 1 gives the position quantity of the vehicle. Fusing the encoder position with the yaw angle gives the position of the vehicle at the corresponding angle and realizes the alignment positioning of the vehicle based on the tunnel marking lines. The recursive algorithm is as follows:

where τ ∈ (t_k, t_{k+1}); v(τ) is the speed of the vehicle in the driving direction output by the encoder at time τ; v_x and v_y are the components of the vehicle's speed along the x2 and y2 directions of the earth coordinate system; x(t_{k+1}) and y(t_{k+1}) are the position of the vehicle at time k+1; the yaw angle deduced from the tunnel marking lines at time τ enters the formula. When k = 0, x(t_0) and y(t_0) are the initial position of the vehicle at the initial moment of entering the tunnel, provided by GPS.

The motion trajectory of the tunnel inspection vehicle 1 in the tunnel is the set of its coordinate points at the different times, namely:

s = {(x(t_0), y(t_0)), (x(t_1), y(t_1)), ..., (x(t_k), y(t_k)), (x(t_{k+1}), y(t_{k+1}))}    (16)
In addition, wear or pavement design may leave some sections of the tunnel without marking lines, and over a short unmarked distance the lateral position of the tunnel inspection vehicle cannot be determined from images. The white broken line on an expressway is called the lane dividing line; its standard segment length is 6 meters, with 9-meter gaps. When two consecutive segments of the center broken line are missing, i.e. over a short distance of 15 meters, the lateral position defaults to that of the previous moment; beyond this distance the technical solution of the present invention is not applicable.

Taking the moment t_0 just before the tunnel inspection vehicle 1 enters the tunnel as an example, the coordinate position of the vehicle at this moment is (x(t_0), y(t_0)). If the vehicle does not deviate on this section, substituting into equation (15) gives:

If the tunnel inspection vehicle 1 deviates between times t_1 and t_2, then:
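A discrete version of recursion (15) is sketched below: encoder speed samples and image-derived yaw angles are accumulated from the GPS-provided entry position. The simple Euler integration and the sine/cosine assignment follow the reconstruction given earlier and are assumptions.

```python
import numpy as np

def dead_reckoning(x0, y0, speeds, yaws, dt):
    """Accumulate encoder speed and image-derived yaw into an (x, y) trajectory."""
    xs, ys = [x0], [y0]
    for v, phi in zip(speeds, yaws):
        xs.append(xs[-1] + v * np.sin(phi) * dt)   # component along the earth-frame x direction
        ys.append(ys[-1] + v * np.cos(phi) * dt)   # component along the earth-frame y direction
    return np.column_stack([xs, ys])

# Example: entry position from GPS, 2.5 s straight driving then a slight deflection.
speeds = np.full(500, 8.0)                         # m/s from the encoder
yaws = np.concatenate([np.zeros(250), np.full(250, np.radians(2.0))])
trajectory = dead_reckoning(0.0, 0.0, speeds, yaws, dt=0.01)
print(trajectory[-1])
```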
Third, fusion positioning

Image positioning and strapdown inertial navigation positioning are two independent systems. Before fusion, the coordinate systems must first be unified, i.e. coordinate system b must be converted to the earth coordinate system, with the following conversion relationship:

where θ, γ, and ψ are the pitch, roll, and yaw angles of the tunnel inspection vehicle 1, and (λ, L) are its longitude and latitude, respectively.

The image positioning trajectory based on the tunnel marking lines and the positioning trajectory based on the strapdown inertial navigation system (i.e. the converted latitude and longitude values) are displayed in one trajectory diagram, from which the path travelled by the tunnel inspection vehicle 1 at each moment and the real-time coordinates of the corresponding points can be read. In theory, the two trajectories should coincide completely, but the strapdown inertial navigation accumulates error rather seriously, while image positioning alone, although of higher accuracy, loses precision because the monocular area-array camera is easily affected by illumination and by the vibration of the vehicle itself. On this basis, the fusion of image and strapdown inertial navigation is used to achieve integrated positioning.

As shown in FIG. 8, the real-time position data of the tunnel inspection vehicle 1 in the trajectory diagram are compared, and the set error value x is used to decide whether fusion positioning is needed. When the absolute value of the difference of either coordinate (x, y) of the vehicle between the two trajectories exceeds the set error value x = 0.2 m, the strapdown inertial navigation positioning error is judged to be large at this moment, and the image positioning value is used as compensation for fusion positioning: the two are fused with the Kalman filter by first solving the noise variance of the inertial navigation positioning (i.e. of the converted latitude and longitude values), then solving the noise variance of x and y of the image positioning trajectory, and then solving the relationship between the current position Pos(k) of the vehicle, the position Pos1(k) corresponding to the converted latitude and longitude, and the position Pos2(k) corresponding to x and y of the image positioning trajectory, from which the fused position parameters are obtained to locate the tunnel inspection vehicle. If the difference of neither coordinate of the vehicle between the two trajectories exceeds the error value of 0.2 m, the inertial navigation positioning is judged to be accurate at this moment, and the inertial navigation positioning data are output to locate the tunnel inspection vehicle.
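The sketch below illustrates this decide-and-fuse step: if either coordinate difference exceeds 0.2 m, the two position estimates are combined; a static variance-weighted (Kalman-gain-like) combination and the two variance values are assumptions, since the filter equations are not reproduced in the text.

```python
import numpy as np

ERROR_GATE = 0.2          # set error value (m), from the description
VAR_INS = 0.25            # assumed noise variance of the inertial solution
VAR_IMG = 0.04            # assumed noise variance of the image solution

def fuse_position(pos_ins, pos_img):
    """Keep the inertial fix if it agrees with the image fix, otherwise fuse the two."""
    pos_ins = np.asarray(pos_ins, dtype=float)
    pos_img = np.asarray(pos_img, dtype=float)
    if np.all(np.abs(pos_ins - pos_img) <= ERROR_GATE):
        return pos_ins                        # inertial positioning judged accurate
    k = VAR_INS / (VAR_INS + VAR_IMG)         # weight given to the image measurement
    return pos_ins + k * (pos_img - pos_ins)  # variance-weighted correction

print(fuse_position([120.10, 45.02], [120.15, 45.05]))   # within the gate: inertial kept
print(fuse_position([120.10, 45.02], [120.60, 45.40]))   # beyond the gate: fused value
```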
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210736964.7A CN115127547B (en) | 2022-06-27 | 2022-06-27 | A tunnel inspection vehicle positioning method based on strapdown inertial navigation system and image positioning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115127547A CN115127547A (en) | 2022-09-30 |
| CN115127547B true CN115127547B (en) | 2024-04-19 |
Family
ID=83378999
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210736964.7A Active CN115127547B (en) | 2022-06-27 | 2022-06-27 | A tunnel inspection vehicle positioning method based on strapdown inertial navigation system and image positioning |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115127547B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115760916A (en) * | 2022-11-11 | 2023-03-07 | 北京信路威科技股份有限公司 | Target object positioning method and device, computer equipment and storage medium |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7162367B2 (en) * | 1999-11-29 | 2007-01-09 | American Gnc Corporation | Self-contained/interruption-free positioning method and system thereof |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106197407A (en) * | 2016-06-23 | 2016-12-07 | 长沙学院 | A kind of subway localization method based on inertial sensor and system |
| CN107806874A (en) * | 2017-10-23 | 2018-03-16 | 西北工业大学 | A kind of inertial navigation polar region Initial Alignment Method of vision auxiliary |
| WO2020087846A1 (en) * | 2018-10-31 | 2020-05-07 | 东南大学 | Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision |
| CN109974697A (en) * | 2019-03-21 | 2019-07-05 | 中国船舶重工集团公司第七0七研究所 | A kind of high-precision mapping method based on inertia system |
| CN111065043A (en) * | 2019-10-25 | 2020-04-24 | 重庆邮电大学 | System and method for fusion positioning of vehicles in tunnel based on vehicle-road communication |
| CN114061611A (en) * | 2021-11-17 | 2022-02-18 | 腾讯科技(深圳)有限公司 | Target object positioning method, apparatus, storage medium and computer program product |
Non-Patent Citations (3)
| Title |
|---|
| A practical strapdown inertial navigation algorithm for aircraft; 陈柯勋, 邱伟; Journal of Taiyuan University of Technology; 2020 (05); 142-148 * |
| Combined positioning method for roadheaders based on total station and strapdown inertial navigation; 张旭辉, 刘博兴, 张超, 杨文娟, 赵建勋; Industry and Mine Automation; 2020 (09); 4-10 * |
| Design of a strapdown inertial navigation system simulator for unmanned vehicles; 李超, 王吉华, 郭栋, 曲金玉; Computer Simulation; 2020 (03); 123-126 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115127547A (en) | 2022-09-30 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |