
CN110514212A - A smart car map landmark positioning method integrating monocular vision and differential GNSS - Google Patents

A smart car map landmark positioning method integrating monocular vision and differential GNSS

Info

Publication number
CN110514212A
Authority
CN
China
Prior art keywords
landmark
image
monocular vision
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910684352.6A
Other languages
Chinese (zh)
Inventor
程洪
詹惠琴
王杨
李航
田环根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910684352.6A priority Critical patent/CN110514212A/en
Publication of CN110514212A publication Critical patent/CN110514212A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01 Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/03 Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers
    • G01S19/10 Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers providing dedicated supplementary positioning signals
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a smart car map landmark positioning method that fuses monocular vision and differential GNSS, comprising five steps: sensor synchronization, data preprocessing, landmark detection and landmark feature point extraction, landmark tracking and positioning, and landmark descriptor calculation. Sensor synchronization yields the spatial coordinates of the vehicle and camera corresponding to each image; data preprocessing detects landmarks in the image and extracts landmark feature points; the optical flow method tracks the image feature points; multiple feature points extracted from the same landmark are tracked and their estimates averaged to compute the landmark position. Once a landmark has been successfully located, features are extracted from the last detection of that landmark to describe it uniquely, and the landmark position, type, and feature descriptor are stored in a database.

Description

A smart car map landmark positioning method integrating monocular vision and differential GNSS

Technical Field

The invention relates to the field of map positioning for autonomous driving, and in particular to a smart car map landmark positioning method that integrates monocular vision and differential GNSS.

Background Art

With the rapid development of location-based services and the growing number of large structures, demand for location services keeps increasing, and fast, accurate positioning has become an urgent need.

At present, smart car mapping mainly follows two approaches: lidar mapping and visual mapping. Lidar mapping obtains landmark spatial positions from point cloud coordinates, but it is computationally heavy, places demands on the hardware, and has poor real-time performance. Vision-based landmark positioning for smart car maps uses a binocular camera and obtains landmark spatial positions directly through stereo matching, but this approach has low robustness and large errors.

Summary of the Invention

In view of the above problems, the object of the present invention is to propose a smart car map landmark positioning method that fuses monocular vision and differential GNSS.

A smart car map landmark positioning method fusing monocular vision and differential GNSS comprises the following steps:

S1: sensor synchronization obtains the vehicle and camera spatial coordinates corresponding to each image;

S2: data preprocessing detects landmarks in the image and extracts landmark feature points;

S3: the optical flow method tracks the image feature points;

S4: the landmark position is computed and the landmark descriptor is calculated.

In the smart car map landmark positioning method fusing monocular vision and differential GNSS, step S1 includes the following sub-steps:

S11: build the acquisition system: mount a camera and dual differential-GNSS positioning antennas on the vehicle roof so that the camera and the GNSS antennas lie in the same plane, the camera points in the same direction as the dual-antenna baseline, and the distance between the camera and the positioning antenna is d;

S12: perform temporal nearest-neighbor synchronization using the timestamps of the different sensor data streams; each image frame yields one set of standardized data that must satisfy:

ei = (Pi, Vi)

where Pi denotes the vehicle position and attitude corresponding to each image frame, i.e. (xi, yi, zi, αi, βi, γi), in which (xi, yi, zi) are the coordinates of the camera on the vehicle and (αi, βi, γi) are the three attitude angles of the camera on the vehicle, and Vi denotes the image corresponding to the current pose;
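For illustration only, a minimal sketch of this temporal nearest-neighbor matching is given below; the function and parameter names (sync_nearest_neighbor, max_dt) and the array layouts are assumptions made for the sketch, not taken from the patent.

```python
import numpy as np

def sync_nearest_neighbor(image_stamps, gnss_stamps, gnss_poses, max_dt=0.05):
    """Pair every image with the GNSS/INS pose closest to it in time (step S12).

    image_stamps : (M,) image timestamps in seconds
    gnss_stamps  : (N,) GNSS/INS timestamps in seconds
    gnss_poses   : (N, 6) rows of (x, y, z, alpha, beta, gamma)
    Returns a list of (P_i, image_index) pairs, i.e. the standardized data e_i = (P_i, V_i).
    """
    image_stamps = np.asarray(image_stamps)
    gnss_stamps = np.asarray(gnss_stamps)
    pairs = []
    for i, t_img in enumerate(image_stamps):
        j = int(np.argmin(np.abs(gnss_stamps - t_img)))  # temporal nearest neighbor
        if abs(gnss_stamps[j] - t_img) <= max_dt:        # discard pairs that are too far apart
            pairs.append((gnss_poses[j], i))
    return pairs
```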

S13: rectify each image using the camera distortion parameters to obtain an undistorted image, and convert the position obtained from GNSS into the camera position using the relative transformation between the sensors.
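The undistortion and antenna-to-camera offset of S13 could look roughly like the following OpenCV/SciPy sketch; the Euler-angle order ('zyx') and the lever-arm convention are assumptions, since the patent does not specify them.

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def undistort_image(image, K, dist_coeffs):
    """Rectify an image with the calibrated intrinsics K and distortion coefficients."""
    return cv2.undistort(image, K, dist_coeffs)

def camera_position_from_gnss(antenna_xyz, euler_deg, lever_arm):
    """Shift the GNSS antenna position by the antenna-to-camera lever arm.

    antenna_xyz : (3,) antenna position in the navigation frame (from differential GNSS)
    euler_deg   : (alpha, beta, gamma) attitude angles in degrees; a 'zyx' order is assumed
    lever_arm   : (3,) camera offset relative to the antenna, expressed in the vehicle frame
    """
    R = Rotation.from_euler('zyx', euler_deg, degrees=True).as_matrix()
    return np.asarray(antenna_xyz, dtype=float) + R @ np.asarray(lever_arm, dtype=float)
```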

Step S2 includes the following sub-steps:

S21: detect landmarks in the image using a deep learning algorithm; the detection result is the landmark type (ID) and its two-dimensional position in the image;

S22: extract ORB features from the image region in which the landmark was detected;
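A short sketch of restricting ORB extraction to the detected landmark region (step S22) using OpenCV follows; the box format and the n_features value are assumptions.

```python
import cv2
import numpy as np

def orb_features_in_box(gray_image, box, n_features=200):
    """Extract ORB keypoints and descriptors only inside a detected landmark box (step S22).

    gray_image : single-channel image
    box        : (x1, y1, x2, y2) detection rectangle in pixel coordinates
    """
    x1, y1, x2, y2 = box
    mask = np.zeros(gray_image.shape[:2], dtype=np.uint8)
    mask[y1:y2, x1:x2] = 255                       # restrict detection to the landmark region
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray_image, mask)
    return keypoints, descriptors
```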

Step S3 includes the following sub-steps:

S31: to judge whether a local area appearing in two consecutive image frames I and J is the same target, the following must hold:

I(x, y, t) = J(x′, y′, t + Δ)

where all points (x, y) have moved in one direction by (dx, dy), yielding (x′, y′).

S32: the point (x, y) at time t becomes (x + dx, y + dy) at time t + τ, so the matching problem reduces to minimizing the following windowed sum of squared differences:

ε(dx, dy) = Σ_{x = ux-wx}^{ux+wx} Σ_{y = uy-wy}^{uy+wy} [ I(x, y) - J(x + dx, y + dy) ]²

where wx and wy denote half the width and half the height of the window W, and ux and uy denote the image coordinates of the point to be matched. To obtain the best match, ε is minimized by setting the derivative of the above expression to zero and solving for the minimum; the resulting d is the tracked offset.
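In practice, this windowed minimization is what pyramidal Lucas-Kanade tracking implements; a sketch using OpenCV is shown below, where winSize plays the role of the (2wx + 1) x (2wy + 1) matching window above (the parameter values are assumptions).

```python
import cv2
import numpy as np

def track_points_lk(prev_gray, next_gray, prev_pts, win=21):
    """Track feature points between consecutive frames with pyramidal Lucas-Kanade.

    prev_pts : (N, 1, 2) float32 point coordinates in the previous frame.
    winSize corresponds to the (2*wx + 1) x (2*wy + 1) matching window in the cost above.
    """
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(win, win), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1                    # keep only points tracked successfully
    return prev_pts[good], next_pts[good]
```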

Step S4 further includes the following sub-steps:

S41: obtain, for landmark positioning, the imaging points of the landmark in the (N + 1) consecutive images from time t to time t + N;

S42: obtain the image landmark position using the coordinate-system transformation relationship, which satisfies:

z0 = Z * cosθ1

where (x0, y0, z0) is the position of the landmark point, which can be calculated from the same landmark point observed in multiple image frames whose positions are known;

S43: extract multiple feature points from the same landmark and track them, finally taking the average to calculate the position of this landmark.
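A sketch of steps S42-S43 under common assumptions follows: each tracked feature point is triangulated from two poses of the moving monocular camera (poses known from differential GNSS), and the per-point estimates for one landmark are averaged. The patent's own transformation formulas are only partially legible here, so standard two-view triangulation is used as a stand-in; the pose convention and function names are assumptions.

```python
import cv2
import numpy as np

def triangulate_tracked_point(K, pose_a, pose_b, pt_a, pt_b):
    """Triangulate one tracked feature from two poses of the same monocular camera.

    K              : (3, 3) camera intrinsic matrix
    pose_a, pose_b : (R, t) world-to-camera rotation and translation of the two frames,
                     known here from the differential-GNSS camera positions
    pt_a, pt_b     : (2,) pixel coordinates of the tracked point in the two frames
    """
    P_a = K @ np.hstack([pose_a[0], np.reshape(pose_a[1], (3, 1))])
    P_b = K @ np.hstack([pose_b[0], np.reshape(pose_b[1], (3, 1))])
    X_h = cv2.triangulatePoints(P_a, P_b,
                                np.float32(pt_a).reshape(2, 1),
                                np.float32(pt_b).reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()             # homogeneous -> Euclidean 3D point

def landmark_position(per_point_estimates):
    """Step S43: average the 3D estimates of all tracked feature points of one landmark."""
    return np.mean(np.asarray(per_point_estimates), axis=0)
```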

Description of the Drawings

Figure 1: framework of the landmark positioning system;

Figure 2: imaging of a landmark point in images captured at different times;

Figure 3: schematic of the landmark positioning principle.

Detailed Description of the Embodiments

In order to provide a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments of the present invention are now described with reference to the accompanying drawings.

In this embodiment, a smart car map landmark positioning method fusing monocular vision and differential GNSS comprises the following steps:

S1: sensor synchronization obtains the vehicle and camera spatial coordinates corresponding to each image;

S2: data preprocessing detects landmarks in the image and extracts landmark feature points;

S3: the optical flow method tracks the image feature points;

S4: the landmark position is computed and the landmark descriptor is calculated.

In the smart car map landmark positioning method fusing monocular vision and differential GNSS, step S1 includes the following sub-steps:

S11: build the acquisition system: mount a camera and dual differential-GNSS positioning antennas on the vehicle roof so that the camera and the GNSS antennas lie in the same plane, the camera points in the same direction as the dual-antenna baseline, and the distance between the camera and the positioning antenna is d;

S12: perform temporal nearest-neighbor synchronization using the timestamps of the different sensor data streams; each image frame yields one set of standardized data that must satisfy:

ei = (Pi, Vi)

where Pi denotes the vehicle position and attitude corresponding to each image frame, i.e. (xi, yi, zi, αi, βi, γi), in which (xi, yi, zi) are the coordinates of the camera on the vehicle and (αi, βi, γi) are the three attitude angles of the camera on the vehicle, and Vi denotes the image corresponding to the current pose;

S13: rectify each image using the camera distortion parameters to obtain an undistorted image, and convert the position obtained from GNSS into the camera position using the relative transformation between the sensors.

In the smart car map landmark positioning method fusing monocular vision and differential GNSS, step S2 includes the following sub-steps:

S21: detect landmarks in the image (such as traffic lights, traffic signs, and road markings) using a deep learning algorithm; the detection result is the landmark type (ID) and its two-dimensional position in the image, the position being expressed by two pixel points Rect(T1, T2, B1, B2).
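The patent does not name a specific detector; any deep-learning detector that outputs class labels and bounding boxes (for example an SSD, YOLO, or Faster R-CNN model) fits this step. A small sketch of converting generic detector output into the (ID, Rect(T1, T2, B1, B2)) form is shown below, assuming Rect stores the top-left and bottom-right pixel corners (an interpretation, since the patent does not define T and B).

```python
import numpy as np

def detections_to_landmarks(boxes, labels, scores, score_thresh=0.5):
    """Convert generic detector output into the (ID, Rect) form used in step S21.

    boxes  : (N, 4) rows of [x_min, y_min, x_max, y_max] in pixels
    labels : (N,) landmark type IDs (traffic light, traffic sign, road marking, ...)
    scores : (N,) detection confidences
    """
    landmarks = []
    for (x1, y1, x2, y2), lab, s in zip(boxes, labels, scores):
        if s < score_thresh:
            continue                                  # drop low-confidence detections
        rect = (int(x1), int(y1), int(x2), int(y2))   # Rect(T1, T2, B1, B2): two corner pixels
        landmarks.append({"id": int(lab), "rect": rect})
    return landmarks
```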

S22: extract ORB features from the image region in which the landmark was detected;

In the smart car map landmark positioning method fusing monocular vision and differential GNSS, step S3 includes the following sub-steps:

S31: track feature points between frames using the optical flow method; if a feature point falls outside the landmark detection box in some frame, it is deleted (a sketch of this box check is given after the formula below). The optical flow method rests on the assumptions that the target produces only small, consistent displacements in the video stream, that brightness is constant, and that adjacent frames undergo similar motion. To judge whether a local area appearing in two consecutive image frames I and J is the same target, the following must hold:

I(x, y, t) = J(x′, y′, t + Δ)

where all points (x, y) have moved in one direction by (dx, dy), yielding (x′, y′).
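A brief sketch of the box check mentioned in S31 above (dropping tracked points that leave the landmark detection rectangle); the names and box format are assumptions.

```python
import numpy as np

def keep_points_in_box(points, box):
    """Drop tracked feature points that drift outside the landmark detection box (step S31).

    points : (N, 2) tracked point coordinates in the current frame
    box    : (x1, y1, x2, y2) detection rectangle in the current frame
    """
    x1, y1, x2, y2 = box
    pts = np.asarray(points, dtype=float)
    inside = ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
              (pts[:, 1] >= y1) & (pts[:, 1] <= y2))
    return pts[inside]
```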

S32: the point (x, y) at time t becomes (x + dx, y + dy) at time t + τ, so the matching problem reduces to minimizing the following windowed sum of squared differences:

ε(dx, dy) = Σ_{x = ux-wx}^{ux+wx} Σ_{y = uy-wy}^{uy+wy} [ I(x, y) - J(x + dx, y + dy) ]²

where wx and wy denote half the width and half the height of the window W, and ux and uy denote the image coordinates of the point to be matched. To obtain the best match, ε is minimized by setting the derivative of the above expression to zero and solving for the minimum; the resulting d is the tracked offset.

Step S4 further includes the following sub-steps:

S41: as shown in Figure 2, during landmark positioning the same three-dimensional landmark point is imaged at different points in the (N + 1) consecutive images from time t to time t + N; O is the camera center at each time, and Z denotes the distance between the three-dimensional landmark point and the camera at the different times (Z is unknown).

S42: Figure 3 shows the landmark positioning principle. According to the pinhole imaging model of the camera, the two angles of the vector from the landmark point to the camera center can be determined from the image; using the coordinate-system transformation relationship:

z0 = Z * cosθ1

where (x0, y0, z0) is the position of the landmark point, which can be calculated from the same landmark point observed in multiple image frames whose positions are known;
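A sketch of the pinhole back-projection used in S42 to recover the viewing direction (and its two angles) of a landmark feature point from its pixel coordinates; the specific angle convention below is an assumption, since the patent only states that two angles are obtained from the image. With the bearing known in each of the (N + 1) frames and the camera positions known from differential GNSS, the unknown range Z and hence (x0, y0, z0) follow from intersecting the rays.

```python
import numpy as np

def pixel_bearing_angles(K, pixel):
    """Back-project a pixel to its viewing ray under the pinhole model (step S42).

    K     : (3, 3) intrinsic matrix
    pixel : (u, v) image coordinates of the landmark feature point
    Returns the unit bearing vector in the camera frame and an (azimuth, elevation)
    pair describing that vector; the angle convention is an assumption.
    """
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the back-projected ray
    ray /= np.linalg.norm(ray)
    azimuth = np.arctan2(ray[0], ray[2])             # horizontal angle about the optical axis
    elevation = np.arctan2(ray[1], ray[2])           # vertical angle
    return ray, azimuth, elevation
```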

S43: extract multiple feature points from the same landmark and track them, finally taking the average to calculate the position of this landmark.

The present invention proposes a smart car map landmark positioning method fusing monocular vision and differential GNSS that achieves fast, high-precision landmark positioning. The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiments; the embodiments and the description merely illustrate the principles of the present invention, and various changes and improvements may be made without departing from the spirit and scope of the present invention, all of which fall within the scope of the claimed invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.

Claims (5)

1. A smart car map landmark positioning method fusing monocular vision and differential GNSS, characterized by comprising the following steps:
S1: preprocessing, acquiring data with the acquisition system that has been built;
S2: image landmark detection, detecting landmarks with a deep learning algorithm to obtain landmark detection results;
S3: landmark feature point extraction, extracting ORB feature points from the above image landmark detection results;
S4: landmark tracking, performing inter-frame tracking with the optical flow method to obtain, in the current frame, the feature points corresponding to those of the previous frame;
S5: landmark positioning, triangulating the multiple feature points extracted for one landmark and averaging the results to represent the current landmark position;
S6: landmark descriptor calculation, extracting a descriptor from the landmark image to obtain a unique description of the landmark;
S7: landmark storage, repeating steps S2-S6 to determine the positions of the different landmarks, wherein the different landmark positions are stored in a landmark database for constructing the smart car map.
2. The smart car map landmark positioning method fusing monocular vision and differential GNSS according to claim 1, characterized in that S1 comprises the following sub-steps:
S11: building the acquisition system, with a camera and dual differential-GNSS positioning antennas mounted on the vehicle roof;
S12: performing temporal nearest-neighbor synchronization using the timestamps of the different sensor data, each image frame yielding one set of standardized data that satisfies:
ei = (Pi, Vi)
where Pi denotes the vehicle position and attitude corresponding to each image frame, i.e. (xi, yi, zi, αi, βi, γi), in which (xi, yi, zi) are the coordinates of the camera on the vehicle and (αi, βi, γi) are the three attitude angles of the camera on the vehicle, and Vi denotes the image corresponding to the current pose;
S13: obtaining the camera position through the relative position transformation between the sensors.
3. The smart car map landmark positioning method fusing monocular vision and differential GNSS according to claim 1, characterized in that the landmark detection result is set to the landmark type and the two-dimensional position in the detection image, the two-dimensional position in the detection image being expressed by two pixel points.
4. The smart car map landmark positioning method fusing monocular vision and differential GNSS according to claim 1, characterized in that step S4 comprises the following sub-steps:
S41: judging whether a certain local area appearing in two consecutive image frames I and J is the same target; if it is, the following must hold:
I(x, y, t) = J(x′, y′, t + Δ)
where all points (x, y) have moved in one direction by (dx, dy), yielding (x′, y′);
S42: the point (x, y) at time t becomes (x + dx, y + dy) at time t + τ, so the matching problem reduces to minimizing the following windowed sum of squared differences:
ε(dx, dy) = Σ_{x = ux-wx}^{ux+wx} Σ_{y = uy-wy}^{uy+wy} [ I(x, y) - J(x + dx, y + dy) ]²
where wx and wy denote half the width and half the height of the window W, and ux and uy denote the image coordinates of the point to be matched; to obtain the best match, ε is minimized by setting the derivative of the above expression to zero and solving for the minimum, and the resulting d is the tracked offset.
5. The smart car map landmark positioning method fusing monocular vision and differential GNSS according to claim 1, characterized in that step S5 further comprises the following sub-steps:
S51: obtaining, for landmark positioning, the imaging points in the (N + 1) consecutive images from time t to time t + N;
S52: obtaining the image landmark position using the coordinate-system transformation relationship, the transformation relationship satisfying:
z0 = Z * cosθ1
where (x0, y0, z0) is the position of the landmark point, which can be calculated from the same landmark point observed in multiple image frames whose positions are known;
S53: extracting multiple feature points from the same landmark and tracking them, finally taking the average to calculate the position of this landmark.
CN201910684352.6A 2019-07-26 2019-07-26 A smart car map landmark positioning method integrating monocular vision and differential GNSS Pending CN110514212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910684352.6A CN110514212A (en) 2019-07-26 2019-07-26 A smart car map landmark positioning method integrating monocular vision and differential GNSS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910684352.6A CN110514212A (en) 2019-07-26 2019-07-26 A smart car map landmark positioning method integrating monocular vision and differential GNSS

Publications (1)

Publication Number Publication Date
CN110514212A true CN110514212A (en) 2019-11-29

Family

ID=68624160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910684352.6A Pending CN110514212A (en) 2019-07-26 2019-07-26 A smart car map landmark positioning method integrating monocular vision and differential GNSS

Country Status (1)

Country Link
CN (1) CN110514212A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111273674A (en) * 2020-03-12 2020-06-12 深圳冰河导航科技有限公司 Distance measurement method, vehicle operation control method and control system
CN111337950A (en) * 2020-05-21 2020-06-26 深圳市西博泰科电子有限公司 Data processing method, device, equipment and medium for improving landmark positioning accuracy
CN111611913A (en) * 2020-05-20 2020-09-01 北京海月水母科技有限公司 Human-shaped positioning technology of monocular face recognition probe
CN111856499A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Map construction method and device based on laser radar
CN113358125A (en) * 2021-04-30 2021-09-07 西安交通大学 Navigation method and system based on environmental target detection and environmental target map
CN114708482A (en) * 2022-02-24 2022-07-05 之江实验室 Topological graph scene recognition method and device based on density filtering and landmark saliency
CN114742885A (en) * 2022-06-13 2022-07-12 山东省科学院海洋仪器仪表研究所 Target consistency judgment method in binocular vision system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080167814A1 (en) * 2006-12-01 2008-07-10 Supun Samarasekera Unified framework for precise vision-aided navigation
CN105674993A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Binocular camera-based high-precision visual sense positioning map generation system and method
CN108534782A (en) * 2018-04-16 2018-09-14 电子科技大学 A kind of instant localization method of terrestrial reference map vehicle based on binocular vision system
CN108801274A (en) * 2018-04-16 2018-11-13 电子科技大学 A kind of terrestrial reference ground drawing generating method of fusion binocular vision and differential satellite positioning
CN108986037A (en) * 2018-05-25 2018-12-11 重庆大学 Monocular vision odometer localization method and positioning system based on semi-direct method
CN109544636A (en) * 2018-10-10 2019-03-29 广州大学 A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN109583409A (en) * 2018-12-07 2019-04-05 电子科技大学 A kind of intelligent vehicle localization method and system towards cognitive map

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080167814A1 (en) * 2006-12-01 2008-07-10 Supun Samarasekera Unified framework for precise vision-aided navigation
CN105674993A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Binocular camera-based high-precision visual sense positioning map generation system and method
CN108534782A (en) * 2018-04-16 2018-09-14 电子科技大学 A kind of instant localization method of terrestrial reference map vehicle based on binocular vision system
CN108801274A (en) * 2018-04-16 2018-11-13 电子科技大学 A kind of terrestrial reference ground drawing generating method of fusion binocular vision and differential satellite positioning
CN108986037A (en) * 2018-05-25 2018-12-11 重庆大学 Monocular vision odometer localization method and positioning system based on semi-direct method
CN109544636A (en) * 2018-10-10 2019-03-29 广州大学 A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN109583409A (en) * 2018-12-07 2019-04-05 电子科技大学 A kind of intelligent vehicle localization method and system towards cognitive map

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
李承等: "基于GPS与图像融合的智能车辆高精度定位算法", 《交通运输系统工程与信息》 *
李承等: "面向智能车定位的道路环境视觉地图构建", 《中国公路学报》 *
骆佩佩: "面向认知地图的智能车定位系统及其应用", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技II辑》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111273674A (en) * 2020-03-12 2020-06-12 深圳冰河导航科技有限公司 Distance measurement method, vehicle operation control method and control system
CN111611913A (en) * 2020-05-20 2020-09-01 北京海月水母科技有限公司 Human-shaped positioning technology of monocular face recognition probe
CN111337950A (en) * 2020-05-21 2020-06-26 深圳市西博泰科电子有限公司 Data processing method, device, equipment and medium for improving landmark positioning accuracy
CN111337950B (en) * 2020-05-21 2020-10-30 深圳市西博泰科电子有限公司 Data processing method, device, equipment and medium for improving landmark positioning accuracy
CN111999745A (en) * 2020-05-21 2020-11-27 深圳市西博泰科电子有限公司 Data processing method, device and equipment for improving landmark positioning precision
CN111856499A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Map construction method and device based on laser radar
CN113358125A (en) * 2021-04-30 2021-09-07 西安交通大学 Navigation method and system based on environmental target detection and environmental target map
CN113358125B (en) * 2021-04-30 2023-04-28 西安交通大学 Navigation method and system based on environment target detection and environment target map
CN114708482A (en) * 2022-02-24 2022-07-05 之江实验室 Topological graph scene recognition method and device based on density filtering and landmark saliency
CN114742885A (en) * 2022-06-13 2022-07-12 山东省科学院海洋仪器仪表研究所 Target consistency judgment method in binocular vision system
CN114742885B (en) * 2022-06-13 2022-08-26 山东省科学院海洋仪器仪表研究所 Target consistency judgment method in binocular vision system

Similar Documents

Publication Publication Date Title
CN110514212A (en) A smart car map landmark positioning method integrating monocular vision and differential GNSS
CN109579843B (en) A multi-robot cooperative localization and fusion mapping method from multiple perspectives in open space
US10909395B2 (en) Object detection apparatus
Tardif et al. Monocular visual odometry in urban environments using an omnidirectional camera
CN111882612A (en) A vehicle multi-scale localization method based on 3D laser detection of lane lines
CN109583409A (en) A kind of intelligent vehicle localization method and system towards cognitive map
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
US8059887B2 (en) System and method for providing mobile range sensing
WO2017080108A1 (en) Flying device, flying control system and method
WO2017080102A1 (en) Flying device, flying control system and method
US20160238394A1 (en) Device for Estimating Position of Moving Body and Method for Estimating Position of Moving Body
CN106548173A (en) A kind of improvement no-manned plane three-dimensional information getting method based on classification matching strategy
KR101709317B1 (en) Method for calculating an object's coordinates in an image using single camera and gps
CN114332158A (en) A 3D real-time multi-target tracking method based on fusion of camera and lidar
CN114370871A (en) A tightly coupled optimization method for visible light positioning and lidar inertial odometry
CN116128966B (en) A semantic localization method based on environmental objects
CN113790728B (en) Loose coupling multi-sensor fusion positioning algorithm based on visual odometer
CN103456027B (en) Time sensitivity target detection positioning method under airport space relation constraint
KR20130034528A (en) Position measuring method for street facility
Majdik et al. Micro air vehicle localization and position tracking from textured 3d cadastral models
CN103456026A (en) Method for detecting ground moving object under road landmark constraints
CN112985388B (en) Combined navigation method and system based on large-displacement optical flow method
CN116151320A (en) Visual odometer method and device for resisting dynamic target interference
CN116259001A (en) Multi-view fusion three-dimensional pedestrian posture estimation and tracking method
CN114993293A (en) Synchronous positioning and mapping method for mobile unmanned system in indoor weak texture environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20191129