
CN114066972B - A UAV autonomous positioning method based on monocular vision - Google Patents

A UAV autonomous positioning method based on monocular vision

Info

Publication number
CN114066972B
CN114066972B (application CN202111242626.XA)
Authority
CN
China
Prior art keywords
image
coordinates
slam
feature
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111242626.XA
Other languages
Chinese (zh)
Other versions
CN114066972A (en)
Inventor
耿虎军
关俊志
高峰
张泽勇
李晨阳
王雅涵
蔡迎哲
柴兴华
陈彦桥
彭会湘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute filed Critical CETC 54 Research Institute
Priority to CN202111242626.XA
Publication of CN114066972A
Application granted
Publication of CN114066972B
Legal status: Active

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle (UAV) autonomous positioning method based on monocular vision, belonging to the field of UAV autonomous flight. The method uses a monocular camera as the sole sensor. An image hybrid registration algorithm solves the matching problem caused by the large appearance difference between remote sensing images and images captured by the UAV; after the homonymous (same-name) feature points are obtained, the pose of the UAV in the geodetic coordinate system is calculated by a PnP algorithm. This corrects the drift that monocular visual SLAM accumulates through error build-up in actual operation, and achieves high-precision autonomous positioning of the UAV in a GPS-denied environment.

Description

Unmanned aerial vehicle autonomous positioning method based on monocular vision
Technical Field
The invention belongs to the field of unmanned aerial vehicle autonomous flight, and particularly relates to an unmanned aerial vehicle autonomous positioning method based on monocular vision.
Background
With the rapid development of information science, unmanned aerial vehicles (UAVs) have become widely used in everyday life, and their application fields and research scope keep expanding, covering areas such as post-disaster search and rescue, aerial photography, and crop monitoring. The positioning system is an important guarantee that a UAV can successfully complete its tasks.
Currently, UAV positioning is mainly implemented with the Global Positioning System (GPS). GPS positioning has many advantages: the method is mature and easy to integrate, and its accuracy is high when the outdoor signal is good. But it has one major drawback: it depends on an external signal. If the GPS signal is blocked, jammed or lost, positioning fails and the UAV may lose control or even crash. Therefore, autonomous positioning technology is essential for achieving real-time, accurate, self-contained positioning of the UAV.
Visual-inertial odometry is currently a widely used UAV autonomous positioning method; while estimating the UAV's pose parameters, it can also produce a sparse three-dimensional reconstruction of the scene. However, because the method computes relative poses between adjacent time instants, error accumulates while the system runs, which severely limits the accuracy of visual-inertial odometry.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a UAV autonomous positioning method based on monocular vision. It uses a monocular camera as the sole sensor, achieves high-precision autonomous positioning based on image hybrid registration and PnP, overcomes the error-accumulation problem of existing autonomous positioning systems, and has good practical value.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
An unmanned aerial vehicle autonomous positioning method based on monocular vision comprises the following steps:
(1) Controlling the unmanned aerial vehicle (UAV) to climb to a plurality of positions at different heights;
(2) Acquiring images at each position and performing SLAM autonomous positioning from the image information to obtain the coordinates of the UAV in the SLAM coordinate system;
(3) Loading a remote sensing image of the UAV flight area, performing image hybrid registration between the remote sensing image and the images acquired by the UAV, and computing the homonymous feature points;
(4) Solving the coordinates of the UAV in the geodetic coordinate system with a PnP algorithm, using the longitude-latitude coordinates and elevation information of the homonymous feature points in the remote sensing image and their two-dimensional positions in the UAV-acquired image;
(5) Calculating the transformation matrix between the two coordinate systems and the SLAM scale information from the coordinates of the UAV in the SLAM coordinate system and in the geodetic coordinate system;
(6) Converting the SLAM coordinates into geodetic coordinates using the transformation matrix, and correcting the coordinates obtained by the SLAM conversion using the coordinates of the UAV in the geodetic coordinate system obtained in step (4).
Further, the specific mode of the step (2) is as follows:
(201) Starting the camera through an image acquisition program, collecting an image sequence, and publishing the images in the form of ROS nodes;
(202) Subscribing to the image information through a positioning program;
(203) Performing feature detection on the image sequence to obtain the position information and descriptor information of the feature points;
(204) Tracking the feature points in the images with a feature tracking method to obtain the coordinates of the same feature point in different images;
(205) Calculating the pose transformation between different images by a multi-view geometry method;
(206) Optimizing the UAV pose with bundle adjustment to obtain the SLAM positioning result, and publishing the SLAM positioning result in the form of ROS nodes (steps (203)-(205) are illustrated by the sketch after this list).
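As a rough illustration of steps (203)-(205), the following minimal Python sketch uses OpenCV to detect corners, track them with KLT optical flow, and recover the relative pose from the essential matrix. It assumes a calibrated pinhole camera with intrinsic matrix K and two consecutive grayscale frames; the function name and parameters are illustrative assumptions, not the patent's actual implementation, and monocular translation is recovered only up to scale.

```python
import cv2
import numpy as np

def track_and_estimate_pose(prev_img, cur_img, K):
    # (203) Feature detection: corner positions (descriptor extraction omitted)
    pts_prev = cv2.goodFeaturesToTrack(prev_img, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)

    # (204) Feature tracking: KLT optical flow finds the same points in cur_img
    pts_cur, status, _err = cv2.calcOpticalFlowPyrLK(prev_img, cur_img,
                                                     pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_cur[good]

    # (205) Multi-view geometry: essential matrix with RANSAC, then
    # decompose it into relative rotation R and unit-scale translation t
    E, inlier_mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inlier_mask)
    return R, t
```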
Further, the specific mode of the step (3) is as follows:
(301) Performing multi-feature extraction on the remote sensing image and on the UAV-acquired image respectively, including SIFT feature extraction, SURF feature extraction, ORB feature extraction, edge feature extraction, and descriptor extraction;
(302) Matching point features and edge features through similarity detection;
(303) Performing a combined multi-feature-point analysis of the feature distribution in the UAV-acquired image; if the feature distribution is uneven, that frame is discarded;
(304) Obtaining the homonymous feature points shared by the remote sensing image and the UAV-acquired image (a sketch of steps (301)-(303) follows this list).
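A hedged sketch of steps (301)-(303): SIFT and ORB extraction plus a Canny edge map, descriptor matching with Lowe's ratio test, and a crude grid-based uniformity check. SURF is omitted because it is non-free in standard OpenCV builds, and the thresholds and the uniformity heuristic are illustrative assumptions rather than the patent's hybrid registration algorithm.

```python
import cv2

def extract_multi_features(img):
    # (301) Multi-feature extraction: SIFT and ORB keypoints/descriptors
    # plus a Canny edge map
    sift = cv2.SIFT_create()
    orb = cv2.ORB_create(nfeatures=1000)
    kp_sift, des_sift = sift.detectAndCompute(img, None)
    kp_orb, des_orb = orb.detectAndCompute(img, None)
    edges = cv2.Canny(img, 50, 150)
    return (kp_sift, des_sift), (kp_orb, des_orb), edges

def match_with_ratio_test(des1, des2, norm=cv2.NORM_L2, ratio=0.75):
    # (302) Similarity detection via Lowe's ratio test
    # (use cv2.NORM_HAMMING for binary ORB descriptors)
    matcher = cv2.BFMatcher(norm)
    pairs = matcher.knnMatch(des1, des2, k=2)
    return [m for m, n in pairs if m.distance < ratio * n.distance]

def distribution_is_uniform(keypoints, img_shape, grid=4, min_cells=10):
    # (303) Crude uniformity check: if the matched features occupy too few
    # cells of a grid x grid partition, the frame should be discarded
    h, w = img_shape[:2]
    occupied = {(int(kp.pt[0] * grid / w), int(kp.pt[1] * grid / h))
                for kp in keypoints}
    return len(occupied) >= min_cells
```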
Further, the specific mode of the step (4) is as follows:
(401) Reading the three-dimensional geodetic coordinates of the homonymous feature points according to their pixel positions in the remote sensing image, and obtaining their two-dimensional position coordinates in the UAV-acquired image;
(402) Solving the position of the UAV in the geodetic coordinate system with a PnP algorithm, and publishing the position in the form of ROS nodes (see the sketch below).
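Step (402) corresponds to a standard PnP solve. A minimal sketch with OpenCV, assuming the feature points' latitude/longitude/elevation have already been converted to a local metric frame such as ENU (solvePnP needs Euclidean coordinates, not raw geographic coordinates) and that K holds the camera intrinsics; this is an illustrative sketch rather than the patented procedure.

```python
import cv2
import numpy as np

def solve_uav_position(pts3d_local, pts2d_px, K, dist_coeffs=None):
    # (401)-(402): pts3d_local are homonymous feature points from the remote
    # sensing image (lat/lon/elevation converted to a local metric frame),
    # pts2d_px are their pixel positions in the UAV image
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(pts3d_local, dtype=np.float64),
        np.asarray(pts2d_px, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)  # EPnP variant
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)       # rotation world -> camera
    return (-R.T @ tvec).ravel()     # camera (UAV) center in the world frame
```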
Further, the specific mode of the step (5) is as follows:
(501) Subscribing, through a fusion program, to the positioning result published in step (206) and the position published in step (402), and synchronizing them according to timestamp information;
(502) Calculating the SLAM scale information and the transformation matrix between the SLAM coordinate system and the geodetic coordinate system with a point cloud alignment algorithm (see the sketch below).
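Step (502) is the classical least-squares alignment of two 3-D point sets, extended with a scale factor because monocular SLAM is scale-ambiguous. The sketch below is an Umeyama-style SVD solution in the spirit of the Arun et al. method cited later as document [6]; the variable names are assumptions.

```python
import numpy as np

def align_slam_to_geodetic(slam_pts, geo_pts):
    """Solve geo ~ s * R @ slam + t for scale s, rotation R, translation t.

    slam_pts, geo_pts: Nx3 arrays of time-synchronized UAV positions in the
    SLAM frame and in a local metric geodetic frame (N >= 3, not collinear).
    """
    P, Q = np.asarray(slam_pts, float), np.asarray(geo_pts, float)
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q

    # SVD of the cross-covariance gives the optimal rotation
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:
        D[2, 2] = -1.0               # guard against a reflection
    R = Vt.T @ D @ U.T

    s = np.trace(np.diag(S) @ D) / (Pc ** 2).sum()   # SLAM scale factor
    t = mu_q - s * R @ mu_p
    return s, R, t
```

The solved (s, R, t) then realize the conversion in step (6) as p_geo = s * R @ p_slam + t.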
Further, in step (6), the coordinates obtained by the SLAM conversion are corrected with an extended Kalman filter (EKF); a simplified sketch of the update follows.
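The correction in step (6) can be pictured as a Kalman measurement update on position. The sketch below is deliberately simplified: with an identity measurement model, the EKF update reduces to the linear form shown, whereas a full implementation would also carry velocity and attitude states; the dimensions and noise covariances are assumptions.

```python
import numpy as np

def position_measurement_update(x, P, z, R_meas):
    # x: 3-vector SLAM-derived geodetic position, P: its 3x3 covariance
    # z: PnP-derived geodetic position measurement, R_meas: its covariance
    H = np.eye(3)                    # direct position observation
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R_meas         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```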
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an effective UAV autonomous positioning method. An image hybrid registration algorithm solves the matching problem caused by the large difference between remote sensing images and images captured by the UAV; once the homonymous feature points are obtained, the UAV pose in the geodetic coordinate system is computed with a PnP algorithm, which corrects the drift that monocular visual SLAM accumulates in actual operation, thereby achieving high-precision autonomous positioning of the UAV in a GPS-denied environment.
2. Registration of cross-source images has long been a research hotspot in this field. The invention achieves accurate registration of strongly differing cross-source images through an effective fusion of multiple features, which is an important innovation over the prior art.
3. The drift caused by accumulated SLAM error is a stubborn obstacle to applying UAVs to autonomous navigation and positioning. The invention solves the UAV pose through remote sensing image registration and a PnP algorithm, then fuses and corrects it with the pose obtained by visual SLAM, achieving autonomous positioning in a fully GPS-denied environment.
Drawings
Fig. 1 is a flowchart of an autonomous positioning method of an unmanned aerial vehicle in an embodiment of the present invention.
Detailed Description of the Preferred Embodiments
The present invention will be further described in detail below with reference to the drawings and examples, to help those of ordinary skill in the art understand and practice it. It should be understood that the examples described herein only illustrate and explain the present invention and do not limit its scope.
The UAV autonomous positioning method based on monocular vision is implemented by several programs running under the ROS robot operating system on Ubuntu, and comprises the following steps:
(1) The UAV is manually flown to several positions at different heights.
(2) An image acquisition program is started and publishes images in the form of ROS nodes; an autonomous positioning program subscribes to the image information in real time, performs SLAM autonomous positioning, and finally publishes the positioning result in the form of ROS nodes.
(3) The remote sensing image of the UAV flight area is loaded, the images acquired in step (2) are subscribed to, image hybrid registration is performed between the remote sensing image and the images captured by the UAV, and the homonymous feature points are computed.
(4) The position of the UAV is solved by a PnP (Perspective-n-Point) algorithm from the longitude-latitude coordinates and elevation information of the homonymous feature points in the remote sensing image and their two-dimensional positions in the image captured by the UAV.
(5) The transformation matrix between the two coordinate systems and the SLAM scale information are calculated from the coordinates of the UAV in the SLAM coordinate system and in the geodetic coordinate system.
(6) The transformation matrix converts the SLAM coordinates into geodetic coordinates; since SLAM accumulates error during operation, the coordinates obtained from the SLAM conversion are corrected during positioning with the UAV geodetic coordinates obtained by image hybrid registration and PnP.
The specific mode of the step (2) is as follows:
(201) The camera is started and the images are published in the form of ROS nodes;
(202) The positioning program subscribes to the image nodes;
(203) Feature detection is performed on the onboard camera's image sequence to obtain the position information and descriptor information of the feature points;
(204) The feature points are tracked across images with a feature tracking method to obtain the coordinates of the same feature point in different images;
(205) The pose transformation between different camera images is calculated by a multi-view geometry method;
(206) The UAV pose and the three-dimensional point cloud coordinates are optimized with bundle adjustment, yielding the position in the SLAM coordinate system; the positioning result is published in the form of ROS nodes.
The specific mode of the step (3) is as follows:
(301) Multi-feature extraction is performed on the remote sensing image and the image captured by the UAV respectively, including SIFT feature extraction, SURF feature extraction, ORB feature extraction, edge feature extraction, and descriptor extraction;
(302) Similarity detection is performed to match point features and edge features;
(303) A combined analysis of the multiple feature points ensures that a sufficient number of matched features exist in each image region;
(304) The homonymous feature points are obtained.
The specific mode of the step (4) is as follows:
(401) The three-dimensional geodetic coordinates are read according to the pixel positions of the homonymous feature points in the remote sensing image and denoted (X_i, Y_i, Z_i), where i indexes the feature points within one image; the image coordinates of the same feature points in the image captured by the UAV are (u_i, v_i);
(402) The position of the UAV in the geodetic coordinate system is solved with a PnP algorithm and published in the form of ROS nodes.
The specific mode of the step (5) is as follows:
(501) A fusion program subscribes to the position node published by the SLAM program and the position node published in step (402), and synchronizes them according to timestamp information;
(502) The SLAM scale information and the transformation matrix between the two coordinate systems are calculated with a point cloud alignment algorithm.
The following is a more specific example:
As shown in Fig. 1, the UAV autonomous positioning method based on monocular vision comprises the following steps:
Step 1, manually operating the unmanned aerial vehicle to climb to several positions at different heights;
Step 2, the onboard computing unit (NVIDIA NX) starts the image acquisition program and publishes images in the form of ROS nodes; the autonomous positioning program subscribes to the image information in real time, performs SLAM autonomous positioning, and finally publishes the positioning result in the form of ROS nodes;
Step 2.1, starting the camera acquisition program and publishing the images in the form of ROS nodes;
Step 2.2, starting the positioning program and subscribing to the image nodes;
Step 2.3, performing feature detection on the camera image sequence to obtain the position information and descriptor information of the feature points;
Step 2.4, tracking the feature points across images with a feature tracking method to obtain the coordinates of the same feature point in different images;
Step 2.5, calculating the pose transformation between different camera images by a multi-view geometry method;
Step 2.6, optimizing the UAV pose and the three-dimensional point cloud coordinates with the bundle adjustment method to obtain the position in the SLAM coordinate system; the positioning result is published in the form of ROS nodes (a toy bundle adjustment residual is sketched below).
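To make Step 2.6 concrete, the sketch below sets up a toy bundle adjustment residual (observed pixel minus reprojected pixel) that could be minimized with scipy.optimize.least_squares. The angle-axis pose packing, the per-observation loop, and the absence of a sparse Jacobian are all simplifying assumptions made here; real SLAM back ends use sparse solvers such as g2o or Ceres.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def ba_residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, obs_2d):
    # params packs n_cams poses (rvec + tvec, 6 values each), then n_pts
    # 3-D points (3 values each); one residual pair per 2-D observation
    poses = params[:n_cams * 6].reshape(n_cams, 6)
    points = params[n_cams * 6:].reshape(n_pts, 3)
    residuals = []
    for c, p, uv in zip(cam_idx, pt_idx, obs_2d):
        rvec, tvec = poses[c, :3], poses[c, 3:]
        proj, _ = cv2.projectPoints(points[p].reshape(1, 1, 3),
                                    rvec, tvec, K, None)
        residuals.append(proj.ravel() - uv)   # reprojection error in pixels
    return np.concatenate(residuals)

# e.g. result = least_squares(ba_residuals, x0, method="trf",
#                             args=(n_cams, n_pts, K, cam_idx, pt_idx, obs_2d))
```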
Step 3, loading the remote sensing image of the UAV flight area, subscribing to the images acquired in Step 2, performing image hybrid registration between the remote sensing image and the images captured by the UAV, and computing the homonymous feature points:
Step 3.1, performing multi-feature extraction on the remote sensing image and the image captured by the UAV respectively, including SIFT feature extraction, SURF feature extraction, ORB feature extraction, edge feature extraction, and descriptor extraction; the specific algorithms are described in documents [1] to [4];
[1] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[2] Bay H, Ess A, Tuytelaars T, et al. Speeded-Up Robust Features (SURF) [J]. Computer Vision and Image Understanding (CVIU), 2008, 110(3): 346-359.
[3] Ethan Rublee, Vincent Rabaud, Kurt Konolige, et al. ORB: An Efficient Alternative to SIFT or SURF [C]. 2011 International Conference on Computer Vision. IEEE, 2011: 2564-2571.
[4] Canny J F. A Computational Approach to Edge Detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.
Step 3.2, performing similarity detection to match point features and edge features;
Step 3.3, ensuring through the combined analysis of the multiple feature points that a sufficient number of matched features exist in each image region;
Step 3.4, obtaining the homonymous feature points.
Step 4, solving the position of the UAV with a PnP algorithm from the longitude-latitude coordinates and elevation information of the homonymous feature points in the remote sensing image and their two-dimensional positions in the image captured by the UAV; the specific algorithm is described in document [5]:
[5] Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua. EPnP: An Accurate O(n) Solution to the PnP Problem. International Journal of Computer Vision, 81(2): 155-166, 2009.
Step 4.1, reading the three-dimensional geodetic coordinates according to the pixel positions of the homonymous feature points in the remote sensing image and denoting them (X_i, Y_i, Z_i), where i indexes the feature points within one image; the image coordinates of the same feature points in the image captured by the UAV are (u_i, v_i);
Step 4.2, solving the position of the UAV in the geodetic coordinate system with the PnP algorithm and publishing it in the form of ROS nodes.
Step 5, calculating the transformation matrix between the two coordinate systems and the SLAM scale information from the coordinates of the UAV in the SLAM coordinate system and in the geodetic coordinate system:
Step 5.1, the fusion algorithm subscribes to the position node published by the SLAM program and the position node published in Step 4.2, and synchronizes them according to timestamp information;
Step 5.2, calculating the SLAM scale information and the transformation matrix between the two coordinate systems with a point cloud alignment algorithm; the specific algorithm is described in document [6]:
[6] K. S. Arun, T. S. Huang and S. D. Blostein. Least-Squares Fitting of Two 3-D Point Sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pages 698-700, Sept 1987.
Step 6, converting the SLAM coordinates into geodetic coordinates with the transformation matrix; since SLAM accumulates error during operation, the coordinates obtained from the SLAM conversion are corrected in the positioning process with the UAV geodetic coordinates obtained by image hybrid registration and PnP.
In summary, the invention provides an effective UAV autonomous positioning method that uses a monocular camera as the sole sensor to achieve autonomous positioning of the UAV. The image hybrid registration algorithm solves the matching problem caused by the large difference between the remote sensing image and the images captured by the UAV; after the homonymous feature points are obtained, the UAV pose in the geodetic coordinate system is computed with the PnP algorithm, correcting the drift caused by error accumulation in the actual operation of monocular visual SLAM and achieving high-precision autonomous positioning of the UAV in a GPS-denied environment.
The foregoing describes only specific embodiments of the invention, which is not limited thereto. Any modification or improvement made within the spirit and principles of the invention falls within its scope of protection.

Claims (2)

1. A UAV autonomous positioning method based on monocular vision, characterized by comprising the following steps:
(1) Controlling the UAV to climb to a plurality of positions at different heights;
(2) Acquiring images at each position and performing SLAM autonomous positioning from the image information to obtain the coordinates of the UAV in the SLAM coordinate system; specifically:
(201) Starting the camera through an image acquisition program, collecting an image sequence, and publishing the images in the form of ROS nodes;
(202) Subscribing to the image information through a positioning program;
(203) Performing feature detection on the image sequence to obtain the position information and descriptor information of the feature points;
(204) Tracking the feature points in the images with a feature tracking method to obtain the coordinates of the same feature point in different images;
(205) Calculating the pose transformation between different images by a multi-view geometry method;
(206) Optimizing the UAV pose with bundle adjustment to obtain the SLAM positioning result, and publishing the SLAM positioning result in the form of ROS nodes;
(3) Loading the remote sensing image of the UAV flight area, performing image hybrid registration between the remote sensing image and the images acquired by the UAV, and computing the homonymous feature points; specifically:
(301) Performing multi-feature extraction on the remote sensing image and on the UAV-acquired images, including SIFT feature extraction, SURF feature extraction, ORB feature extraction, edge feature extraction, and descriptor extraction;
(302) Matching point features and edge features through similarity detection;
(303) Performing a combined multi-feature-point analysis of the feature distribution in the UAV-acquired image; if the feature distribution is uneven, discarding that image;
(304) Obtaining the homonymous feature points of the remote sensing image and the UAV-acquired image;
(4) Solving the coordinates of the UAV in the geodetic coordinate system with the PnP algorithm, using the longitude-latitude coordinates and elevation information of the homonymous feature points in the remote sensing image and their two-dimensional positions in the UAV-acquired image; specifically:
(401) Reading the three-dimensional geodetic coordinates of the homonymous feature points according to their pixel positions in the remote sensing image, and obtaining their two-dimensional position coordinates in the UAV-acquired image;
(402) Solving the position of the UAV in the geodetic coordinate system with the PnP algorithm, and publishing the position in the form of a ROS node;
(5) Calculating the transformation matrix between the two coordinate systems and the SLAM scale information from the coordinates of the UAV in the SLAM coordinate system and in the geodetic coordinate system; specifically:
(501) Subscribing, through a fusion program, to the positioning result published in step (206) and the position published in step (402), and synchronizing them according to timestamp information;
(502) Calculating the SLAM scale information and the transformation matrix between the SLAM coordinate system and the geodetic coordinate system with a point cloud alignment algorithm;
(6) Converting the SLAM coordinates into geodetic coordinates with the transformation matrix, and correcting the coordinates obtained by the SLAM conversion with the coordinates of the UAV in the geodetic coordinate system obtained in step (4).

2. The UAV autonomous positioning method based on monocular vision according to claim 1, characterized in that in step (6) the coordinates obtained by the SLAM conversion are corrected with an extended Kalman filter.
CN202111242626.XA 2021-10-25 2021-10-25 A UAV autonomous positioning method based on monocular vision Active CN114066972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111242626.XA CN114066972B (en) 2021-10-25 2021-10-25 A UAV autonomous positioning method based on monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111242626.XA CN114066972B (en) 2021-10-25 2021-10-25 A UAV autonomous positioning method based on monocular vision

Publications (2)

Publication Number Publication Date
CN114066972A CN114066972A (en) 2022-02-18
CN114066972B true CN114066972B (en) 2024-11-22

Family

ID=80235430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111242626.XA Active CN114066972B (en) 2021-10-25 2021-10-25 A UAV autonomous positioning method based on monocular vision

Country Status (1)

Country Link
CN (1) CN114066972B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842224B * 2022-04-20 2025-07-01 Sun Yat-sen University An absolute visual matching positioning method for monocular UAV based on geographic base map
CN116228853B * 2022-12-13 2025-03-25 Northwestern Polytechnical University A distributed visual SLAM method based on UAV platform
CN116402826B * 2023-06-09 2023-09-26 Shenzhen Tianqu Xingkong Technology Co., Ltd. Visual coordinate system correction method, device, equipment and storage medium
CN119648780A * 2024-11-11 2025-03-18 Nanjing University of Aeronautics and Astronautics UAV visual continuous positioning method and system based on satellite remote sensing image assistance

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108303099A * 2018-06-14 2018-07-20 Jiangsu Institute of Intelligent Science and Technology Application, CAS Indoor autonomous navigation method for unmanned aerial vehicle based on 3D vision SLAM
CN108369741A * 2015-12-08 2018-08-03 Mitsubishi Electric Corporation Method and system for registration data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109029417B * 2018-05-21 2021-08-10 Nanjing University of Aeronautics and Astronautics Unmanned aerial vehicle SLAM method based on mixed visual odometer and multi-scale map
CN109211241B * 2018-09-08 2022-04-29 Tianjin University Autonomous positioning method of UAV based on visual SLAM
CN111145238B * 2019-12-12 2023-09-22 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Three-dimensional reconstruction method, device and terminal equipment of monocular endoscopic images
KR20210089300A * 2020-01-07 2021-07-16 Electronics and Telecommunications Research Institute (ETRI) Vision-based drone autonomous flight device and method
CN112577493B * 2021-03-01 2021-05-04 National University of Defense Technology A method and system for autonomous positioning of unmanned aerial vehicles based on remote sensing map assistance

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108369741A * 2015-12-08 2018-08-03 Mitsubishi Electric Corporation Method and system for registration data
CN108303099A * 2018-06-14 2018-07-20 Jiangsu Institute of Intelligent Science and Technology Application, CAS Indoor autonomous navigation method for unmanned aerial vehicle based on 3D vision SLAM

Also Published As

Publication number Publication date
CN114066972A (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN114066972B (en) A UAV autonomous positioning method based on monocular vision
US10515458B1 (en) Image-matching navigation method and apparatus for aerial vehicles
CN103822615B (en) A kind of multi-control point extracts and the unmanned aerial vehicle target real-time location method be polymerized automatically
CN109324337B (en) Unmanned aerial vehicle route generation and positioning method and device and unmanned aerial vehicle
CN112419374B (en) Unmanned aerial vehicle positioning method based on image registration
CN105865454B (en) A kind of Navigation of Pilotless Aircraft method generated based on real-time online map
CN101598556B (en) Unmanned aerial vehicle vision/inertia integrated navigation method in unknown environment
CN102967305B (en) Multi-rotor unmanned aerial vehicle pose acquisition method based on markers in shape of large and small square
CN116718165B (en) Combined imaging system based on unmanned aerial vehicle platform and image enhancement fusion method
CN104268935A (en) Feature-based airborne laser point cloud and image data fusion system and method
CN105627991A (en) Real-time panoramic stitching method and system for unmanned aerial vehicle images
CN103822635A (en) Visual information based real-time calculation method of spatial position of flying unmanned aircraft
CN109341686B (en) Aircraft landing pose estimation method based on visual-inertial tight coupling
CN115371673A (en) A binocular camera target location method based on Bundle Adjustment in an unknown environment
CN116989772B (en) An air-ground multi-modal multi-agent collaborative positioning and mapping method
CN106249267A (en) Target positioning and tracking method and device
CN115950435B (en) Real-time positioning method for unmanned aerial vehicle inspection image
CN110058604A (en) A kind of accurate landing system of unmanned plane based on computer vision
CN108803655A (en) A kind of UAV Flight Control platform and method for tracking target
CN107063193A (en) Based on GPS Dynamic post-treatment technology Aerial Photogrammetry
CN108225273B (en) Real-time runway detection method based on sensor priori knowledge
CN109764864B (en) A method and system for indoor UAV pose acquisition based on color recognition
CN115597592A (en) Comprehensive positioning method applied to unmanned aerial vehicle inspection
WO2024093635A1 (en) Camera pose estimation method and apparatus, and computer-readable storage medium
CN113421332A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant