CN114066972B - A UAV autonomous positioning method based on monocular vision - Google Patents
- Publication number
- CN114066972B (application CN202111242626.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- coordinates
- slam
- feature
- unmanned aerial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/70—Image analysis: determining position or orientation of objects or cameras
- G06F18/253—Pattern recognition: fusion techniques of extracted features
- G06T7/246—Image analysis: analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/33—Image analysis: determination of transform parameters for the alignment of images (image registration) using feature-based methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an unmanned aerial vehicle autonomous positioning method based on monocular vision, belonging to the field of unmanned aerial vehicle autonomous flight. The method uses a monocular camera as the sole sensor. An image hybrid registration algorithm solves the matching problem caused by the large difference between remote-sensing images and images captured by the unmanned aerial vehicle; after the homonymous (same-name) feature points are obtained, the pose of the unmanned aerial vehicle in the geodetic coordinate system is calculated with a PnP algorithm. This corrects the drift that monocular visual SLAM accumulates through error buildup in actual operation, achieving high-precision autonomous positioning of the unmanned aerial vehicle in a GPS-denied environment.
Description
Technical Field
The invention belongs to the field of unmanned aerial vehicle autonomous flight, and particularly relates to an unmanned aerial vehicle autonomous positioning method based on monocular vision.
Background
With the rapid development of information science, unmanned aerial vehicles (UAVs) have become widely used, and their application fields and research scope keep expanding, covering post-disaster search and rescue, aerial photography, crop monitoring, and many other areas. A reliable positioning system is an important guarantee that the unmanned aerial vehicle can successfully complete its tasks.
Currently, unmanned aerial vehicle positioning relies mainly on the Global Positioning System (GPS). GPS positioning has many advantages: the method is mature, easy to integrate, and highly accurate when the outdoor signal is good. But it has one major drawback: it depends on an external signal. When the GPS signal is blocked, jammed, or lost, positioning fails and the unmanned aerial vehicle may lose control or even crash. Realizing autonomous positioning technology is therefore essential for real-time, accurate, and independent positioning of the unmanned aerial vehicle.
The visual-inertial odometer is a widely used autonomous positioning method for unmanned aerial vehicles; it produces a sparse three-dimensional reconstruction of the scene while estimating the pose parameters of the vehicle. However, because the method computes only the relative pose between adjacent moments, accumulated error builds up while the system runs, which severely restricts the accuracy of visual-inertial odometry.
Disclosure of Invention
To solve the above technical problems, the invention provides an unmanned aerial vehicle autonomous positioning method based on monocular vision. It adopts a monocular camera as the sole sensor, achieves high-precision autonomous positioning through image hybrid registration and PnP, overcomes the error-accumulation problem of existing autonomous positioning systems, and has good practical value.
To achieve the above purpose, the invention adopts the following technical scheme:
An unmanned aerial vehicle autonomous positioning method based on monocular vision comprises the following steps:
(1) Controlling the unmanned aerial vehicle to climb to several positions at different heights;
(2) Acquiring images at each position and performing SLAM autonomous positioning on the image information to obtain the coordinates of the unmanned aerial vehicle in the SLAM coordinate system;
(3) Loading a remote-sensing image of the flight area of the unmanned aerial vehicle, performing image hybrid registration between the remote-sensing image and the images acquired by the unmanned aerial vehicle, and computing the homonymous feature points;
(4) Solving the coordinates of the unmanned aerial vehicle in the geodetic coordinate system with a PnP algorithm, from the longitude-latitude coordinates and elevation of the homonymous feature points in the remote-sensing image and their two-dimensional positions in the images acquired by the unmanned aerial vehicle;
(5) Solving the transformation matrix between the two coordinate systems and the SLAM scale from the coordinates of the unmanned aerial vehicle in the SLAM coordinate system and in the geodetic coordinate system;
(6) Converting the SLAM coordinates into geodetic coordinates with the transformation matrix, and correcting the converted coordinates with the geodetic coordinates of the unmanned aerial vehicle obtained in step (4).
Further, step (2) specifically comprises:
(201) Starting the camera through an image acquisition program, acquiring an image sequence, and publishing the images in the form of ROS nodes;
(202) Subscribing to the image information in the positioning program;
(203) Performing feature detection on the image sequence to obtain the positions and descriptors of the feature points;
(204) Tracking the feature points across images with a feature tracking method to obtain the coordinates of the same feature point in different images;
(205) Computing the pose transformation between images by multi-view geometry (a code sketch of steps (203)-(205) follows this list);
(206) Optimizing the pose of the unmanned aerial vehicle by bundle adjustment to obtain the SLAM positioning result, and publishing it in the form of ROS nodes.
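For concreteness, the following is a minimal Python/OpenCV sketch of steps (203)-(205). It is an illustration only, not the patented implementation: the intrinsic matrix K is a made-up placeholder (a real system uses calibrated values), and a production SLAM front end would add keyframe selection and further outlier handling.

```python
import cv2
import numpy as np

# Hypothetical intrinsics; a real system uses calibrated values.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(img_prev, img_curr, K):
    """Steps (203)-(205): detect features, track them into the next
    frame, and recover the relative camera motion."""
    # (203) feature detection on the previous grayscale frame
    pts_prev = cv2.goodFeaturesToTrack(img_prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    # (204) feature tracking: KLT optical flow yields the coordinates
    # of the same feature points in the current frame
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_curr,
                                                   pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]
    # (205) multi-view geometry: essential matrix with RANSAC, then
    # decomposition into rotation R and unit-scale translation t
    E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
    return R, t
```

The translation t is recovered only up to scale, which is exactly why step (5) later estimates the SLAM scale against the geodetic coordinates.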
Further, step (3) specifically comprises:
(301) Performing multi-feature extraction on the remote-sensing image and the image acquired by the unmanned aerial vehicle respectively, including SIFT feature extraction, SURF feature extraction, ORB feature extraction, edge feature extraction, and descriptor extraction;
(302) Matching point features and edge features through similarity detection;
(303) Analyzing the distribution of the combined feature points in the image acquired by the unmanned aerial vehicle, and discarding the frame if the features are unevenly distributed (see the registration sketch following this list);
(304) Obtaining the homonymous feature points of the remote-sensing image and the image acquired by the unmanned aerial vehicle.
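A sketch of steps (301)-(303) in Python with OpenCV follows, under stated assumptions: only the SIFT and ORB point features are matched here (edge-feature matching, which the method also uses, is noted in a comment), the Lowe ratio 0.75 and the 4x4 coverage grid are illustrative choices, and the threshold of 8 occupied cells is a hypothetical stand-in for the patent's "even distribution" test.

```python
import cv2

def hybrid_register(remote_img, uav_img):
    """Steps (301)-(304): extract several feature types from both images,
    match them by similarity, check the spatial distribution, and return
    the pooled homonymous point pairs (remote_pt, uav_pt)."""
    matches_all = []
    # (301) point features: SIFT (L2 descriptors) and ORB (binary)
    for make, norm in ((cv2.SIFT_create, cv2.NORM_L2),
                       (cv2.ORB_create, cv2.NORM_HAMMING)):
        det = make()
        kp1, des1 = det.detectAndCompute(remote_img, None)
        kp2, des2 = det.detectAndCompute(uav_img, None)
        if des1 is None or des2 is None:
            continue
        # (302) similarity detection: kNN matching + Lowe ratio test
        for pair in cv2.BFMatcher(norm).knnMatch(des1, des2, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                m = pair[0]
                matches_all.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    # (301) edge features (e.g. Canny) would additionally be matched by a
    # curve-similarity measure; omitted here for brevity.
    # (303) distribution check over a 4x4 grid of the UAV image
    h, w = uav_img.shape[:2]
    cells = {(int(x / w * 4), int(y / h * 4)) for _, (x, y) in matches_all}
    if len(cells) < 8:        # hypothetical threshold: half the grid covered
        return None           # uneven distribution -> discard this frame
    return matches_all        # (304) homonymous feature points
```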
Further, step (4) specifically comprises:
(401) Reading the geodetic three-dimensional coordinates of the homonymous feature points according to their pixel positions in the remote-sensing image, and taking their two-dimensional position coordinates in the image acquired by the unmanned aerial vehicle;
(402) Solving the position of the unmanned aerial vehicle in the geodetic coordinate system with a PnP algorithm (a worked sketch follows this list), and publishing it in the form of ROS nodes.
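Step (4) maps directly onto a standard PnP solve. The sketch below is self-contained: it fabricates a synthetic ground-truth pose and synthetic geodetic points (all numbers are invented for illustration), projects them to obtain the (u_i, v_i) observations, and recovers the camera position with EPnP inside RANSAC, in the spirit of reference [5].

```python
import cv2
import numpy as np

# Synthetic ground truth: every number below is invented for illustration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
object_pts = np.array([[10.0, 4.0, 0.5], [14.0, 9.0, 0.0],
                       [6.0, 11.0, 1.0], [12.0, 2.0, 0.2],
                       [8.0, 7.0, 0.8], [15.0, 5.0, 0.3]])  # (X_i, Y_i, Z_i)
rvec_gt = np.array([0.10, -0.05, 0.02])   # ground-truth rotation (Rodrigues)
tvec_gt = np.array([-9.0, -6.0, 30.0])    # ground-truth translation

# Project the geodetic points to synthesize the (u_i, v_i) observations.
image_pts, _ = cv2.projectPoints(object_pts, rvec_gt, tvec_gt, K, None)

# Solve PnP (EPnP inside RANSAC, robust to residual mismatches).
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_pts, image_pts, K, None, flags=cv2.SOLVEPNP_EPNP)

# The camera (UAV) position in the geodetic frame is -R^T t.
R, _ = cv2.Rodrigues(rvec)
uav_position = (-R.T @ tvec).ravel()
print(uav_position)   # recovers the ground-truth camera centre
```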
Further, step (5) specifically comprises:
(501) Subscribing, in a fusion program, to the positioning result published in step (206) and the position published in step (402), and synchronizing them by timestamp;
(502) Computing the SLAM scale and the transformation matrix between the SLAM coordinate system and the geodetic coordinate system with a point cloud alignment algorithm (a sketch follows this list).
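Step (502) can be realized with the SVD-based least-squares fitting of reference [6], extended with a scale factor (Umeyama's variant) because monocular SLAM coordinates are scale-ambiguous. A minimal sketch, assuming the two trajectories are already timestamp-synchronized arrays of shape (N, 3):

```python
import numpy as np

def align_similarity(slam_pts, geo_pts):
    """Least-squares similarity alignment between the SLAM trajectory and
    the geodetic positions: returns (s, R, t) such that
    geo ~= s * R @ slam + t."""
    mu_s, mu_g = slam_pts.mean(axis=0), geo_pts.mean(axis=0)
    S, G = slam_pts - mu_s, geo_pts - mu_g
    # Cross-covariance and its SVD, as in reference [6]
    U, D, Vt = np.linalg.svd(G.T @ S / len(slam_pts))
    E = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = U @ E @ Vt
    var_s = (S ** 2).sum() / len(slam_pts)
    s = (D * np.diag(E)).sum() / var_s     # recovered SLAM scale
    t = mu_g - s * R @ mu_s
    return s, R, t
```

Applying x_geo = s * R @ x_slam + t to each SLAM coordinate then performs the conversion of step (6).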
In step (6), the coordinates obtained by the SLAM transformation are corrected with an extended Kalman filter; a minimal position-only sketch follows.
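The patent names an extended Kalman filter for this correction but does not spell out the state definition, so the following is only a minimal, position-only sketch under assumed noise parameters q and r; a full EKF would also carry attitude and velocity states. The transformed SLAM displacement acts as the motion prediction and the PnP geodetic fix as the measurement.

```python
import numpy as np

class PositionFilter:
    """Minimal Kalman-style corrector for step (6), position states only."""
    def __init__(self, x0, q=0.5, r=2.0):
        self.x = np.asarray(x0, dtype=float)  # filtered geodetic position
        self.P = np.eye(3)                    # state covariance
        self.Q = q * np.eye(3)                # process noise: SLAM drift
        self.R = r * np.eye(3)                # measurement noise: PnP fix

    def predict(self, slam_delta):
        # Propagate with the SLAM displacement, already scaled and rotated
        # into the geodetic frame by the step-(5) similarity transform.
        self.x = self.x + slam_delta
        self.P = self.P + self.Q

    def correct(self, pnp_fix):
        # Measurement update with the absolute PnP position (H = I).
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ (pnp_fix - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x
```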
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an effective unmanned aerial vehicle autonomous positioning method. An image hybrid registration algorithm solves the matching problem caused by the large difference between remote-sensing images and images captured by the unmanned aerial vehicle; after the homonymous feature points are obtained, the pose of the unmanned aerial vehicle in the geodetic coordinate system is computed with a PnP algorithm, correcting the drift that monocular visual SLAM accumulates in actual operation and achieving high-precision autonomous positioning in a GPS-denied environment.
2. Registration of heterogeneous images has long been a research hotspot in this field. The invention achieves accurate registration of strongly differing heterogeneous images through effective fusion of multiple features, an important advance over the prior art.
3. Drift caused by SLAM error accumulation is a difficult problem that hinders the application of unmanned aerial vehicles to autonomous navigation and positioning. The invention solves the pose of the unmanned aerial vehicle through remote-sensing image registration and the PnP algorithm, then fuses and corrects it against the pose obtained by visual SLAM, achieving autonomous positioning in a fully GPS-denied environment.
Drawings
Fig. 1 is a flowchart of an autonomous positioning method of an unmanned aerial vehicle in an embodiment of the present invention.
Detailed description of the preferred embodiments
The present invention is further described in detail below with reference to the drawings and examples, to facilitate understanding and practice by those of ordinary skill in the art. It should be understood that the examples described herein are intended to illustrate and explain the present invention, not to limit its scope.
The unmanned aerial vehicle autonomous positioning method based on monocular vision is implemented as several programs running under the ROS robot operating system on Ubuntu, and comprises the following steps:
(1) Manually operating the unmanned aerial vehicle to climb to different heights and positions.
(2) Starting the image acquisition program and publishing the images in the form of ROS nodes; the autonomous positioning program subscribes to the image information in real time, performs SLAM autonomous positioning, and finally publishes the positioning result in the form of ROS nodes.
(3) Loading the remote-sensing image of the flight area, subscribing to the images acquired in step (2), performing image hybrid registration between the remote-sensing image and the images captured by the unmanned aerial vehicle, and computing the homonymous feature points.
(4) Solving the position of the unmanned aerial vehicle with a PnP (Perspective-n-Point) algorithm from the longitude-latitude coordinates and elevation of the homonymous feature points in the remote-sensing image and their two-dimensional positions in the images captured by the unmanned aerial vehicle.
(5) Solving the transformation matrix between the two coordinate systems and the SLAM scale from the coordinates of the unmanned aerial vehicle in the SLAM coordinate system and in the geodetic coordinate system.
(6) Converting the SLAM coordinates into geodetic coordinates with the transformation matrix; because SLAM accumulates error during operation, the converted coordinates are corrected with the geodetic coordinates of the unmanned aerial vehicle obtained by image hybrid registration and PnP.
Step (2) specifically comprises:
(201) Starting the camera and publishing the images in the form of ROS nodes;
(202) Subscribing to the image nodes in the positioning program;
(203) Performing feature detection on the onboard camera image sequence to obtain the positions and descriptors of the feature points;
(204) Tracking the feature points across images with a feature tracking method to obtain the coordinates of the same feature point in different images;
(205) Computing the pose transformation between camera images by multi-view geometry;
(206) Optimizing the pose of the unmanned aerial vehicle and the three-dimensional point cloud coordinates by bundle adjustment to obtain the position in the SLAM coordinate system, and publishing the positioning result in the form of ROS nodes.
Step (3) specifically comprises:
(301) Performing multi-feature extraction on the remote-sensing image and the image captured by the unmanned aerial vehicle respectively, including SIFT feature extraction, SURF feature extraction, ORB feature extraction, edge feature extraction, and descriptor extraction;
(302) Performing similarity detection to match point features and edge features;
(303) Ensuring, through combined analysis of the multiple feature types, that a sufficient number of matched features exists in every region of the image;
(304) Obtaining the homonymous feature points.
Step (4) specifically comprises:
(401) Reading the geodetic three-dimensional coordinates according to the pixel positions of the homonymous feature points in the remote-sensing image, denoted (X_i, Y_i, Z_i), where i indexes the feature points within one image; the image coordinates of the same feature points in the image captured by the unmanned aerial vehicle are (u_i, v_i);
(402) Solving the position of the unmanned aerial vehicle in the geodetic coordinate system with a PnP algorithm, and publishing it in the form of ROS nodes.
Step (5) specifically comprises:
(501) Subscribing, in the fusion algorithm, to the position node published by the SLAM program and the position node published in step (402), and synchronizing them by timestamp;
(502) Computing the SLAM scale and the transformation matrix between the two coordinate systems with a point cloud alignment algorithm.
The following is a more specific example. As shown in Fig. 1, the unmanned aerial vehicle autonomous positioning method based on monocular vision comprises the following steps:
Step 1, manually operating the unmanned aerial vehicle to climb to different heights and positions;
Step 2, the onboard computing unit (NVIDIA NX) starts the image acquisition program and publishes the images in the form of ROS nodes; the autonomous positioning program subscribes to the image information in real time, performs SLAM autonomous positioning, and finally publishes the positioning result in the form of ROS nodes;
Step 2.1, starting the camera acquisition program and publishing the images in the form of ROS nodes;
Step 2.2, starting the positioning program and subscribing to the image nodes;
Step 2.3, performing feature detection on the camera image sequence to obtain the positions and descriptors of the feature points;
Step 2.4, tracking the feature points across images with a feature tracking method to obtain the coordinates of the same feature point in different images;
Step 2.5, computing the pose transformation between camera images by multi-view geometry;
Step 2.6, optimizing the pose of the unmanned aerial vehicle and the three-dimensional point cloud coordinates by bundle adjustment (Bundle Adjustment) to obtain the position in the SLAM coordinate system, and publishing the positioning result in the form of ROS nodes (a residual-function sketch follows this step).
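Bundle adjustment in Step 2.6 minimizes the total reprojection error over all camera poses and 3-D points. The residual function below is a compact Python sketch that SciPy's least_squares can consume; the packing of params is an illustrative convention, not the patent's, and real systems use analytic Jacobians and sparse solvers such as g2o or Ceres.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def ba_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs, K):
    """Reprojection residuals: params packs n_cams poses (rvec, tvec ->
    6 values each) followed by n_pts 3-D points (3 values each); obs[k]
    is the observed pixel of point pt_idx[k] in camera cam_idx[k]."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)
    points = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for c, p, uv in zip(cam_idx, pt_idx, obs):
        proj, _ = cv2.projectPoints(points[p:p + 1],
                                    poses[c, :3], poses[c, 3:], K, None)
        res.append(proj.ravel() - uv)
    return np.concatenate(res)

# Typical invocation (x0 stacks the initial poses and points):
# sol = least_squares(ba_residuals, x0, method="trf",
#                     args=(n_cams, n_pts, cam_idx, pt_idx, obs, K))
```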
Step 3, loading the remote-sensing image of the flight area, subscribing to the images acquired in Step 2, performing image hybrid registration between the remote-sensing image and the images captured by the unmanned aerial vehicle, and computing the homonymous feature points:
Step 3.1, performing multi-feature extraction on the remote-sensing image and the image captured by the unmanned aerial vehicle respectively, including SIFT feature extraction, SURF feature extraction, ORB feature extraction, edge feature extraction, and descriptor extraction; the specific algorithms are described in documents [1] to [4]:
[1] Lowe D. G. Distinctive Image Features from Scale-Invariant Keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[2] Bay H., Ess A., Tuytelaars T., et al. Speeded-Up Robust Features (SURF) [J]. Computer Vision and Image Understanding (CVIU), 2008, 110(3): 346-359.
[3] Rublee E., Rabaud V., Konolige K., et al. ORB: An Efficient Alternative to SIFT or SURF [C]. 2011 International Conference on Computer Vision. IEEE, 2011: 2564-2571.
[4] Canny J. F. A Computational Approach to Edge Detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.
Step 3.2, performing similarity detection to match the point features and edge features;
Step 3.3, ensuring, through combined analysis of the multiple feature types, that a sufficient number of matched features exists in every region of the image;
Step 3.4, obtaining the homonymous feature points.
Step 4, solving the position of the unmanned aerial vehicle with a PnP algorithm from the longitude-latitude coordinates and elevation of the homonymous feature points in the remote-sensing image and their two-dimensional positions in the image captured by the unmanned aerial vehicle; the specific algorithm is described in document [5]:
[5] Lepetit V., Moreno-Noguer F., Fua P. EPnP: An Accurate O(n) Solution to the PnP Problem [J]. International Journal of Computer Vision, 2009, 81(2): 155-166.
Step 4.1, reading the geodetic three-dimensional coordinates according to the pixel positions of the homonymous feature points in the remote-sensing image, denoted (X_i, Y_i, Z_i), where i indexes the feature points within one image; the image coordinates of the same feature points in the image captured by the unmanned aerial vehicle are (u_i, v_i);
Step 4.2, solving the position of the unmanned aerial vehicle in the geodetic coordinate system with the PnP algorithm, and publishing it in the form of ROS nodes.
Step 5, solving the transformation matrix between the two coordinate systems and the SLAM scale from the coordinates of the unmanned aerial vehicle in the SLAM coordinate system and in the geodetic coordinate system:
Step 5.1, subscribing, in the fusion algorithm, to the position node published by the SLAM program and the position node published in Step 4.2, and synchronizing them by timestamp;
Step 5.2, computing the SLAM scale and the transformation matrix between the two coordinate systems with a point cloud alignment algorithm; the specific algorithm is described in document [6]:
[6] Arun K. S., Huang T. S., Blostein S. D. Least-Squares Fitting of Two 3-D Point Sets [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1987, 9(5): 698-700.
Step 6, converting the SLAM coordinates into geodetic coordinates with the transformation matrix; because SLAM accumulates error during operation, the converted coordinates are corrected with the geodetic coordinates of the unmanned aerial vehicle obtained by image hybrid registration and PnP.
In summary, the invention provides an effective unmanned aerial vehicle autonomous positioning method that uses a monocular camera as the sole sensor. The image hybrid registration algorithm solves the matching problem caused by the large difference between remote-sensing images and images captured by the unmanned aerial vehicle; after the homonymous feature points are obtained, the pose of the unmanned aerial vehicle in the geodetic coordinate system is computed with the PnP algorithm, correcting the drift that monocular visual SLAM accumulates through error buildup in actual operation and achieving high-precision autonomous positioning in a GPS-denied environment.
The foregoing description covers only specific embodiments of the invention and is not intended to limit it. Any modification or improvement made within the spirit and principles of the invention shall fall within its protection scope.
Claims (2)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111242626.XA | 2021-10-25 | 2021-10-25 | A UAV autonomous positioning method based on monocular vision |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114066972A CN114066972A (en) | 2022-02-18 |
| CN114066972B (en) | 2024-11-22 |
Family
ID=80235430
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111242626.XA (CN114066972B, Active) | A UAV autonomous positioning method based on monocular vision | 2021-10-25 | 2021-10-25 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114066972B (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114842224B (en) * | 2022-04-20 | 2025-07-01 | Sun Yat-sen University | An absolute visual matching positioning method for monocular UAVs based on a geographic base map |
| CN116228853B (en) * | 2022-12-13 | 2025-03-25 | Northwestern Polytechnical University | A distributed visual SLAM method based on a UAV platform |
| CN116402826B (en) * | 2023-06-09 | 2023-09-26 | Shenzhen Tianqu Xingkong Technology Co., Ltd. | Visual coordinate system correction method, device, equipment and storage medium |
| CN119648780A (en) * | 2024-11-11 | 2025-03-18 | Nanjing University of Aeronautics and Astronautics | UAV visual continuous positioning method and system based on satellite remote sensing image assistance |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108303099A (en) * | 2018-06-14 | 2018-07-20 | Jiangsu Chinese Academy of Sciences Intelligent Science and Technology Application Research Institute | Indoor autonomous navigation method for unmanned aerial vehicles based on 3D visual SLAM |
| CN108369741A (en) * | 2015-12-08 | 2018-08-03 | Mitsubishi Electric Corporation | Method and system for registering data |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109029417B (en) * | 2018-05-21 | 2021-08-10 | Nanjing University of Aeronautics and Astronautics | UAV SLAM method based on hybrid visual odometry and a multi-scale map |
| CN109211241B (en) * | 2018-09-08 | 2022-04-29 | Tianjin University | Autonomous positioning method of UAV based on visual SLAM |
| CN111145238B (en) * | 2019-12-12 | 2023-09-22 | Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences | Three-dimensional reconstruction method, device and terminal equipment for monocular endoscopic images |
| KR20210089300A (en) * | 2020-01-07 | 2021-07-16 | Electronics and Telecommunications Research Institute | Vision-based drone autonomous flight device and method |
| CN112577493B (en) * | 2021-03-01 | 2021-05-04 | National University of Defense Technology | A method and system for autonomous positioning of unmanned aerial vehicles based on remote sensing map assistance |
- 2021-10-25: CN application CN202111242626.XA, patent CN114066972B/en, status Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN114066972A (en) | 2022-02-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN114066972B (en) | A UAV autonomous positioning method based on monocular vision | |
| US10515458B1 (en) | Image-matching navigation method and apparatus for aerial vehicles | |
| CN103822615B (en) | Real-time unmanned aerial vehicle target positioning method based on automatic extraction and aggregation of multiple control points | |
| CN109324337B (en) | Unmanned aerial vehicle route generation and positioning method and device and unmanned aerial vehicle | |
| CN112419374B (en) | Unmanned aerial vehicle positioning method based on image registration | |
| CN105865454B (en) | Unmanned aerial vehicle navigation method based on real-time online map generation | |
| CN101598556B (en) | Unmanned aerial vehicle vision/inertia integrated navigation method in unknown environment | |
| CN102967305B (en) | Multi-rotor unmanned aerial vehicle pose acquisition method based on markers in shape of large and small square | |
| CN116718165B (en) | Combined imaging system based on unmanned aerial vehicle platform and image enhancement fusion method | |
| CN104268935A (en) | Feature-based airborne laser point cloud and image data fusion system and method | |
| CN105627991A (en) | Real-time panoramic stitching method and system for unmanned aerial vehicle images | |
| CN103822635A (en) | Visual information based real-time calculation method of spatial position of flying unmanned aircraft | |
| CN109341686B (en) | Aircraft landing pose estimation method based on visual-inertial tight coupling | |
| CN115371673A (en) | A binocular camera target location method based on Bundle Adjustment in an unknown environment | |
| CN116989772B (en) | An air-ground multi-modal multi-agent collaborative positioning and mapping method | |
| CN106249267A (en) | Target positioning and tracking method and device | |
| CN115950435B (en) | Real-time positioning method for unmanned aerial vehicle inspection image | |
| CN110058604A (en) | Computer-vision-based precision landing system for unmanned aerial vehicles | |
| CN108803655A (en) | UAV flight control platform and target tracking method | |
| CN107063193A (en) | Aerial photogrammetry method based on GPS dynamic post-processing technology | |
| CN108225273B (en) | Real-time runway detection method based on sensor priori knowledge | |
| CN109764864B (en) | A method and system for indoor UAV pose acquisition based on color recognition | |
| CN115597592A (en) | Comprehensive positioning method applied to unmanned aerial vehicle inspection | |
| WO2024093635A1 (en) | Camera pose estimation method and apparatus, and computer-readable storage medium | |
| CN113421332A (en) | Three-dimensional reconstruction method and device, electronic equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |