Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art and provides a position estimation method for a satellite-free-navigation unmanned aerial vehicle that is capable of predicting a flight path.
One embodiment of the invention provides a satellite-free-navigation unmanned aerial vehicle position estimation method capable of predicting a flight path, comprising the following steps:
S1, acquiring a detection image of the front-lower field of view of the unmanned aerial vehicle;
S2, carrying out multi-source image fusion processing on the detection image to obtain a fused image;
S3, performing perspective projection on the fused image, performing feature matching with DEM data, and obtaining feature point information;
and S4, determining the current coordinates of the unmanned aerial vehicle based on the relative height of the unmanned aerial vehicle above the ground, the altitude of the unmanned aerial vehicle, the flight speed of the unmanned aerial vehicle, and the feature point information, wherein the current coordinates are used for predicting the flight path of the unmanned aerial vehicle.
In some alternative embodiments, the detection image is acquired by a camera of the unmanned aerial vehicle, the camera faces the front-lower field of view of the unmanned aerial vehicle, and the angle of the shooting direction of the camera relative to the horizontal plane is adjustable between 0° and 90°.
In some alternative embodiments, the angle of the shooting direction of the camera relative to the horizontal plane is determined based on the relative height and the flight speed of the unmanned aerial vehicle.
In some alternative embodiments, the relative height is measured by a laser rangefinder.
In some alternative embodiments, the altitude is measured by a barometric altimeter.
In some alternative embodiments, the multi-source image fusion processing is implemented based on the SwinFusion universal image fusion framework.
In some alternative embodiments, the detection image includes at least a visible light image and an infrared image.
In some alternative embodiments, the step S3 includes:
S31, performing perspective projection on the fused image to obtain three-dimensional point cloud data of the fused image;
S32, performing feature matching between the three-dimensional point cloud data of the fused image and the DEM data;
and S33, acquiring feature point information, wherein the feature point information at least comprises ground feature points.
In some alternative embodiments, while steps S1-S4 are being performed, the unmanned aerial vehicle performs its flight mission based on the primary coordinates of the unmanned aerial vehicle.
Compared with the prior art, the position estimation method for a satellite-free-navigation unmanned aerial vehicle with a predictable flight path can estimate the position coordinates of the unmanned aerial vehicle when satellite navigation is unavailable, thereby obtaining the current coordinates of the unmanned aerial vehicle. This facilitates flight path prediction and autonomous navigation, provides a fail-safe mechanism for the unmanned aerial vehicle when the satellite navigation signal is lost, and ensures that the flight mission of the unmanned aerial vehicle proceeds smoothly.
In order that the invention may be more clearly understood, specific embodiments thereof will be described below with reference to the accompanying drawings.
Detailed Description
The embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention.
Referring to Fig. 1, an embodiment of the present invention provides a position estimation method for a satellite-free-navigation unmanned aerial vehicle with a predictable flight path, comprising:
S1, acquiring a detection image of the front-lower field of view of the unmanned aerial vehicle;
S2, carrying out multi-source image fusion processing on the detection image to obtain a fused image;
S3, performing perspective projection on the fused image, performing feature matching with DEM data, and obtaining feature point information;
and S4, determining the current coordinates of the unmanned aerial vehicle based on the relative height of the unmanned aerial vehicle above the ground, the altitude of the unmanned aerial vehicle, the flight speed of the unmanned aerial vehicle, and the feature point information, wherein the current coordinates are used for predicting the flight path of the unmanned aerial vehicle. Because steps S1-S4 take a certain amount of time to carry out, an offset must be determined from the flight speed, and the coordinates determined from the relative height of the unmanned aerial vehicle above the ground, the altitude of the unmanned aerial vehicle, and the feature point information are corrected by this offset to obtain the current coordinates of the unmanned aerial vehicle.
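As an illustration of this offset correction, the following minimal sketch assumes the heading is available from an onboard inertial sensor and that the offset is simply the distance flown during processing; the function name and parameters are hypothetical, not part of the claimed method.

```python
import numpy as np

def correct_coordinates(matched_xy, heading_rad, flight_speed, processing_time):
    """Correct the DEM-matched position for the time spent in steps S1-S4.

    matched_xy:      (x, y) ground coordinates from feature matching
    heading_rad:     flight heading in radians (assumed known from an IMU)
    flight_speed:    ground speed in m/s
    processing_time: elapsed time of steps S1-S4 in seconds
    """
    offset = flight_speed * processing_time  # distance flown while processing
    dx = offset * np.cos(heading_rad)
    dy = offset * np.sin(heading_rad)
    return matched_xy[0] + dx, matched_xy[1] + dy

# Example: 20 m/s ground speed and 1.5 s of processing give a 30 m offset.
x, y = correct_coordinates((1000.0, 2000.0), heading_rad=0.0,
                           flight_speed=20.0, processing_time=1.5)
```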
When the unmanned aerial vehicle is in flight, the satellite signal can be monitored by a detection device, and steps S1-S4 are carried out when loss or jamming of the satellite signal is detected. In this way, the current position of the unmanned aerial vehicle can be obtained without satellite navigation, which facilitates flight path prediction and autonomous navigation, provides a fail-safe mechanism for the unmanned aerial vehicle when the satellite navigation signal is lost, and ensures that the flight mission of the unmanned aerial vehicle proceeds smoothly.
It should be noted that the front-lower field of view refers to the field of view in front of and below the unmanned aerial vehicle. This field of view covers the terrain toward which the unmanned aerial vehicle is flying, so the detection image of the front-lower field of view can be used to solve for the current coordinates of the unmanned aerial vehicle.
The principle of predicting the flight path of the unmanned aerial vehicle from its coordinates is a technique known to those skilled in the art and will not be described in detail herein.
In some alternative embodiments, the detection image is obtained through a camera of the unmanned aerial vehicle, the camera faces the front-lower field of view of the unmanned aerial vehicle, and the angle of the shooting direction of the camera relative to the horizontal plane is adjustable between 0° and 90°, so that this angle can be made to meet the requirements for positioning the coordinates of the unmanned aerial vehicle.
In some alternative embodiments, the angle of the shooting direction of the camera relative to the horizontal plane is determined based on the relative height and the flight speed of the unmanned aerial vehicle. The faster the flight speed, the smaller the angle, so that more image information ahead of the unmanned aerial vehicle can be acquired in advance; the slower the flight speed, the larger the angle, so that more accurate image information near the unmanned aerial vehicle can be acquired. Likewise, the higher the relative height, the larger the angle, so that a wider range of image information is acquired; the lower the relative height, the smaller the angle, so that the camera does not point so far downward that it fails to acquire enough image information.
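The patent states only these qualitative rules, so the following is a hypothetical linear angle schedule rather than a prescribed formula; the reference values h_ref and v_ref and the gains are illustrative assumptions.

```python
def camera_pitch_deg(relative_height, flight_speed,
                     h_ref=100.0, v_ref=20.0, min_deg=0.0, max_deg=90.0):
    """Illustrative schedule: larger pitch (more downward) at higher relative
    height, smaller pitch (more forward) at higher speed. h_ref and v_ref are
    hypothetical reference operating points, not values from the patent.
    """
    base = 45.0
    angle = (base + 20.0 * (relative_height / h_ref - 1.0)
                  - 20.0 * (flight_speed / v_ref - 1.0))
    return max(min_deg, min(max_deg, angle))

# Example: flying fast and low yields a shallow angle looking well ahead.
print(camera_pitch_deg(relative_height=50.0, flight_speed=40.0))  # 15.0
```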
In some alternative embodiments, the relative height is measured by a laser rangefinder. Of course, the relative height may also be measured by other suitable means, without being limited to this example.
In some alternative embodiments, the altitude is measured by a barometric altimeter. Of course, the altitude may also be measured by other suitable means, without being limited to this example.
Multi-source image fusion can improve the precision of feature extraction and thus the accuracy of flight path prediction and positioning. In some alternative embodiments, the detection image includes at least a visible light image and an infrared image; combining multiple images can improve the feature accuracy of the fused image at night or in low-light environments.
In some alternative embodiments, the multi-source image fusion processing is implemented based on the SwinFusion universal image fusion framework. The multi-source image fusion processing of the detection image can be roughly divided into three steps: feature extraction, feature fusion, and image reconstruction. The principles of feature extraction, feature fusion, and image reconstruction are well known to those skilled in the art and will not be described in detail herein. In the feature extraction step, when the detection image includes a visible light image and an infrared image, the local features of the visible light image and of the infrared image are extracted by a CNN, and the global features of the visible light image and of the infrared image are extracted by a Swin Transformer.
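To make the extract-fuse-reconstruct pipeline concrete, below is a minimal PyTorch sketch of the two-branch structure described above; it is not the actual SwinFusion implementation: the module FusionSketch, its layer sizes, and the plain multi-head attention standing in for shifted-window (Swin) attention are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Schematic stand-in for feature extraction, feature fusion, and image
    reconstruction. SwinFusion itself uses Swin Transformer blocks with
    shifted-window attention; plain attention is used here for brevity."""
    def __init__(self, dim=32):
        super().__init__()
        # Local feature extraction (CNN branch), shared by both modalities
        self.local = nn.Sequential(
            nn.Conv2d(1, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        # Global feature extraction (attention branch, simplified)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Reconstruction of the fused image from concatenated features
        self.recon = nn.Conv2d(2 * dim, 1, 3, padding=1)

    def forward(self, visible, infrared):
        f_vis, f_ir = self.local(visible), self.local(infrared)
        b, c, h, w = f_vis.shape
        tokens = torch.cat([f_vis, f_ir], dim=0).flatten(2).transpose(1, 2)
        g, _ = self.attn(tokens, tokens, tokens)       # global context
        g = g.transpose(1, 2).reshape(2 * b, c, h, w)
        fused = torch.cat([f_vis + g[:b], f_ir + g[b:]], dim=1)
        return self.recon(fused)

fused = FusionSketch()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```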
Referring to Fig. 2, in some alternative embodiments, step S3 includes:
S31, performing perspective projection on the fused image to obtain three-dimensional point cloud data of the fused image (a back-projection sketch follows this list);
S32, performing feature matching between the three-dimensional point cloud data of the fused image and the DEM data, wherein the feature matching algorithm adopts nearest-neighbor search, RANSAC, or similar algorithms to improve the accuracy and robustness of the feature matching (a matching sketch follows the DEM description below);
and S33, acquiring feature point information, wherein the feature point information at least comprises ground feature points, and the ground feature points facilitate determining the horizontal-plane coordinates of the unmanned aerial vehicle.
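The following is a minimal sketch of the back-projection in step S31, assuming a pinhole camera model and a per-pixel depth map (which in practice would be derived from the measured relative height and the camera pitch); the function and parameter names are hypothetical.

```python
import numpy as np

def image_to_point_cloud(depth, fx, fy, cx, cy, pose):
    """Back-project every pixel of the fused image to a 3D point (step S31).

    depth:          HxW per-pixel range along the optical axis (assumed given)
    fx, fy, cx, cy: pinhole camera intrinsics
    pose:           4x4 camera-to-world transform (e.g. into the DEM frame)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) / fy * depth
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    return (pts @ pose.T)[:, :3]       # homogeneous transform into world frame
```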
The DEM data is a pre-stored three-dimensional model containing ground elevation information. The current map position of the unmanned aerial vehicle is determined through feature matching, and the current coordinates of the unmanned aerial vehicle can then be accurately calculated by combining the altitude and the relative height.
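As a toy illustration of step S32 under strong simplifying assumptions, the sketch below reduces both the projected point cloud and the DEM to 2D point sets and estimates only a translation, using nearest-neighbor search inside a RANSAC loop; match_to_dem and its parameters are hypothetical, and a real system would match feature descriptors and estimate a full pose.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_to_dem(cloud_xy, dem_xy, iters=200, tol=5.0, seed=0):
    """Estimate a 2D translation aligning projected points to DEM points."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(dem_xy)                 # nearest-neighbor index over the DEM
    best_t, best_inliers = np.zeros(2), -1
    for _ in range(iters):
        # Hypothesize a translation from one random point correspondence
        t = dem_xy[rng.integers(len(dem_xy))] - cloud_xy[rng.integers(len(cloud_xy))]
        d, _ = tree.query(cloud_xy + t)    # residuals to nearest DEM points
        inliers = int((d < tol).sum())
        if inliers > best_inliers:         # keep the best-supported hypothesis
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```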
In some alternative embodiments, while steps S1-S4 are being performed, the unmanned aerial vehicle performs its flight mission based on the primary coordinates of the unmanned aerial vehicle. The primary coordinates are the latest coordinates of the unmanned aerial vehicle obtained from satellite signals before loss or jamming of the satellite signal was detected. Because carrying out steps S1-S4 takes a certain amount of time, in particular the feature matching, the unmanned aerial vehicle cannot update its coordinates immediately; it therefore continues to fly according to the original flight mission, so that mission execution efficiency is not affected. After the current coordinates of the unmanned aerial vehicle are obtained through steps S1-S4, the unmanned aerial vehicle continues the flight mission based on the current coordinates. Until the detection device detects that the satellite signal has recovered, steps S1-S4 are performed cyclically so that the current coordinates of the unmanned aerial vehicle are continuously updated, satisfying the flight path prediction and autonomous navigation requirements and enabling the unmanned aerial vehicle to execute its flight mission accurately.
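The control flow of this fallback behaviour can be summarized in the sketch below; the drone object and all of its methods are hypothetical placeholders for the detection device, camera, fusion module, and solver described above.

```python
import time

def satellite_free_navigation_loop(drone):
    """Fly on the last satellite fix while S1-S4 run, then on vision fixes."""
    primary = drone.last_satellite_fix()   # latest coordinates before signal loss
    drone.continue_mission(primary)        # do not stall while matching runs
    while not drone.satellite_signal_ok(): # cycle S1-S4 until signal recovery
        t0 = time.monotonic()
        image = drone.capture_front_lower()           # S1: detection image
        fused = drone.fuse_multisource(image)         # S2: multi-source fusion
        features = drone.match_against_dem(fused)     # S3: projection + matching
        coords = drone.solve_position(                # S4: solve and correct
            features, elapsed=time.monotonic() - t0)  #     by the speed offset
        drone.continue_mission(coords)                # update and fly on
```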
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.