CN114111774B - Vehicle positioning method, system, equipment and computer readable storage medium - Google Patents
Vehicle positioning method, system, equipment and computer readable storage medium
- Publication number
- CN114111774B CN114111774B CN202111481005.7A CN202111481005A CN114111774B CN 114111774 B CN114111774 B CN 114111774B CN 202111481005 A CN202111481005 A CN 202111481005A CN 114111774 B CN114111774 B CN 114111774B
- Authority
- CN
- China
- Prior art keywords
- vehicle
- parking area
- data
- map
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
Abstract
The invention provides a vehicle positioning method, system, equipment and computer-readable storage medium. The vehicle positioning method comprises the following steps: acquiring the current geographic information of a vehicle, and using it to search for and determine the map data of the parking area the vehicle is about to enter; acquiring panoramic image data of the parking area when the vehicle enters it; extracting feature information from the panoramic image data of the parking area and matching the extracted feature information against the map data of the parking area to obtain the pose data of the vehicle in that map data; and, taking the mileage positioning data of the vehicle calculated in real time as the prediction input, updating the pose of the vehicle in the map coordinate system using the pose data of the vehicle in the map data of the parking area. The invention adapts well to different scenes; its positioning accuracy meets the requirement for unmanned valet parking of passenger cars in parking lots and satisfies the mass-production requirements of an autonomous valet parking system.
Description
Technical Field
The invention belongs to the fields of automotive electronics and computer technology, relates to a positioning method and system, and particularly relates to a vehicle positioning method, system, equipment and computer-readable storage medium.
Background
With the development of society, the number of household passenger cars has increased and so has demand for parking space; problems such as difficulty in parking and in finding one's vehicle have become increasingly prominent. Intelligent valet parking technology has been developed to address these problems. By degree of automation, current smart parking technologies can be classified into Automated Parking Assist (APA), home-zone memory parking (Homezone Parking Pilot), and Autonomous Valet Parking (AVP). For AVP, a high-precision positioning function is the basis and premise. A variety of high-precision positioning techniques exist for passenger cars, such as outdoor Real-Time Kinematic (RTK) positioning, lidar positioning, inertial navigation positioning, and camera-based visual positioning. However, the AVP function is mainly applied in large indoor parking lots, where there is no RTK signal, illumination changes markedly, and scene features are scarce or highly self-similar in texture. Moreover, the existing positioning technologies listed above suffer from expensive hardware, high usage and maintenance costs, and weak adaptability to such positioning scenes, which restricts the application and development of autonomous valet parking technology.
Therefore, how to provide a vehicle positioning method, system, equipment and computer-readable storage medium that overcomes the defects of the existing positioning technologies, namely expensive hardware, high usage and maintenance costs, and weak scene adaptability, which restrict the application and development of autonomous valet parking technology and leave positioning accuracy short of the requirement for unmanned valet parking of a passenger car in a parking lot, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present invention is to provide a vehicle positioning method, system, equipment and computer-readable storage medium that solve the problems of the prior art, namely expensive hardware, high usage and maintenance costs, and weak adaptability to positioning scenes, which restrict the application and development of autonomous valet parking technology, with positioning accuracy that does not meet the requirement for unmanned valet parking of a passenger car in a parking lot.
To achieve the above and other related objects, an aspect of the present invention provides a positioning method of a vehicle, including: acquiring current geographic information of a vehicle, and searching and determining map data of a parking area to be accessed by the vehicle through the current geographic information of the vehicle; when a vehicle enters the parking area, panoramic image data of the parking area are obtained; extracting feature information from the panoramic image data of the parking area, and matching and positioning the extracted feature information with map data of the parking area to obtain pose data of the vehicle in the map data of the parking area; and taking the mileage positioning data of the vehicle estimated in real time as prediction input, and updating the pose of the vehicle under a map coordinate system by utilizing the pose data in the map data of the vehicle in the parking area.
In an embodiment of the present invention, the map data of the parking area comprise a semantic layer of the parking area and a feature layer of the parking area. The step of extracting feature information from the panoramic image data of the parking area and matching the extracted feature information against the map data of the parking area to obtain the pose data of the vehicle in the map data of the parking area comprises the following steps: extracting semantic information from the panoramic image data of the parking area and performing semantic matching positioning between the extracted semantic information and the semantic layer of the parking area to obtain the pose data of the vehicle in the semantic layer, and/or extracting image feature points from the panoramic image data of the parking area and performing feature matching positioning between the extracted image feature points and the feature layer of the parking area to obtain the pose data of the vehicle in the feature layer.
In an embodiment of the present invention, the step of extracting semantic information from the panoramic image data of the parking area and performing semantic matching positioning between the extracted semantic information and the semantic layer of the parking area to obtain the pose data of the vehicle in the semantic layer includes: detecting, from the panoramic image data of the parking area, the pixel points of each semantic element in the vehicle body coordinate system at the current moment; looking up the position information of the vehicle at the current moment from the mileage positioning data of the vehicle calculated in real time; according to the position information of the vehicle at the current moment, projecting the pixel points of each semantic element onto the semantic layer of the parking area and finding, on the semantic layer, the semantic-map pixel point closest to each projected pixel point; and acquiring the pose data of the vehicle in the semantic layer based on the pixel points of each semantic element in the vehicle body coordinate system and the matched map pixel points on the semantic layer, wherein the pose data of the vehicle in the semantic layer comprise a first attitude quantity and a first position quantity from the vehicle body coordinate system to the semantic map coordinate system.
In an embodiment of the present invention, the step of obtaining pose data of the vehicle in the semantic layer includes:
Based on the pixel point coordinates of each semantic element in the vehicle body coordinate system, and the first attitude quantity and first position quantity from the vehicle body coordinate system to the semantic map coordinate system, determining the relation between these three quantities and the map pixel point matched with each semantic element pixel point; projecting the semantic element pixel points extracted from the semantic image into the semantic map to obtain projected pixel coordinates, and minimizing the error between the projected pixel coordinates and the matched map pixel coordinates, so as to obtain the first attitude quantity and first position quantity from the vehicle body coordinate system to the map coordinate system under the minimized error.
In an embodiment of the present invention, the step of extracting image feature points from the panoramic image data of the parking area and performing feature matching positioning between the extracted image feature points and the feature layer of the parking area to obtain the pose data of the vehicle in the feature layer includes: computing, in turn, the matching degree between each image feature point extracted from the panoramic image data of the parking area and the three-dimensional map points in the feature layer of the parking area; taking the three-dimensional map point with the maximum matching degree as the known three-dimensional map point in the feature layer matched with that image feature point; and acquiring the pose data of the vehicle in the feature layer based on the image feature points extracted from the panoramic image data of the parking area and the map points matched with them on the feature layer. The pose data of the vehicle in the feature layer comprise a second attitude quantity and a second position quantity from the vehicle body coordinate system to the feature map coordinate system.
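The maximum-matching-degree search described above can be sketched as a brute-force descriptor comparison. The patent does not fix the matching metric; cosine similarity and the score threshold below are illustrative assumptions.

```python
import numpy as np

def match_features(desc_img, desc_map, min_score=0.8):
    """For each image feature descriptor, find the map point whose
    descriptor has the highest matching degree (cosine similarity here;
    the metric is an assumption). Returns (img_idx, map_idx) pairs whose
    best score reaches min_score."""
    A = desc_img / np.linalg.norm(desc_img, axis=1, keepdims=True)
    B = desc_map / np.linalg.norm(desc_map, axis=1, keepdims=True)
    scores = A @ B.T                     # pairwise matching degrees
    best = scores.argmax(axis=1)         # map point of maximum degree
    return [(i, int(j)) for i, j in enumerate(best)
            if scores[i, j] >= min_score]
```

In practice the threshold (or a ratio test) is what rejects feature points that have no true counterpart in the feature layer.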
In an embodiment of the present invention, the step of acquiring the second attitude quantity and the second position quantity from the vehicle body coordinate system to the feature map coordinate system includes: according to the image feature points extracted from the panoramic image data of the parking area, and the second attitude quantity and second position quantity from the vehicle body coordinate system to the feature map coordinate system, determining the relation between these three quantities and the matched three-dimensional map points in the feature layer of the parking area; and re-projecting the three-dimensional map points in the feature layer onto the feature detection image to obtain re-projected pixel coordinates, and minimizing the error between the re-projected pixel coordinates and the matched image feature point coordinates, so as to obtain the second attitude quantity and second position quantity from the vehicle body coordinate system to the feature map coordinate system under the minimized error.
In an embodiment of the present invention, the step of updating and solving the pose of the vehicle under the map coordinate system by using the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer by using the mileage positioning data of the vehicle calculated in real time as a prediction input includes: acquiring a vehicle pose prediction result at the current moment through mileage positioning data of a vehicle calculated in real time; calculating the credibility of the vehicle pose prediction result at the current moment; generating correction parameters for correcting the vehicle pose prediction result at the current moment based on the credibility of the vehicle pose prediction result at the current moment; and carrying out fusion processing on pose data of the vehicle in the semantic layer or pose data of the vehicle in the feature layer by using the correction parameters so as to correct a vehicle pose prediction result at the current moment and form a pose of the vehicle under a map coordinate system.
In an embodiment of the present invention, the step of performing fusion processing on the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer using the correction parameters further includes: receiving the pose data of the vehicle in the semantic layer or in the feature layer according to a preset receiving frequency; a matching weight is generated from the matching distances when the semantic layer is matched, and the semantic-layer pose data are accepted only when the matching weight is greater than a weight threshold; a matched-feature-point count is generated when the feature layer is matched, and the feature-layer pose data are accepted only when that count is greater than a count threshold.
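The acceptance logic in this paragraph can be gathered into a small gate placed in front of the fusion step. The class name, period and thresholds below are illustrative placeholders, not values from the patent.

```python
class ObservationGate:
    """Gate map-layer pose observations before fusion: enforce a preset
    receiving frequency, then a per-layer quality check (matching weight
    for the semantic layer, matched-feature count for the feature layer)."""

    def __init__(self, period_s=0.1, weight_threshold=0.6, count_threshold=30):
        self.period_s = period_s
        self.weight_threshold = weight_threshold
        self.count_threshold = count_threshold
        self._last_t = None  # time of last accepted observation

    def accept(self, t, layer, quality):
        # Enforce the preset receiving frequency.
        if self._last_t is not None and t - self._last_t < self.period_s:
            return False
        # Per-layer quality gate.
        if layer == "semantic":
            ok = quality > self.weight_threshold
        else:  # "feature" layer: quality is the matched-point count
            ok = quality > self.count_threshold
        if ok:
            self._last_t = t
        return ok
```

Rejected observations simply never reach the filter; the prediction step keeps running on odometry alone, which is what lets the system bridge short gaps without observations.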
Another aspect of the present invention provides a positioning system of a vehicle, comprising: the first acquisition module is used for acquiring the current geographic information of the vehicle, searching and determining map data of a parking area to be accessed by the vehicle through the current geographic information of the vehicle; the second acquisition module is used for acquiring panoramic image data of the parking area when the vehicle enters the parking area; the matching module is used for extracting characteristic information from the panoramic image data of the parking area, and matching and positioning the extracted characteristic information with map data of the parking area so as to obtain pose data of the vehicle in the map data of the parking area; and the prediction and update module is used for taking the mileage positioning data of the vehicle calculated in real time as prediction input, and updating and solving the pose of the vehicle under a map coordinate system by utilizing the pose data in the map data of the vehicle in the parking area.
Still another aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the vehicle positioning method described above.
A final aspect of the present invention provides a positioning apparatus for a vehicle, comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored by the memory so as to enable the positioning device of the vehicle to execute the positioning method of the vehicle.
As described above, the vehicle positioning method, system, device and computer-readable storage medium of the present invention have the following beneficial effects:
The vehicle positioning method, system, equipment and computer-readable storage medium have strong scene adaptability. Acquiring the positioning data requires only four low-cost surround-view fisheye cameras and an IMU, avoiding expensive sensors such as lidar as well as sensors such as RTK whose performance is constrained by the external environment. The positioning accuracy meets the requirement for unmanned valet parking of a passenger car in a parking lot and satisfies the mass-production requirements of an autonomous valet parking system.
Drawings
Fig. 1 is a flow chart of a vehicle positioning method according to an embodiment of the invention.
Fig. 2 shows a schematic flow chart of an implementation of S13 of the present invention.
Fig. 3 shows a schematic flow chart of another implementation of S13 of the present invention.
Fig. 4 shows a schematic diagram of the principle of S14 of the present invention.
Fig. 5 shows a flow chart of S14 of the present invention.
Fig. 6 is a schematic structural diagram of a positioning system of a vehicle according to an embodiment of the invention.
Description of element reference numerals
6. Positioning system for vehicle
61. First acquisition module
62. Second acquisition module
63. Matching module
64. Prediction and update module
631. Semantic matching positioning unit
632. Feature matching positioning unit
Steps S11 to S15
Steps S131 to S134
Steps S131′ to S133′
Steps S141 to S144
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the following disclosure, which describes embodiments of the present invention with reference to specific examples. The invention may also be practiced or carried out in other, different embodiments, and the details of the present description may be modified or varied in various respects without departing from the spirit of the present invention. It should be noted that, without conflict, the following embodiments and the features therein may be combined with each other.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention by way of illustration, and only the components related to the present invention are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
The technical principles of the vehicle positioning method, system, equipment and computer readable storage medium of the invention are as follows:
The vehicle terminal receives Global Navigation Satellite System (GNSS) data, compares it with a cloud map database containing GNSS information, obtains the high-precision map of the corresponding parking lot, and downloads that map to the vehicle terminal. The high-precision map comprises a semantic layer and a feature layer. The wheel-speed pulses of the vehicle are combined with an Inertial Measurement Unit (IMU) to obtain odometer positioning data by pose recurrence. The vehicle terminal uses four low-cost surround-view fisheye cameras to extract semantic perception information and visual feature information, and applies map semantic matching and feature matching algorithms to locate the vehicle in the map. The odometer positioning data, semantic-layer positioning data and feature-layer positioning data are fed into an Extended Kalman Filter (EKF): the odometer data serve as the prediction input and the two layers' positioning data as the observation inputs. The filter smooths the positioning observations of the two layers, still outputs position data when observations are briefly absent, and outputs the fused vehicle pose in the final high-precision map coordinate system.
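The predict/update loop just described can be sketched as a minimal filter over the planar pose (x, y, θ). Treating the map-matched pose as a direct observation (H = I) is a simplifying assumption, and all noise values are illustrative; the patent does not publish its filter parameters.

```python
import numpy as np

class PoseFilter:
    """Minimal EKF-style fusion over state [x, y, theta]: odometry drives
    the prediction, map-matched poses are direct observations (H = I)."""

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)  # state [x, y, theta]
        self.P = np.asarray(P0, dtype=float)  # state covariance
        self.Q = np.asarray(Q, dtype=float)   # process (odometry) noise
        self.R = np.asarray(R, dtype=float)   # observation noise

    def predict(self, ds, da):
        # Dead-reckoning motion model (the patent's pose recurrence).
        x, y, th = self.x
        self.x = np.array([x + ds * np.cos(th + 0.5 * da),
                           y + ds * np.sin(th + 0.5 * da),
                           th + da])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1.0, 0.0, -ds * np.sin(th + 0.5 * da)],
                      [0.0, 1.0,  ds * np.cos(th + 0.5 * da)],
                      [0.0, 0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        # Map-layer pose observed directly, so H = I.
        innov = np.asarray(z, dtype=float) - self.x
        S = self.P + self.R                  # innovation covariance
        K = self.P @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ innov
        self.P = (np.eye(3) - K) @ self.P
```

Between map-matching fixes the filter keeps calling `predict`, which is what allows positions to be output while no observations arrive.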
Example 1
The embodiment provides a positioning method of a vehicle, which includes:
acquiring current geographic information of a vehicle, and searching and determining map data of a parking area to be accessed by the vehicle through the current geographic information of the vehicle;
When a vehicle enters the parking area, panoramic image data of the parking area are obtained;
extracting feature information from the panoramic image data of the parking area, and matching and positioning the extracted feature information with map data of the parking area to obtain pose data of the vehicle in the map data of the parking area;
and taking the mileage positioning data of the vehicle calculated in real time as prediction input, and updating and solving the pose of the vehicle under a map coordinate system by utilizing the pose data in the map data of the vehicle in the parking area.
The positioning method of the vehicle provided by the present embodiment will be described in detail below with reference to the drawings. Referring to fig. 1, a flow chart of a vehicle positioning method in an embodiment is shown. As shown in fig. 1, the vehicle positioning method specifically includes the following steps:
s11, acquiring current geographic information of the vehicle, and searching and determining map data of a parking area to be accessed by the vehicle through the current geographic information of the vehicle.
In this embodiment, the map data of the parking area the vehicle is about to enter can be found within a preset search range by comparing the current geographic information of the vehicle with a cloud map database containing GNSS information.
In the absence of GNSS data, the map data of the parking area the vehicle is about to enter can also be selected manually on the in-vehicle human-machine interface.
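As a sketch of the GNSS-based lookup, assuming the cloud database can be reduced to a dictionary of parking-lot IDs with reference coordinates (the function names and the 500 m search radius are illustrative, not from the patent):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def find_parking_map(vehicle_fix, map_index, search_radius_m=500.0):
    """Return the id of the closest mapped parking lot within the preset
    search radius, or None when no map qualifies (e.g. no nearby lot)."""
    best_id, best_d = None, search_radius_m
    for lot_id, (lat, lon) in map_index.items():
        d = haversine_m(vehicle_fix[0], vehicle_fix[1], lat, lon)
        if d <= best_d:
            best_id, best_d = lot_id, d
    return best_id
```

A `None` result corresponds to the fallback path above: the driver selects the parking-area map manually on the human-machine interface.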
In this embodiment, the map data of the parking area include a semantic layer of the parking area and a feature layer of the parking area. The map paths and parking-lot information of the two layers are completely consistent and share the same map coordinate system, which ensures that the independent positioning results of the two layers, and hence the output positioning results, are consistent.
And S12, when the vehicle enters the parking area, acquiring panoramic image data of the parking area.
In this embodiment, four fisheye cameras installed around the vehicle are calibrated to form an Around View Monitor (AVM) panoramic image system that collects the panoramic image data of the parking area, so that the vehicle end obtains the panoramic image data of the parking area.
And S13, extracting characteristic information from the panoramic image data of the parking area, and matching and positioning the extracted characteristic information with map data of the parking area to obtain pose data of the vehicle in the map data of the parking area.
In this embodiment, the feature information extracted from the panoramic image data of the parking area includes semantic information and/or image feature points.
Specifically, S13 includes: extracting semantic information from the panoramic image data of the parking area and performing semantic matching positioning between the extracted semantic information and the semantic layer of the parking area to obtain the pose data of the vehicle in the semantic layer, and/or extracting image feature points from the panoramic image data of the parking area and performing feature matching positioning between the extracted image feature points and the feature layer of the parking area to obtain the pose data of the vehicle in the feature layer.
Referring to fig. 2, an implementation flow chart shown as S13 is shown, in which the step of extracting semantic information from panoramic image data of the parking area in S13, and performing semantic matching positioning on the extracted semantic information and a semantic layer of the parking area to obtain pose data of the vehicle in the semantic layer includes:
s131, detecting pixel points of all semantic elements of the vehicle under a vehicle body coordinate system at the current moment from the panoramic image data of the parking area.
In this embodiment, semantic elements including a parking space line, a deceleration strip, a road edge, and the like are detected in panoramic image data of the parking area by using a deep learning technique.
S132, the position information of the vehicle at the current moment is looked up from the mileage positioning data of the vehicle estimated in real time; the pixel coordinate of the i-th semantic element in the vehicle body coordinate system is denoted X_i.
In the present embodiment, the mileage positioning data of the vehicle can be estimated from the travel distance and travel speed provided by the wheel-speed pulses of the vehicle, together with the rotation-angle data in three directions provided by the IMU.
Specifically, the mileage positioning data of the vehicle include the horizontal-plane position (x, y) and the heading angle θ of the vehicle motion, computed by the following recurrence:
x_k = x_{k-1} + ds × cos(θ_{k-1} + 0.5 × da)
y_k = y_{k-1} + ds × sin(θ_{k-1} + 0.5 × da)
θ_k = θ_{k-1} + da
wherein x_k, y_k, θ_k are the position coordinates and heading angle of the vehicle at the current moment, x_{k-1}, y_{k-1}, θ_{k-1} are the position coordinates and heading angle of the vehicle at the previous moment, ds is the displacement change between the two moments provided by the wheel-speed pulses, and da is the heading change between the two moments provided by the IMU.
S133, according to the position information of the vehicle at the current moment, the pixel point X_i of each semantic element is projected onto the semantic layer of the parking area, and the map pixel point x_i nearest to it on the semantic layer is searched.
S134, acquiring the pose data of the vehicle in the semantic layer based on the pixel points of each semantic element in the vehicle body coordinate system and the matched map pixel points on the semantic layer. The pose data of the vehicle in the semantic layer comprises a first attitude quantity R and a first position quantity t from the vehicle body coordinate system to the semantic map coordinate system.
In this embodiment, the step S134 includes the following steps:
Based on the pixel points of each semantic element in the vehicle body coordinate system, the first attitude quantity and the first position quantity from the vehicle body coordinate system to the semantic map coordinate system, determining the correlation between the matched map pixel points and these three quantities;
In this embodiment, the correlation between the map pixel point matched with the pixel point of each semantic element and the three quantities is: x_i = R·X_i + t.
Projecting the semantic element pixel points extracted from the semantic image into a semantic map to obtain projection pixel point coordinates, and minimizing errors between the projection pixel point coordinates and the matched map pixel point coordinates so as to obtain a first posture amount and a first position amount from a corresponding vehicle body coordinate system to a map coordinate system under the condition of minimizing the errors.
In this embodiment, the error between the projected pixel coordinates and the matched map pixel coordinates is e_i = x_i − (R·X_i + t). Let S be the set of all matched pixel points at that moment; the errors of all observed matched pixel points at that moment are summed and minimized as shown in the following equation:

(R̂, t̂) = argmin_{R,t} Σ_{i∈S} ‖x_i − (R·X_i + t)‖²

wherein X_i is the pixel point coordinate of the i-th semantic element detected in the panoramic image data of the parking area in the vehicle body coordinate system, x_i is the matched map pixel point on the semantic layer nearest to X_i, S is the set of all matched semantic element pixel points, and R̂ and t̂ are respectively the first attitude quantity and the first position quantity from the vehicle body coordinate system to the map coordinate system to be solved. The optimal solution R̂, t̂ of the pose in the semantic layer can be obtained by iteratively minimizing the error sum with a nonlinear optimization method.
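The patent minimizes this error sum iteratively with a nonlinear optimization method. As a hedged illustration of what the optimum looks like: when the matches are 2D point pairs, the same least-squares problem also admits a closed-form SVD solution (Kabsch/Umeyama). This is not the method the patent claims, only a sketch; all names are illustrative.

```python
import numpy as np

def align_2d(body_pts, map_pts):
    """Closed-form least-squares fit of R, t minimizing
    sum_i || map_pts[i] - (R @ body_pts[i] + t) ||^2  (Kabsch/Umeyama).

    body_pts, map_pts: (N, 2) arrays of matched points X_i and x_i.
    """
    X = np.asarray(body_pts, dtype=float)
    x = np.asarray(map_pts, dtype=float)
    Xc, xc = X.mean(axis=0), x.mean(axis=0)
    H = (X - Xc).T @ (x - xc)                     # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = xc - R @ Xc
    return R, t

# Synthetic check: rotate body-frame points by 30 deg and shift by (2, 1).
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(20, 2))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
x = X @ R_true.T + np.array([2.0, 1.0])
R_est, t_est = align_2d(X, x)
print(np.allclose(R_est, R_true), np.allclose(t_est, [2.0, 1.0]))  # True True
```

In practice the iterative nonlinear solver the patent describes handles outliers and re-matching per iteration, which the closed form does not.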
In this embodiment, through the vehicle positioning of the semantic layer, the output positioning data is the pose of the vehicle in the map coordinate system, including position data and attitude data. This layer positioning method is little affected by the illumination of the parking lot, and the semantic information used is the inherent semantic element information of the parking lot, such as deceleration strips, parking space lines, road edges, etc. It can meet the requirement of long-term localization, the map does not need to be updated frequently, and positioning only requires four low-cost surround-view cameras around the vehicle body, so the mapping and positioning cost is greatly reduced.
Referring to fig. 3, another implementation flow chart of S13 is shown. In S13, the step of extracting image feature points from the panoramic image data of the parking area and performing feature matching positioning between the extracted image feature points and the feature layer of the parking area to obtain pose data of the vehicle in the feature layer includes:
S131', sequentially performing matching degree calculation between the image feature points extracted from the panoramic image data of the parking area and the three-dimensional map points in the feature layer of the parking area.
In the present embodiment, panoramic image data of a parking area is acquired by using four fisheye cameras installed around a vehicle, and after the panoramic image data is subjected to distortion correction, image feature points in the panoramic image data are extracted. In the feature layer, the image feature points refer to pixel corner points, edge contour pixel points and the like with more remarkable changes in the image.
Specifically, according to the gray value changes of the pixels in the image, this embodiment extracts the corner points and contour points of the image, and describes the gray changes of the pixels around each corner point or contour point with a binary code of a fixed number of bits.
S132', selecting the three-dimensional map point with the highest matching degree; the image feature point extracted from the panoramic image data of the parking area is then considered to be matched with the corresponding three-dimensional map point in the feature layer.
Specifically, the image feature points extracted from the image are matched against the map three-dimensional points pre-stored in the feature layer with binary codes of the same bit length. The fewer the positions at which the two equal-length binary codes differ (i.e., the smaller the Hamming distance), the higher the matching degree between the pixel feature point and the map three-dimensional point. The map point in the feature layer with the highest matching degree to an image feature point is the corresponding map point matched with that image feature point.
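The bit-wise comparison of equal-length binary codes described here is a Hamming-distance match. A minimal sketch (the 8-bit descriptors and names are illustrative; real binary descriptors, e.g. from an ORB-style detector, would typically be 256 bits — an assumption, not stated in the patent):

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bit positions between two equal-length binary codes."""
    return bin(a ^ b).count("1")

def best_match(descriptor: int, map_descriptors):
    """Index of the map descriptor with the smallest Hamming distance
    to the query descriptor, i.e. the highest matching degree."""
    return min(range(len(map_descriptors)),
               key=lambda j: hamming(descriptor, map_descriptors[j]))

query = 0b10110100
candidates = [0b00001111, 0b10110110, 0b11111111]
print(best_match(query, candidates))  # 1 (differs in only one bit)
```

A production matcher would also apply a distance threshold or ratio test so that a "best" match that is still far away is rejected rather than accepted.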
S133', acquiring pose data of the vehicle in the feature layer based on the image feature points extracted from the panoramic image data of the parking area and the three-dimensional map points matched with the image feature points on the feature layer. The pose data of the vehicle in the feature map layer comprises a second pose quantity and a second position quantity from a vehicle body coordinate system to a feature map coordinate system.
In this embodiment, the step of obtaining the second posture amount and the second position amount from the vehicle body coordinate system to the feature map coordinate system in S133' includes:
The correlation between the matched three-dimensional map points in the feature layer of the parking area and the three quantities is determined according to the image feature points extracted from the panoramic image data of the parking area, and the second attitude quantity and the second position quantity from the vehicle body coordinate system to the feature map coordinate system.
For example, let x_j be an image feature point extracted from the image at the current moment and X_j the three-dimensional map point matched with x_j. The correlation between the matched map point, the image feature point, and the second attitude quantity R and second position quantity t from the vehicle body coordinate system to the feature map coordinate system is:

x_j = K(R·X_j + t)
wherein K is the projection matrix from three-dimensional map points in the camera coordinate system to image feature points, namely the camera intrinsic matrix, which is calibrated in advance.
And re-projecting the three-dimensional map points in the feature map layer onto the feature detection image to obtain re-projected pixel coordinates, and minimizing errors between the re-projected pixel coordinates and the matched image feature point coordinates so as to obtain second attitude amounts and second position amounts from the corresponding vehicle body coordinate system to the feature map coordinate system under the condition of minimized errors.
In this embodiment, the error between the re-projected pixel coordinates and the matched image feature point coordinates is e_j = x_j − K(R·X_j + t), where S is the set of all matched feature points at the current moment. By minimizing the error sum with a nonlinear optimization method as shown in the following formula, the optimal solution of the pose in the feature layer, namely the second attitude quantity R̂ and the second position quantity t̂, can be iteratively solved:

(R̂, t̂) = argmin_{R,t} Σ_{j∈S} ‖x_j − K(R·X_j + t)‖²
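A minimal sketch of the re-projection residual e_j = x_j − K(R·X_j + t), with the homogeneous perspective divide written out explicitly (the compact formula implies it). The intrinsic matrix, map points, and all names are illustrative assumptions, not values from the patent:

```python
import numpy as np

def reprojection_errors(K, R, t, map_points, image_points):
    """Residuals between observed pixels x_j and map points X_j
    re-projected through the pinhole model x_j = K (R X_j + t)."""
    P = (R @ np.asarray(map_points, dtype=float).T).T + t  # camera-frame points
    proj = (K @ P.T).T
    proj = proj[:, :2] / proj[:, 2:3]                      # perspective divide
    return np.asarray(image_points, dtype=float) - proj

K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])                      # assumed intrinsics
X = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 4.0]])          # map points, metres
R, t = np.eye(3), np.zeros(3)
obs = np.array([[320.0, 240.0], [420.0, 190.0]])           # perfectly matching pixels
print(np.allclose(reprojection_errors(K, R, t, X, obs), 0.0))  # True
```

In the full pipeline these residuals would be stacked over the matched set S and fed to a nonlinear least-squares solver over R and t.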
In this embodiment, the matching positioning of the feature layer may implement the initial positioning of the vehicle in the global map, where the initial position is particularly important, and may provide an initial value for the semantic layer positioning described above, or may provide an initial value for the subsequent filter positioning. Meanwhile, the calculated amount of the feature layer is small, the speed is high, and the positioning cost is low.
S14, using the mileage positioning data of the vehicle estimated in real time as the prediction input, and updating and solving the pose of the vehicle in the map coordinate system by using the pose data of the vehicle in the map data of the parking area.
In this embodiment, as shown in fig. 4, the odometer positioning data, the pose data of the vehicle in the semantic layer, and the pose data of the vehicle in the feature layer are input into an extended Kalman filter (EKF, Extended Kalman Filter). The extended Kalman filter can handle nonlinear system states, and optimally estimates the nonlinear system state from the system input data.
Referring to fig. 5, a flow chart of S14 is shown. As shown in fig. 5, the step S14 specifically includes the following steps:
S141, acquiring the vehicle pose prediction result at the current moment through the mileage positioning data of the vehicle estimated in real time.
In this embodiment, let the state quantity of the system to be estimated, namely the vehicle pose data, be X = (x, y, θ). After the odometer data arrive, the vehicle pose prediction result X̂_k⁻ at the current moment k can be predicted from the posterior state X̂_{k-1} at the previous moment k-1 through the prediction function f, which comprises the odometer positioning recurrence relations: x_k = x_{k-1} + ds × cos(θ_{k-1} + 0.5 × da); y_k = y_{k-1} + ds × sin(θ_{k-1} + 0.5 × da); θ_k = θ_{k-1} + da.
S142, calculating the reliability P_k⁻ of the vehicle pose prediction result X̂_k⁻ at the current moment. The calculation formula is P_k⁻ = F·P_{k-1}·Fᵀ + R_k, wherein P_{k-1} is the covariance matrix obtained by calculating the pose posterior estimate at the previous moment, which expresses the correlation between the state quantities; R_k is the prediction noise of the odometer, which can be given directly according to the inherent properties of the sensor or engineering experience; and F is the Jacobian matrix of the conversion function f with respect to the pose variables, calculated as:

F = [1, 0, −v_{k-1}·sin(θ_{k-1})·Δt; 0, 1, v_{k-1}·cos(θ_{k-1})·Δt; 0, 0, 1]

wherein v_{k-1} is the forward speed of the vehicle at the previous moment, θ_{k-1} is the heading angle in the vehicle pose at the previous moment, and Δt is the time difference between the current moment and the previous moment.
S143, generating the correction parameter K_k for correcting the vehicle pose prediction result at the current moment, based on the reliability P_k⁻ of the vehicle pose prediction result X̂_k⁻ at the current moment.
In practical application, the correction parameter K_k for correcting the vehicle pose prediction result at the current moment is the Kalman gain of the Kalman filter, calculated as K_k = P_k⁻·Hᵀ·(H·P_k⁻·Hᵀ + Q_k)⁻¹, wherein Q_k is the given observation noise matrix, which can be given directly according to the inherent properties of the sensor used or engineering experience, and H is the conversion matrix from the state quantity to the observation quantity. Since the observed input quantity of the vehicle is the same as the state quantity to be estimated, H is the identity matrix.
S144, using the correction parameter to fuse the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer, so as to update and correct the vehicle pose prediction result at the current moment and form the pose X̂_k of the vehicle in the map coordinate system.

In the present embodiment, the pose X̂_k of the vehicle in the map coordinate system is obtained by the calculation formula X̂_k = X̂_k⁻ + K_k·(z_k − h(X̂_k⁻)), wherein z_k is the observed pose data of the vehicle in the semantic layer or the feature layer, and the function h contains the identity matrix transformation relationship, i.e. without any conversion the result is still X̂_k⁻.
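Steps S141-S144 can be sketched as one predict/update cycle of an extended Kalman filter. This is a hedged illustration: the posterior covariance update P_k = (I − K_k)·P_k⁻ is standard EKF practice implied rather than written in the text, and all names and sample values are assumptions:

```python
import numpy as np

def ekf_step(X_prev, P_prev, ds, da, dt, v_prev, z, Rk, Qk):
    """One predict/update cycle over the pose state X = (x, y, theta).

    The map-layer observation z is itself a pose, so H = h = identity,
    as stated in the text.
    """
    x, y, th = X_prev
    # S141: prediction from the odometer recurrence.
    X_pred = np.array([x + ds * np.cos(th + 0.5 * da),
                       y + ds * np.sin(th + 0.5 * da),
                       th + da])
    # S142: reliability of the prediction, P_k- = F P F^T + R_k.
    F = np.array([[1.0, 0.0, -v_prev * np.sin(th) * dt],
                  [0.0, 1.0,  v_prev * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P_prev @ F.T + Rk
    # S143: Kalman gain with H = I.
    K = P_pred @ np.linalg.inv(P_pred + Qk)
    # S144: fuse the semantic/feature-layer pose observation z.
    X_new = X_pred + K @ (z - X_pred)
    P_new = (np.eye(3) - K) @ P_pred   # standard EKF posterior covariance
    return X_new, P_new

# With near-zero observation noise the gain approaches I and the fused
# pose follows the map-layer observation.
X_new, P_new = ekf_step(np.zeros(3), np.eye(3) * 0.1, ds=1.0, da=0.0,
                        dt=0.1, v_prev=10.0, z=np.array([1.02, 0.01, 0.0]),
                        Rk=np.eye(3) * 0.01, Qk=np.eye(3) * 1e-9)
print(np.allclose(X_new, [1.02, 0.01, 0.0], atol=1e-3))  # True
```

Each cycle's output (X_new, P_new) becomes (X̂_{k-1}, P_{k-1}) for the next cycle, matching the recursion described below.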
In addition, to keep the fused result smooth, S144 includes a screening strategy for the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer.
Specifically, the S144 includes:
According to the preset receiving frequency of the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer, receiving the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer;
on the basis of the preset receiving frequency, the observation data are further screened according to the following strategies:
The pose data of the vehicle in the semantic layer generates a matching weight according to the matching distance during semantic layer matching, with the formula w = e^(−d²/σ²), where the constant e is the base of the natural logarithm, d is the matching distance, and σ can be given according to the inherent properties of the sensor or engineering experience. When the matching weight is greater than the weight threshold, the pose data of the vehicle in the semantic layer passes the screening and is received;
and counting the matching quantity of feature points when the pose data in the feature layers of the vehicle are matched with the feature layers, and receiving the pose data of the vehicle in the feature layers through screening when the matching quantity of the feature points of the current image is larger than a threshold value.
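The two screening rules above can be sketched as follows. This is a hedged illustration assuming the matching weight takes the Gaussian form w = e^(−d²/σ²) suggested by the text; all thresholds and names are illustrative:

```python
import math

def semantic_weight(d: float, sigma: float) -> float:
    """Matching weight w = exp(-d^2 / sigma^2) from the matching distance d."""
    return math.exp(-(d * d) / (sigma * sigma))

def accept_semantic(d: float, sigma: float, weight_threshold: float) -> bool:
    """Semantic-layer pose passes screening when its weight exceeds the threshold."""
    return semantic_weight(d, sigma) > weight_threshold

def accept_feature(num_matched_points: int, count_threshold: int) -> bool:
    """Feature-layer pose passes screening when enough feature points matched."""
    return num_matched_points > count_threshold

print(accept_semantic(d=0.2, sigma=0.5, weight_threshold=0.5))    # True
print(accept_semantic(d=1.5, sigma=0.5, weight_threshold=0.5))    # False
print(accept_feature(num_matched_points=80, count_threshold=30))  # True
```

Observations rejected by either rule are simply not fed into the filter update, so the prediction carries the pose forward until a trustworthy observation arrives.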
Then the pose data at the current moment k is taken as the pose at the previous moment, the pose at moment k+1 is taken as the current pose to be solved, and S141-S144 are calculated cyclically, so that the whole filtering positioning process is updated and corrected as a cyclic recursion.
The above filtering positioning process also requires setting initial values at the initial moment, namely X̂_0 and P_0. In this embodiment, the map data positioning result of S13 at the initial moment is directly assigned to X̂_0, and P_0 is set according to engineering experience, so as to perform the subsequent recursive updating process. The whole flow thus forms a closed-loop solution.
In this embodiment, the Kalman filter achieves multi-sensor data fusion positioning. It can smooth the positioning observations of the semantic layer and the feature layer to ensure that the positioning result does not jump significantly, can still output position data for a short time when no observation is available, and finally fuses and outputs the pose in the high-precision map coordinate system.
The vehicle positioning method has strong scene adaptability. Only four low-cost surround-view fisheye cameras and an IMU are needed to acquire positioning data, avoiding expensive sensors such as lidar and the constraints the external environment imposes on sensors such as RTK. The positioning accuracy meets the accuracy requirement of unmanned valet parking of a passenger vehicle in a parking lot and the mass production requirement of an autonomous valet parking system for passenger vehicles.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of positioning a vehicle as described in fig. 1.
The present application may be a system, method and/or computer program product at any possible level of technical details. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, mechanical encoding device such as a punch card or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program described herein may be downloaded from a computer readable storage medium to the respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards them for storage in a computer readable storage medium in the respective computing/processing device. Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, integrated circuit configuration data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++ and a procedural programming language such as the "C" language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider).
In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information for computer readable program instructions, which can execute the computer readable program instructions.
Example II
The present embodiment provides a positioning system of a vehicle, including:
The first acquisition module is used for acquiring the current geographic information of the vehicle, searching and determining map data of a parking area to be accessed by the vehicle through the current geographic information of the vehicle;
the second acquisition module is used for acquiring panoramic image data of the parking area when the vehicle enters the parking area;
the matching module is used for extracting characteristic information from the panoramic image data of the parking area, and matching and positioning the extracted characteristic information with map data of the parking area so as to obtain pose data of the vehicle in the map data of the parking area;
And the prediction and update module is used for taking the mileage positioning data of the vehicle calculated in real time as prediction input, and updating and solving the pose of the vehicle under a map coordinate system by utilizing the pose data in the map data of the vehicle in the parking area.
The positioning system of the vehicle provided by the present embodiment will be described in detail below with reference to the drawings. Referring to fig. 6, a schematic structural diagram of a positioning system of a vehicle in an embodiment is shown. As shown in fig. 6, the positioning system 6 of the vehicle includes: the device comprises a first acquisition module 61, a second acquisition module 62, a matching module 63 and a prediction and update module 64, wherein the matching module 63 comprises a semantic matching locating unit 631 and a feature matching locating unit 632.
The first acquisition module 61 is configured to obtain the current geographic information of the vehicle, and find and determine the map data of the parking area the vehicle is to enter according to the current geographic information of the vehicle.
In this embodiment, the map data of the parking area includes a semantic layer of the parking area and a feature layer of the parking area. The map paths of the two layers and the related information of the parking lot are completely consistent, and the independent positioning results of the two layers can be ensured to be consistent based on the same map coordinate system and the output positioning results are ensured to be consistent.
The second obtaining module 62 is configured to obtain panoramic image data of the parking area when the vehicle enters the parking area.
In this embodiment, four fisheye cameras mounted around the vehicle are used; the fisheye camera set is calibrated to form an around view monitor (AVM) system to collect panoramic image data of the parking area.
The matching module 63 is configured to extract feature information from panoramic image data of the parking area, and perform matching positioning on the extracted feature information and map data of the parking area to obtain pose data of the vehicle in the map data of the parking area.
In this embodiment, the feature information extracted from the panoramic image data of the parking area includes semantic information and/or image feature points.
Specifically, the semantic matching and positioning unit 631 is configured to extract semantic information from panoramic image data of the parking area, and perform semantic matching and positioning on the extracted semantic information and a semantic layer of the parking area, so as to obtain pose data of the vehicle in the semantic layer.
In this embodiment, the semantic matching location unit 631 is configured to detect pixels of each semantic element of the vehicle under the vehicle body coordinate system at the current moment from the panoramic image data of the parking area; the position information of the vehicle at the current moment is searched through the mileage positioning data of the vehicle calculated in real time; according to the position information of the vehicle at the current moment, projecting the pixel points of each semantic element onto a semantic layer of a parking area, and searching map pixel points closest to the pixel points of each semantic element on the semantic layer; acquiring pose data of the vehicle in the semantic layer based on pixel points of each semantic element of the vehicle under a vehicle body coordinate system and map pixel points of the matched vehicle on the semantic layer; the pose data of the vehicle in the semantic map layer comprises a first pose quantity and a first position quantity from a vehicle body coordinate system to a semantic map coordinate system.
The semantic matching and positioning unit 631 determines the correlation between the matched map pixel points and the three quantities based on the pixel points of each semantic element in the vehicle body coordinate system and the first attitude quantity and first position quantity from the vehicle body coordinate system to the semantic map coordinate system; it projects the semantic element pixel points extracted from the semantic image into the semantic map to obtain projected pixel coordinates, and minimizes the error between the projected pixel coordinates and the matched map pixel coordinates, so as to obtain the first attitude quantity and first position quantity from the vehicle body coordinate system to the map coordinate system under the minimized error, thereby obtaining the pose data of the vehicle in the semantic layer.
The feature matching and positioning unit 632 is configured to extract image feature points from panoramic image data of the parking area, and perform feature matching and positioning on the provided image feature points and a feature layer of the parking area, so as to obtain pose data of the vehicle in the feature layer.
In this embodiment, the feature matching and positioning unit 632 is configured to sequentially perform matching degree calculation between the image feature points extracted from the panoramic image data of the parking area and the three-dimensional map points in the feature layer of the parking area; select the three-dimensional map point with the highest matching degree, whereupon the image feature point extracted from the panoramic image data of the parking area is considered to be matched with the corresponding three-dimensional map point in the feature layer; and acquire the pose data of the vehicle in the feature layer based on the image feature points extracted from the panoramic image data of the parking area and the three-dimensional map points matched with them on the feature layer. The pose data of the vehicle in the feature layer comprises a second attitude quantity and a second position quantity from the vehicle body coordinate system to the feature map coordinate system.
The feature matching and positioning unit 632 determines the correlation between the matched three-dimensional map points in the feature layer of the parking area and the three quantities according to the image feature points extracted from the panoramic image data of the parking area and the second attitude quantity and second position quantity from the vehicle body coordinate system to the feature map coordinate system; it re-projects the three-dimensional map points in the feature layer onto the feature detection image to obtain re-projected pixel coordinates, and minimizes the error between the re-projected pixel coordinates and the matched image feature point coordinates, so as to obtain the second attitude quantity and second position quantity from the vehicle body coordinate system to the feature map coordinate system under the minimized error, thereby obtaining the pose data of the vehicle in the feature layer.
The prediction and update module 64 is configured to update and solve the pose of the vehicle in the map coordinate system by using the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer, with the mileage positioning data of the vehicle calculated in real time as a prediction input, and the process includes: acquiring a vehicle pose prediction result at the current moment through mileage positioning data of a vehicle calculated in real time; calculating the credibility of the vehicle pose prediction result at the current moment; generating correction parameters for correcting the vehicle pose prediction result at the current moment based on the credibility of the vehicle pose prediction result at the current moment; and carrying out fusion processing on pose data of the vehicle in the semantic layer or pose data of the vehicle in the feature layer by utilizing the correction parameters so as to update and correct a vehicle pose prediction result at the current moment and form a pose of the vehicle under a map coordinate system.
The prediction and update module 64 also applies a screening strategy to the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer.
The screening strategy comprises: receiving the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer according to the preset receiving frequency of that pose data; on the basis of the preset receiving frequency, further screening the observation data according to the following strategies: the pose data of the vehicle in the semantic layer generates a matching weight according to the matching distance during semantic layer matching, and when the matching weight is greater than the weight threshold, the pose data of the vehicle in the semantic layer passes the screening and is received; the number of matched feature points during feature layer matching is counted, and when the number of matched feature points of the current image is greater than the threshold, the pose data of the vehicle in the feature layer passes the screening and is received.
It should be noted that the division of the modules of the above system is merely a division of logical functions; in actual implementation they may be fully or partially integrated into one physical entity or physically separated. These modules may all be implemented in the form of software called by a processing element, all in hardware, or partly in software called by a processing element and partly in hardware. For example, the x module may be a separately established processing element, or may be integrated in a chip of the system; it may also be stored in the memory of the system in the form of program code, and a certain processing element of the system calls and executes the function of the x module. The implementation of the other modules is similar. All or part of these modules can be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software. The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (ASIC), one or more digital signal processors (DSP), one or more field programmable gate arrays (FPGA), etc. When one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke program code. These modules may also be integrated together and implemented in the form of a system-on-a-chip (SOC).
Example III
The present embodiment provides a vehicle positioning device, comprising: a processor, a memory, a transceiver, a communication interface, and/or a system bus. The memory and the communication interface are connected to the processor and the transceiver via the system bus and communicate with each other; the memory is used to store a computer program; the communication interface is used to communicate with other devices; and the processor and the transceiver are used to run the computer program so that the vehicle positioning device executes the steps of the vehicle positioning method described above.
The system bus mentioned above may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figures, but this does not mean that there is only one bus or one type of bus. The communication interface is used to implement communication between the database access device and other devices (such as a client, a read-write library, and a read-only library). The memory may include random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The protection scope of the vehicle positioning method of the present invention is not limited to the execution sequence of the steps listed in this embodiment; any solution implemented by adding, removing, or replacing steps of the prior art according to the principles of the present invention is included in the protection scope of the present invention.
The present invention also provides a vehicle positioning system, which can implement the vehicle positioning method of the present invention; however, the apparatus for implementing the vehicle positioning method of the present invention includes, but is not limited to, the structure of the vehicle positioning system listed in this embodiment. All structural modifications and substitutions of the prior art made according to the principles of the present invention are included in the protection scope of the present invention.
In summary, the vehicle positioning method, system, device, and computer-readable storage medium of the present invention have strong scene adaptability: only four low-cost surround-view fisheye cameras and an IMU are needed to acquire positioning data, avoiding common expensive sensors such as lidar, as well as the constraints that external environments impose on the performance of sensors such as RTK. The positioning accuracy meets the accuracy requirement of unmanned valet parking of a passenger vehicle in a parking lot, and satisfies the mass-production requirements of an autonomous valet parking system for passenger vehicles. Therefore, the present invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention, and are not intended to limit the invention. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.
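To make the prediction/update idea of the method concrete, the following is a minimal, illustrative predict-and-correct fusion: odometry supplies the prediction, a map-matching pose supplies the observation, and a correction gain derived from the prediction's credibility blends the two. The gain rule and function names here are simple stand-ins, not the formulas of the embodiment:

```python
# Illustrative predict/correct fusion in the spirit of the method: the
# odometry-predicted pose is corrected toward the map-matching observation
# by a gain derived from the prediction's credibility. The linear gain rule
# below is a hypothetical stand-in for the patent's correction parameters.

import math


def fuse(predicted, observed, credibility):
    """Blend a predicted and an observed (x, y, yaw) pose.

    credibility in [0, 1]: 1.0 trusts the odometry prediction fully,
    0.0 trusts the map-matching observation fully.
    """
    gain = 1.0 - credibility  # correction parameter (illustrative)
    x = predicted[0] + gain * (observed[0] - predicted[0])
    y = predicted[1] + gain * (observed[1] - predicted[1])
    # wrap the yaw difference into (-pi, pi] before blending
    dyaw = math.atan2(math.sin(observed[2] - predicted[2]),
                      math.cos(observed[2] - predicted[2]))
    yaw = predicted[2] + gain * dyaw
    return (x, y, yaw)
```

In a full system this role is typically played by a Kalman-style filter, where the credibility corresponds to the predicted covariance rather than a scalar.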
Claims (9)
1. A method of positioning a vehicle, comprising:
acquiring current geographic information of a vehicle, and searching and determining map data of a parking area to be accessed by the vehicle through the current geographic information of the vehicle;
when the vehicle enters the parking area, acquiring panoramic image data of the parking area;
extracting feature information from the panoramic image data of the parking area, and matching and positioning the extracted feature information with the map data of the parking area to obtain pose data of the vehicle in the map data of the parking area, wherein the map data of the parking area comprises a semantic layer of the parking area and a feature layer of the parking area, and wherein the step of matching and positioning the extracted feature information with the map data of the parking area to obtain the pose data of the vehicle in the map data of the parking area comprises:
extracting semantic information from the panoramic image data of the parking area, and performing semantic matching and positioning on the extracted semantic information and the semantic layer of the parking area to obtain pose data of the vehicle in the semantic layer; and/or extracting image feature points from the panoramic image data of the parking area, and performing feature matching and positioning on the extracted image feature points and the feature layer of the parking area to obtain pose data of the vehicle in the feature layer, wherein the step of performing feature matching and positioning on the extracted image feature points and the feature layer of the parking area to obtain the pose data of the vehicle in the feature layer comprises: sequentially calculating the matching degree between the image feature points extracted from the panoramic image data of the parking area and the three-dimensional map points in the feature layer of the parking area; regarding the three-dimensional map point with the maximum calculated matching degree as the known three-dimensional map point in the feature layer that matches the corresponding image feature point extracted from the panoramic image data of the parking area; and acquiring the pose data of the vehicle in the feature layer based on the image feature points extracted from the panoramic image data of the parking area and the matched map points on the feature layer; wherein the pose data of the vehicle in the feature layer comprises a second attitude quantity and a second position quantity from the vehicle body coordinate system to the feature map coordinate system;
and taking the mileage positioning data of the vehicle calculated in real time as a prediction input, and updating and solving the pose of the vehicle in the map coordinate system by utilizing the pose data of the vehicle in the two layers of the map data of the parking area.
2. The method for locating a vehicle according to claim 1, wherein the step of extracting semantic information from the panoramic image data of the parking area, and performing semantic matching location on the extracted semantic information and the semantic layer of the parking area to obtain pose data of the vehicle in the semantic layer comprises:
Detecting pixel points of each semantic element of the vehicle under a vehicle body coordinate system at the current moment from panoramic image data of the parking area;
searching the position information of the vehicle at the current moment through the mileage positioning data of the vehicle calculated in real time;
According to the position information of the vehicle at the current moment, projecting the pixel points of each semantic element onto a semantic layer of a parking area, and searching semantic map pixel points closest to the pixel points of each semantic element on the semantic layer;
acquiring pose data of the vehicle in the semantic layer based on the pixel points of each semantic element of the vehicle under the vehicle body coordinate system and the matched map pixel points on the semantic layer; wherein the pose data of the vehicle in the semantic layer comprises a first attitude quantity and a first position quantity from the vehicle body coordinate system to the semantic map coordinate system.
3. The method of claim 2, wherein the step of obtaining pose data of the vehicle in the semantic layer comprises:
based on the pixel point coordinates of each semantic element of the vehicle under the vehicle body coordinate system, the first attitude quantity from the vehicle body coordinate system to the semantic map coordinate system, and the first position quantity, determining the correlation between the map pixel points matched with the pixel points of each semantic element and the foregoing three quantities;
projecting the semantic element pixel points extracted from the semantic image into the semantic map to obtain projected pixel point coordinates, and minimizing the error between the projected pixel point coordinates and the map pixel point coordinates, so as to obtain the first attitude quantity and the first position quantity from the vehicle body coordinate system to the map coordinate system under the minimized error.
4. The positioning method of a vehicle according to claim 3, wherein the step of acquiring the second attitude quantity and the second position quantity from the vehicle body coordinate system to the feature map coordinate system comprises:
according to the image feature points extracted from the panoramic image data of the parking area, the second attitude quantity from the vehicle body coordinate system to the feature map coordinate system, and the second position quantity, determining the correlation between the matched three-dimensional map points in the feature layer of the parking area and the foregoing three quantities;
and re-projecting the three-dimensional map points in the feature layer onto the feature detection image to obtain re-projected pixel coordinates, and minimizing the error between the re-projected pixel coordinates and the matched image feature point coordinates, so as to obtain the second attitude quantity and the second position quantity from the vehicle body coordinate system to the feature map coordinate system under the minimized error.
5. The method according to claim 4, wherein the step of taking the mileage positioning data of the vehicle calculated in real time as a prediction input, and updating and solving the pose of the vehicle in the map coordinate system using the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer comprises:
acquiring a vehicle pose prediction result at the current moment through mileage positioning data of a vehicle calculated in real time;
calculating the credibility of the vehicle pose prediction result at the current moment;
generating correction parameters for correcting the vehicle pose prediction result at the current moment based on the credibility of the vehicle pose prediction result at the current moment;
and carrying out fusion processing on pose data of the vehicle in the semantic layer or pose data of the vehicle in the feature layer by using the correction parameters so as to correct a vehicle pose prediction result at the current moment and form a pose of the vehicle under a map coordinate system.
6. The method according to claim 5, wherein the step of performing fusion processing on pose data of the vehicle in the semantic layer or pose data of the vehicle in the feature layer using the correction parameters further comprises:
According to the preset receiving frequency of the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer, receiving the pose data of the vehicle in the semantic layer or the pose data of the vehicle in the feature layer;
On the basis of the preset receiving frequency, the pose data of the two layers are received according to the following strategies:
for the pose data of the vehicle in the semantic layer, generating a matching weight according to the matching distances during semantic-layer matching;
receiving the pose data of the vehicle in the semantic layer when the matching weight is greater than a weight threshold;
for the pose data of the vehicle in the feature layer, counting the number of matched feature points during feature-layer matching; and receiving the pose data of the vehicle in the feature layer when the number of matched feature points is greater than a number threshold.
7. A positioning system of a vehicle, comprising:
The first acquisition module is used for acquiring the current geographic information of the vehicle, searching and determining map data of a parking area to be accessed by the vehicle through the current geographic information of the vehicle;
the second acquisition module is used for acquiring panoramic image data of the parking area when the vehicle enters the parking area;
the matching module is used for extracting feature information from the panoramic image data of the parking area, and matching and positioning the extracted feature information with the map data of the parking area to obtain pose data of the vehicle in the map data of the parking area, wherein the map data of the parking area comprises a semantic layer of the parking area and a feature layer of the parking area, and wherein matching and positioning the extracted feature information with the map data of the parking area to obtain the pose data of the vehicle in the map data of the parking area comprises:
extracting semantic information from the panoramic image data of the parking area, and performing semantic matching and positioning on the extracted semantic information and the semantic layer of the parking area to obtain pose data of the vehicle in the semantic layer; and/or extracting image feature points from the panoramic image data of the parking area, and performing feature matching and positioning on the extracted image feature points and the feature layer of the parking area to obtain pose data of the vehicle in the feature layer, wherein performing feature matching and positioning on the extracted image feature points and the feature layer of the parking area to obtain the pose data of the vehicle in the feature layer comprises: sequentially calculating the matching degree between the image feature points extracted from the panoramic image data of the parking area and the three-dimensional map points in the feature layer of the parking area; regarding the three-dimensional map point with the maximum calculated matching degree as the known three-dimensional map point in the feature layer that matches the corresponding image feature point extracted from the panoramic image data of the parking area; and acquiring the pose data of the vehicle in the feature layer based on the image feature points extracted from the panoramic image data of the parking area and the matched map points on the feature layer; wherein the pose data of the vehicle in the feature layer comprises a second attitude quantity and a second position quantity from the vehicle body coordinate system to the feature map coordinate system;
and the prediction and update module is used for taking the mileage positioning data of the vehicle calculated in real time as a prediction input, and updating and solving the pose of the vehicle in the map coordinate system by utilizing the pose data of the vehicle in the map data of the parking area.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method of positioning a vehicle according to any one of claims 1 to 6.
9. A positioning apparatus of a vehicle, characterized by comprising: a processor and a memory;
the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory, to cause the positioning apparatus of the vehicle to execute the positioning method of the vehicle according to any one of claims 1 to 6.
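The error-minimization steps in claims 3 and 4 leave the solver unspecified. As one hedged illustration only, a closed-form 2D least-squares rigid fit (Kabsch/Umeyama restricted to the plane) can recover an attitude quantity (yaw) and a position quantity (translation) from matched body-frame points and map points; the function below is a generic stand-in, not the optimization of the patent:

```python
# Illustrative closed-form 2D alignment: given semantic-element points in the
# vehicle body frame and their matched map points, find the rotation (attitude
# quantity) and translation (position quantity) minimizing the projection
# error. This is the standard 2D Kabsch/Umeyama least-squares fit, used here
# as a stand-in for the unspecified solver of the claims.

import math


def fit_pose_2d(body_pts, map_pts):
    """Least-squares rigid transform mapping body_pts onto map_pts.

    Returns (yaw, tx, ty) such that map ≈ R(yaw) @ body + t.
    """
    n = len(body_pts)
    # centroids of both point sets
    bx = sum(p[0] for p in body_pts) / n
    by = sum(p[1] for p in body_pts) / n
    mx = sum(p[0] for p in map_pts) / n
    my = sum(p[1] for p in map_pts) / n
    # cross-covariance terms of the centered point sets
    sxx = sxy = syx = syy = 0.0
    for (px, py), (qx, qy) in zip(body_pts, map_pts):
        sxx += (px - bx) * (qx - mx)
        sxy += (px - bx) * (qy - my)
        syx += (py - by) * (qx - mx)
        syy += (py - by) * (qy - my)
    # optimal rotation, then the translation that aligns the centroids
    yaw = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(yaw), math.sin(yaw)
    tx = mx - (c * bx - s * by)
    ty = my - (s * bx + c * by)
    return yaw, tx, ty
```

In practice the claims' reprojection variant would minimize pixel errors under the camera projection, which generally requires an iterative solver (e.g. Gauss-Newton) rather than this closed form.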
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111481005.7A CN114111774B (en) | 2021-12-06 | 2021-12-06 | Vehicle positioning method, system, equipment and computer readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114111774A CN114111774A (en) | 2022-03-01 |
| CN114111774B (en) | 2024-04-16 |
Family
ID=80367150
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111481005.7A Active CN114111774B (en) | 2021-12-06 | 2021-12-06 | Vehicle positioning method, system, equipment and computer readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114111774B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114427863A (en) * | 2022-04-01 | 2022-05-03 | 天津天瞳威势电子科技有限公司 | Vehicle positioning method and system, automatic parking method and system, and storage medium |
| CN114684149B (en) * | 2022-04-27 | 2025-01-10 | 广州文远知行科技有限公司 | Parking evaluation method, device, equipment and storage medium |
| CN114659531B (en) * | 2022-05-16 | 2022-09-23 | 苏州挚途科技有限公司 | Map positioning method and device of vehicle and electronic equipment |
| CN116358573B (en) * | 2023-05-31 | 2023-08-29 | 小米汽车科技有限公司 | Map building method, map building device, storage medium and vehicle |
| CN119022922A (en) * | 2024-10-29 | 2024-11-26 | 青岛英飞凌电子技术有限公司 | Combined positioning method and device between buildings |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018097356A1 (en) * | 2016-11-24 | 2018-05-31 | 충북대학교 산학협력단 | Vehicle parking assist device and method therefor |
| CN110147094A (en) * | 2018-11-08 | 2019-08-20 | 北京初速度科技有限公司 | A kind of vehicle positioning method and car-mounted terminal based on vehicle-mounted viewing system |
| CN110580325A (en) * | 2019-08-28 | 2019-12-17 | 武汉大学 | A method and system for multi-source fusion of ubiquitous positioning signals |
| CN111486840A (en) * | 2020-06-28 | 2020-08-04 | 北京云迹科技有限公司 | Robot positioning method and device, robot and readable storage medium |
| CN111551186A (en) * | 2019-11-29 | 2020-08-18 | 福瑞泰克智能系统有限公司 | Vehicle real-time positioning method and system and vehicle |
| CN111968132A (en) * | 2020-07-28 | 2020-11-20 | 哈尔滨工业大学 | Panoramic vision-based relative pose calculation method for wireless charging alignment |
| CN112102646A (en) * | 2019-06-17 | 2020-12-18 | 北京初速度科技有限公司 | Parking lot entrance positioning method and device in parking positioning and vehicle-mounted terminal |
| CN112304302A (en) * | 2019-07-26 | 2021-02-02 | 北京初速度科技有限公司 | Multi-scene high-precision vehicle positioning method and device and vehicle-mounted terminal |
| CN113220818A (en) * | 2021-05-27 | 2021-08-06 | 南昌智能新能源汽车研究院 | Automatic mapping and high-precision positioning method for parking lot |
- 2021-12-06: CN application CN202111481005.7A filed; granted as patent CN114111774B (status: Active)
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018097356A1 (en) * | 2016-11-24 | 2018-05-31 | 충북대학교 산학협력단 | Vehicle parking assist device and method therefor |
| CN110147094A (en) * | 2018-11-08 | 2019-08-20 | 北京初速度科技有限公司 | A kind of vehicle positioning method and car-mounted terminal based on vehicle-mounted viewing system |
| CN112102646A (en) * | 2019-06-17 | 2020-12-18 | 北京初速度科技有限公司 | Parking lot entrance positioning method and device in parking positioning and vehicle-mounted terminal |
| CN112304302A (en) * | 2019-07-26 | 2021-02-02 | 北京初速度科技有限公司 | Multi-scene high-precision vehicle positioning method and device and vehicle-mounted terminal |
| CN110580325A (en) * | 2019-08-28 | 2019-12-17 | 武汉大学 | A method and system for multi-source fusion of ubiquitous positioning signals |
| CN111551186A (en) * | 2019-11-29 | 2020-08-18 | 福瑞泰克智能系统有限公司 | Vehicle real-time positioning method and system and vehicle |
| CN111486840A (en) * | 2020-06-28 | 2020-08-04 | 北京云迹科技有限公司 | Robot positioning method and device, robot and readable storage medium |
| CN111968132A (en) * | 2020-07-28 | 2020-11-20 | 哈尔滨工业大学 | Panoramic vision-based relative pose calculation method for wireless charging alignment |
| CN113220818A (en) * | 2021-05-27 | 2021-08-06 | 南昌智能新能源汽车研究院 | Automatic mapping and high-precision positioning method for parking lot |
Non-Patent Citations (2)
| Title |
|---|
| Registration of point clouds and panoramic images using sphere-center projection and line features; Yue Mingyu; Kang Zhizhong; Remote Sensing Information; 2017-02-15 (No. 01); 13-19 * |
| Adaptive pose tracking algorithm for mobile robots in dynamic, highly occluded environments; Wang Yong; Chen Weidong; Wang Jingchuan; Xiao Peng; Robot (No. 01); 114-123 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114111774A (en) | 2022-03-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN114111774B (en) | Vehicle positioning method, system, equipment and computer readable storage medium | |
| EP3875985B1 (en) | Method, apparatus, computing device and computer-readable storage medium for positioning | |
| CN112639502B (en) | Robot pose estimation | |
| CN110388924B (en) | System and method for radar-based vehicle positioning in connection with automatic navigation | |
| Suhr et al. | Sensor fusion-based low-cost vehicle localization system for complex urban environments | |
| CN115077541B (en) | Positioning method, device, electronic device and storage medium for autonomous driving vehicle | |
| CN109211251B (en) | Instant positioning and map construction method based on laser and two-dimensional code fusion | |
| CN109435955B (en) | Performance evaluation method, device and equipment for automatic driving system and storage medium | |
| CN114264301B (en) | Vehicle-mounted multi-sensor fusion positioning method, device, chip and terminal | |
| US11158065B2 (en) | Localization of a mobile unit by means of a multi hypothesis kalman filter method | |
| CN113822944B (en) | External parameter calibration method and device, electronic equipment and storage medium | |
| US10706617B2 (en) | 3D vehicle localizing using geoarcs | |
| US12085403B2 (en) | Vehicle localisation | |
| EP3447729A1 (en) | 2d vehicle localizing using geoarcs | |
| CN114677663B (en) | Vehicle positioning method, device, electronic device, and computer-readable storage medium | |
| CN114325634A (en) | Method for extracting passable area in high-robustness field environment based on laser radar | |
| Sung et al. | What if there was no revisit? Large-scale graph-based SLAM with traffic sign detection in an HD map using LiDAR inertial odometry | |
| El Farnane Abdelhafid et al. | Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars | |
| CN114252897B (en) | Positioning method, device, electronic device and computer storage medium | |
| JP2025517575A (en) | SYSTEM AND METHOD FOR CONTROLLING A DEVICE USING A COMPOSITE PROBABILITY FILTERS - Patent application | |
| CN113390422B (en) | Automobile positioning method and device and computer storage medium | |
| CN115200569A (en) | Reliable high-precision positioning method, device, equipment and medium | |
| CN115512051A (en) | A method and device for constructing a point cloud map | |
| CN118405152B (en) | Target positioning method, device, electronic device and storage medium | |
| CN114323020B (en) | Vehicle positioning method, system, equipment and computer readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||