CN119437202B - An automatic indoor map construction method integrating multi-source trajectories and mobile phone images - Google Patents
- Publication number
- CN119437202B (application CN202411570254.7A)
- Authority
- CN
- China
- Prior art keywords
- indoor
- map
- image
- mobile phone
- plane structure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Navigation (AREA)
Abstract
The invention discloses an automatic indoor map construction method integrating crowd-sourced trajectories and mobile phone images, relating to the technical field of indoor map construction. The method comprises the following steps: S1, collecting corridor images, indoor images and multi-sensor data with a smartphone; S2, constructing an indoor navigation map; S3, extracting the indoor plane structure; S4, fusing and optimizing the navigation path and the indoor plane structure. During crowd-sourced trajectory collection, indoor image acquisition is guided in a standardized manner and the plane-structure features in the images are extracted, further enriching the spatial information of the map. Whole and part, frame and detail complement each other: on the one hand, the inefficiency of crowd-sourced data collection is effectively alleviated; on the other hand, the low-cost data acquisition mode is retained while data quality is guaranteed and mapping accuracy is improved, so that the availability of location-based services is greatly enhanced.
Description
Technical Field
The invention relates to the technical field of indoor map construction, in particular to an automatic indoor map construction method integrating a crowd source track and a mobile phone image.
Background
With the rapid advance of urban construction, indoor space keeps expanding, research on indoor spatial-information applications is drawing increasing attention from academia and industry, and the market for indoor applications centred on location services is growing quickly. As the data foundation of indoor location applications, the indoor two-dimensional map suffers from data scarcity, low mapping accuracy and low efficiency owing to complex signal environments, spatial layouts and topological variability; this has become a main factor limiting the development of indoor Location-Based Services (LBS). Traditional indoor two-dimensional map construction can be divided, by data acquisition mode, into manual drawing and professional-sensor approaches. Manual drawing relies on basic instruments such as tape measures, steel rules, hand-held laser rangefinders and total stations, together with drawing software such as AutoCAD. While map accuracy can be effectively ensured, manual modelling requires a great deal of time and effort. Professional-sensor methods collect indoor spatial information with a laser scanner or camera to generate the indoor two-dimensional map; however, professional surveying equipment demands skilled operators and has high hardware costs, making it difficult to popularize in applications.
At present, smartphone hardware evolves extremely quickly. With built-in sensors (accelerometer, gyroscope, magnetometer, barometer, GPS, etc.), a smartphone not only has strong computing power but can also acquire multi-source data (indoor movement tracks, behavior data, images, etc.), making it an ideal platform for indoor mapping. Scholars at home and abroad have explored various indoor mapping methods for different data sources. Among them, smartphone indoor mapping based on SLAM (Simultaneous Localization and Mapping) has gradually matured, and much work has advanced SLAM technology in mapping accuracy, scale and reliability. However, such methods require the user to keep the camera aimed at the surroundings at all times, which does not match user habits; the algorithms depend heavily on image feature points in the scene, mismatches are widespread, and the demands on the phone's memory and computing power are high. Smartphone indoor mapping based on crowd-sourced data, in turn, effectively lowers the cost and technical threshold of data acquisition, but lacks effective management of the time, efficiency and quality of collection. Owing to the non-professional nature of the collectors, mapping accuracy is difficult to guarantee, and considerable effort is needed for data screening and effective information extraction.
Disclosure of Invention
The invention aims to provide an automatic indoor map construction method integrating a crowd-sourced trajectory and a mobile phone image, which realizes accurate indoor map construction using crowd-sourced trajectory data collected by a smartphone platform and indoor images acquired under formulated photographing rules, providing a low-cost, high-precision and efficient composition method for indoor plane map construction.
In order to achieve the above purpose, the invention provides an indoor map automatic construction method integrating a crowd source track and a mobile phone image, which comprises the following steps:
S1, acquiring corridor images, indoor images and multi-sensor data with a smartphone;
s2, constructing an indoor navigation map;
S3, extracting an indoor plane structure;
and S4, fusing and optimizing the navigation path and the indoor plane structure.
Preferably, S2 comprises the steps of:
S21, obtaining crowd source track data through pedestrian dead reckoning of multi-sensor data acquired by a smart phone platform;
S22, obtaining behavioral landmarks through a behavior recognition algorithm, and clustering the behavioral landmarks through a clustering algorithm based on Wi-Fi fingerprints;
S23, calculating the relative distances between the behavioral landmarks to obtain an indoor navigation map.
Preferably, S3 comprises the steps of:
S31, inputting an indoor space monocular image acquired by the smart phone, and extracting a three-dimensional layout structure of the monocular image through an indoor three-dimensional scene understanding network model;
S32, mapping the indoor 3D scene into a 2D image, and performing projection transformation;
S33, recovering the spatial structure proportion and obtaining the indoor plane structure.
Preferably, S32 comprises the steps of:
S321, extracting camera parameters based on vanishing points using the VPBC technique, wherein the principal point of the camera is assumed to lie at the image centre and the camera intrinsic matrix K has the form:
K = [f, 0, ox; 0, f, oy; 0, 0, 1];
f is the focal length, which can be solved from the vanishing points;
S322, setting the coordinates of the vanishing points in the image coordinate system as vp0(x_i0, y_i0), vp1(x_i1, y_i1), vp2(x_i2, y_i2), taking the centre of gravity of the triangle formed by the three vanishing points as the intersection p(ox, oy) of the camera optical axis and the imaging plane, and setting the coordinate of vp0 in the camera coordinate system as (x_c0, y_c0, f), where x_c0 = x_i0 − ox and y_c0 = −(y_i0 − oy), the other two vanishing points being obtained in the same way; using the orthogonality between the vanishing-point directions, the focal length and rotation matrix are obtained respectively as:
f = √(−(x_c0·x_c1 + y_c0·y_c1)); R = [v0/‖v0‖, v1/‖v1‖, v2/‖v2‖], where v_k = (x_ck, y_ck, f);
S323, calculating the projective transformation matrix H_ij from the camera parameters and applying an "orthographic projection" correction to the plane determined by vanishing points vp_i and vp_j, where H_ij is solved as:
H_ij = K·R_ij·K⁻¹;
Reprojecting the 4 intersection points with the transformation matrices H_01 and H_02 of the front wall and the floor gives the distortion-free coordinates of each point in the front view.
Preferably, the real proportional relation of the length, width and height of the indoor space is as follows:
preferably, S4 comprises the steps of:
S41, corridor construction, namely position-matching the corrected track route with the corridor plane structure extracted from the mobile phone images to realize map construction of the corridor area;
S42, matching the room plane structure extracted from the image with inertial navigation data to draw the room structure in the indoor map.
Preferably, map accuracy is further optimized and guaranteed by constructing an energy function; the energy equation to be constructed is as follows:
min_{x = {x_f, R, t}} E_l(x) + E_c(x) + E_b(x);
where E_l(x) represents the complexity of the layout, E_c(x) the closeness, and E_b(x) the boundary similarity between adjacent segments; {x_f, R, t} denotes the rotation and displacement of all corridor paths and room layouts at the time of connection.
Preferably, the conditions limiting the uniqueness of the hallway and room layout occurrences are:
Preferably, image acquisition follows these rules: during indoor corridor shooting, all fields of view at an inflection point are photographed in clockwise order while turning; during indoor room shooting, the room is photographed from its two sides toward the opposite sides.
Therefore, the indoor map automatic construction method integrating the crowd source track and the mobile phone image has the following beneficial effects:
(1) Images shot by a mobile phone serve as the data source; neither reconstruction of the geometric structure of an indoor three-dimensional model nor professional surveying equipment is required. The smartphone captures monocular images of the indoor scene, and the plane structure of indoor map elements is obtained through visual scene understanding technology, effectively improving the realism and accuracy of indoor map expression.
(2) Using only the different types of data collected by the smartphone, high-precision indoor map construction is realized by integrating crowd-sourced trajectories with standardized mobile phone image acquisition and a new geometric feature extraction method. The indoor spatial characteristics of whole and part, frame and detail, fully complement each other; the user is not required to walk through every position in the indoor environment, effectively saving the labor cost of indoor map drawing and reducing time and computation.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a relative distance estimation according to the present invention;
FIG. 3 is a graph showing the results of step detection and angle measurement between behavioral landmarks of the present invention;
FIG. 4 shows the corridor image acquisition rules of the present invention, wherein a is a right-angle turn, b is a four-way (cross) corner, and c is a T-shaped corner;
FIG. 5 shows the room image acquisition rule of the present invention.
Detailed Description
The technical scheme of the invention is further described below through the attached drawings and the embodiments.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The terms "first," "second," and the like, as used herein, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
Examples
Referring to figs. 1-5, the invention provides an automatic indoor map construction method integrating a crowd-sourced trajectory and a mobile phone image. Using only crowd-sourced trajectory data collected by the smartphone platform and indoor images acquired under the formulated photographing rules, accurate indoor map construction is realized through three modules: indoor navigation map construction, indoor plane structure extraction, and indoor map construction fusing the navigation path with the plane structure. This provides a low-cost, high-precision and efficient composition method for indoor plane map construction.
An automatic indoor map construction method integrating a crowd-sourced track and a mobile phone image comprises the following steps:
S1, collecting corridor images, indoor images and multi-sensor data with a smartphone.
S2, constructing an indoor navigation map.
S21, crowd-sourced trajectory data are obtained by pedestrian dead reckoning on the multi-sensor data collected by the smartphone platform. Many behavioral landmarks exist in the crowd-sourced data, part of which are collected at the same node; to construct the indoor map, the behavioral landmarks collected at the same node must first be clustered.
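The dead-reckoning step above can be sketched as follows. This is an illustrative fragment, not part of the patent disclosure; the function name and the assumption of pre-detected step lengths and headings are hypothetical.

```python
import math

def pdr_track(step_lengths, headings_deg, start=(0.0, 0.0)):
    """Pedestrian dead reckoning: accumulate per-step displacement vectors.

    step_lengths: metres per detected step (e.g. from accelerometer peaks).
    headings_deg: heading at each step, degrees clockwise from north
                  (e.g. from gyroscope/compass fusion).
    Returns the list of (x_east, y_north) positions after each step.
    """
    x, y = start
    track = []
    for L, h in zip(step_lengths, headings_deg):
        rad = math.radians(h)
        x += L * math.sin(rad)   # east component of the step
        y += L * math.cos(rad)   # north component of the step
        track.append((x, y))
    return track
```

Each trajectory point is simply the running sum of step vectors, which is why heading drift accumulates and the later landmark-based corrections matter.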
S22, obtaining a behavior landmark through a behavior recognition algorithm, and clustering the behavior landmark through a clustering algorithm based on Wi-Fi fingerprints.
When a new track is obtained, behavioral landmarks are extracted with a behavior recognition algorithm, and the Wi-Fi fingerprint at the moment each behavior occurs is recorded as that landmark's feature. A time-ordered behavioral landmark sequence NAL_1, NAL_2, ..., NAL_m is obtained, containing m landmarks (NAL denotes a behavioral landmark extracted from the newly uploaded track). The m landmarks are clustered using a Wi-Fi-fingerprint-based clustering algorithm; among them there may be n landmarks sharing the same Wi-Fi fingerprint characteristics, where 0 ≤ n ≤ m.
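The patent does not specify the fingerprint distance metric or clustering scheme; a minimal sketch under the common assumption of Euclidean distance on RSSI vectors with greedy threshold clustering might look like this (function names and threshold are illustrative):

```python
def wifi_distance(fp_a, fp_b, missing=-100.0):
    """Euclidean distance between two RSSI fingerprints (dicts AP -> dBm).
    APs unseen in one scan are filled with a floor value."""
    aps = set(fp_a) | set(fp_b)
    return sum((fp_a.get(ap, missing) - fp_b.get(ap, missing)) ** 2
               for ap in aps) ** 0.5

def cluster_landmarks(fingerprints, threshold=10.0):
    """Greedy single-pass clustering: assign each landmark to the first
    cluster whose representative fingerprint is within the threshold."""
    clusters = []  # list of (representative_fp, member_indices)
    for i, fp in enumerate(fingerprints):
        for rep, members in clusters:
            if wifi_distance(fp, rep) < threshold:
                members.append(i)
                break
        else:
            clusters.append((fp, [i]))
    return [members for _, members in clusters]
```

Landmarks recorded at the same physical node see similar access points at similar signal strengths, so they fall into one cluster even when their trajectory coordinates drift apart.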
If the track is the first track, the behavioral landmarks it contains are added to the node database in sequence. If it is not the first track, its behavioral landmarks are matched against the existing node database through the Wi-Fi-fingerprint-based clustering algorithm and merged into the corresponding nodes.
S23, calculating the relative distance between the behavioral landmarks to obtain an indoor navigation map.
After clustering, all behavioral landmarks fall into different classes, each class representing one node of the indoor map; when two nodes are directly adjacent, the distance between them can be obtained by pedestrian dead reckoning. Fig. 2 shows a track acquired by one smartphone (obtained by pedestrian dead reckoning), where A, B, C and D are four behavioral landmarks (turns). AB, BC and CD connect adjacent landmarks (known from the time order of landmark detection), so their relative distances can be obtained directly by step detection and step-length estimation. For non-adjacent landmarks such as A and C, the lengths of AB and BC and the angle at B are needed to calculate the relative distance. The distance and angle information between behavioral landmarks is obtained from the inertial data of the smartphone: distances from step detection on accelerometer data, and angles from the gyroscope and electronic compass. Fig. 3 shows the step-detection result and the heading-change information between behavioral landmarks.
Based on the behavioral landmark clustering and the relative distance calculation, the nodes of the indoor navigation map and the relative distances between all of them are obtained, forming a relative distance matrix. Using the relative distances between nodes as a dissimilarity measure, multidimensional scaling computes their relative spatial relationships, and the indoor navigation map is constructed.
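The multidimensional scaling step can be sketched with classical MDS (double centering plus eigendecomposition). This is an illustrative implementation, not the patent's; it recovers node coordinates, up to rotation and reflection, from the relative distance matrix:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover node coordinates from a pairwise distance matrix D
    via classical multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]            # keep the largest ones
    # clamp tiny negative eigenvalues caused by measurement noise
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The embedding preserves inter-node distances, so a subsequent rigid alignment (rotation/reflection) suffices to place the node graph in map coordinates.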
S3, extracting the indoor plane structure.
S31, inputting an indoor space monocular image acquired by the smart phone, and extracting a three-dimensional layout structure of the monocular image through an indoor three-dimensional scene understanding network model.
An initial layout frame is learned by a convolutional neural network (Convolutional Neural Network, CNN) and fed as a feature vector into a structured support vector machine, improving the extraction accuracy of indoor three-dimensional spatial features. CNN-based indoor layout estimation trains directly on the room corner points (keypoints) and room types, estimates the ordered set of layout keypoints, and connects them in a specific order to obtain the indoor layout frame.
S32, mapping the indoor 3D scene into a 2D image, and performing projection transformation.
Because the length, width and height of the three-dimensional spatial layout extracted from the mobile phone image are expressed in pixel distances, the image is distorted to varying degrees when the indoor 3D scene is mapped onto the 2D image, distorting the proportions of the spatial structure. The influence of imaging distortion must therefore be eliminated to restore the real proportional relation among the length, width and height of the indoor space.
Camera parameters are extracted based on vanishing points using the VPBC technique. The principal point of the camera is assumed to lie at the image centre, and the camera intrinsic matrix K has the form:
K = [f, 0, ox; 0, f, oy; 0, 0, 1];
f is the focal length, which can be solved from the vanishing points. Let the coordinates of the vanishing points in the image coordinate system be vp0(x_i0, y_i0), vp1(x_i1, y_i1), vp2(x_i2, y_i2), and take the centre of gravity of the triangle they form as the intersection p(ox, oy) of the camera optical axis and the imaging plane. The coordinate of vp0 in the camera coordinate system is (x_c0, y_c0, f), where x_c0 = x_i0 − ox and y_c0 = −(y_i0 − oy); the other two vanishing points are treated in the same way. Using the orthogonality between the vanishing-point directions, the focal length and the rotation matrix are obtained respectively as:
f = √(−(x_c0·x_c1 + y_c0·y_c1)); R = [v0/‖v0‖, v1/‖v1‖, v2/‖v2‖], where v_k = (x_ck, y_ck, f);
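The focal-length and rotation recovery just described can be sketched as follows. This is an illustrative implementation under the stated assumptions (principal point at the image centre, three mutually orthogonal vanishing directions); the function name is hypothetical, and the recovered axis directions carry an inherent sign ambiguity:

```python
import numpy as np

def camera_from_vps(vp0, vp1, vp2, principal):
    """Recover focal length and rotation from three orthogonal vanishing
    points (image coordinates) and the principal point. Returns (f, R),
    where R's columns are the three scene axes in the camera frame."""
    ox, oy = principal
    # centre the vanishing points; image y grows downward, hence the flip
    pts = [(x - ox, -(y - oy)) for x, y in (vp0, vp1, vp2)]
    (x0, y0), (x1, y1), _ = pts
    # orthogonality of directions (x0, y0, f) and (x1, y1, f):
    #   x0*x1 + y0*y1 + f^2 = 0
    f = np.sqrt(-(x0 * x1 + y0 * y1))
    cols = []
    for x, y in pts:
        v = np.array([x, y, f])
        cols.append(v / np.linalg.norm(v))   # unit scene-axis direction
    return f, np.column_stack(cols)
```

Because the three directions are mutually orthogonal, the resulting matrix is orthonormal by construction, which is what makes the subsequent H_ij = K·R_ij·K⁻¹ rectification well posed.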
Using the camera parameters, the projective transformation matrix H_ij is calculated, and the plane determined by vanishing points vp_i and vp_j is given an "orthographic projection" correction; H_ij is solved as:
H_ij = K·R_ij·K⁻¹;
Reprojecting the 4 intersection points with the transformation matrices H_01 and H_02 of the front wall and the floor gives the distortion-free coordinates of each point in the front view.
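Applying H_ij = K·R·K⁻¹ to the intersection points is an ordinary homogeneous projective transform; a minimal sketch (illustrative, with a hypothetical function name) is:

```python
import numpy as np

def rectify_points(points, K, R):
    """Apply the orthographic-correction homography H = K @ R @ inv(K)
    to 2-D image points, removing the perspective distortion of the
    plane defined by the two corresponding vanishing points."""
    H = K @ R @ np.linalg.inv(K)
    out = []
    for x, y in points:
        p = H @ np.array([x, y, 1.0])   # homogeneous transform
        out.append((p[0] / p[2], p[1] / p[2]))
    return out
```

With R equal to the identity the homography is the identity, so the correction is driven entirely by the rotation recovered from the vanishing points.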
S33, recovering the space structure proportion and obtaining the indoor plane structure.
The relative lengths of the length, width and height are calculated from the re-projected coordinates, and finally the real proportional relation of the length, width and height of the indoor space is solved:
Taking the storey spacing of the indoor rooms as a mean value h, the length, width and height extracted from the image are converted into ratios with the building height h as reference, ensuring the scale consistency of the map.
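The scale-recovery step reduces to a single ratio against the common storey-height reference. A minimal sketch (illustrative names; the 3 m storey height is an assumed example, not from the patent):

```python
def to_metric(length_px, width_px, height_px, storey_height_m=3.0):
    """Convert pixel-ratio room dimensions to metres, using the assumed
    uniform floor-to-floor height as the common reference scale."""
    scale = storey_height_m / height_px   # metres per pixel unit
    return length_px * scale, width_px * scale, storey_height_m
```

Because every room is scaled against the same reference height, rooms extracted from different photographs end up mutually consistent in scale.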
And S4, fusing and optimizing the navigation path and the indoor plane structure.
S41, constructing corridors. First, the image acquisition process must follow the rule that, while turning, all fields of view at an inflection point are photographed in clockwise order. As shown in fig. 4, when a user passes a corner, a picture is taken along the corridor in one direction, and the user then rotates to take a picture in the other direction.
The gyroscope and accelerometer record angle and acceleration. A fluctuation in the gyroscope reading (up or down) indicates that the user is turning left or right at that location. Between the two fluctuations the user photographs the corridor, and the user's position at the corner can be resolved from the inertial data. According to the image acquisition rule and the analysis of shooting behavior, the corrected track route is position-matched with the corridor plane structure extracted from the mobile phone images, realizing map construction of the corridor area.
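Detecting those gyroscope fluctuations can be sketched by accumulating heading change over contiguous same-sign angular-rate runs. This is an illustrative fragment, not the patent's algorithm; the sampling interval and threshold are assumed values:

```python
def detect_turns(gyro_z, dt=0.02, threshold_deg=60.0):
    """Detect turn events from z-axis angular-rate samples (deg/s).
    A turn is reported when the heading change accumulated over a
    contiguous run of same-sign rotation exceeds the threshold.
    Returns (start_index, end_index, total_angle_deg) tuples."""
    turns = []
    i, n = 0, len(gyro_z)
    while i < n:
        if abs(gyro_z[i]) < 1e-6:       # stationary heading: skip
            i += 1
            continue
        sign = 1.0 if gyro_z[i] > 0 else -1.0
        j, angle = i, 0.0
        while j < n and gyro_z[j] * sign > 0:
            angle += gyro_z[j] * dt     # integrate angular rate
            j += 1
        if abs(angle) >= threshold_deg:
            turns.append((i, j, angle))
        i = j
    return turns
```

The sign of the accumulated angle distinguishes left from right turns, and the start/end indices bracket the interval in which the corridor photographs were taken.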
S42, constructing rooms. The room image acquisition rule is shown in fig. 5: the user photographs the room from its two sides toward the opposite sides. The room plane structure extracted from the images is then matched with the inertial navigation data to draw the room structure in the indoor map.
After the track data is position-matched with the captured images, map accuracy is further guaranteed through an optimization step that constructs an energy function; the energy equation to be constructed is:
min_{x = {x_f, R, t}} E_l(x) + E_c(x) + E_b(x);
where E_l(x) represents the complexity of the layout, E_c(x) the closeness, and E_b(x) the boundary similarity between adjacent segments; {x_f, R, t} denotes the rotation and displacement of all corridor paths and room layouts at the time of connection.
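The patent does not define the energy terms in closed form, but the boundary-similarity part of the optimization amounts to finding the rigid transform that best joins a room layout to its corridor anchor points. A sketch of that single step, using the standard 2-D Procrustes/Kabsch solution (illustrative, not the patent's solver):

```python
import numpy as np

def align_segment(segment_pts, anchors_src, anchors_dst):
    """Find the rigid transform (rotation + translation) minimising the
    squared boundary gap between a segment's anchor points and their
    matched points on the corridor path, then apply it to the segment."""
    P = np.asarray(anchors_src, float)
    Q = np.asarray(anchors_dst, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # forbid reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return [R @ p + t for p in np.asarray(segment_pts, float)]
```

Iterating such alignments over all corridor/room connections, while penalising overly complex layouts, is one plausible way to drive the combined energy downward.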
The uniqueness of the corridor and room layout is constrained by the following formula.
Therefore, the automatic indoor map construction method integrating a crowd-sourced trajectory and a mobile phone image provides an efficient, low-cost and independent method for extracting the plane structure of indoor map elements. Pictures shot by a mobile phone serve as the data source; neither reconstruction of the geometric structure of an indoor three-dimensional model nor professional surveying equipment is required. Only smartphone-captured monocular pictures of the indoor scene are needed, and the plane structure of indoor map elements is obtained through visual scene understanding technology, effectively improving the realism and accuracy of indoor map expression. Using only the different types of data collected by the smartphone, high-precision indoor map construction is realized by integrating crowd-sourced trajectories with standardized mobile phone image acquisition and a new geometric feature extraction method. The indoor spatial characteristics of whole and part, frame and detail, fully complement each other; the user is not required to walk through every position in the indoor environment, effectively saving the labor cost of indoor map drawing and reducing time and computation.
It should be noted that the above embodiments merely illustrate, rather than limit, the technical solution of the invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made without departing from the spirit and scope of the technical solution of the invention.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411570254.7A CN119437202B (en) | 2024-11-05 | 2024-11-05 | An automatic indoor map construction method integrating multi-source trajectories and mobile phone images |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411570254.7A CN119437202B (en) | 2024-11-05 | 2024-11-05 | An automatic indoor map construction method integrating multi-source trajectories and mobile phone images |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119437202A CN119437202A (en) | 2025-02-14 |
| CN119437202B (en) | 2025-06-20 |
Family
ID=94511034
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411570254.7A Active CN119437202B (en) | 2024-11-05 | 2024-11-05 | An automatic indoor map construction method integrating multi-source trajectories and mobile phone images |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119437202B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114494582A (en) * | 2021-12-30 | 2022-05-13 | 重庆交通大学 | A dynamic update method of 3D model based on visual perception |
| CN114739410A (en) * | 2022-03-31 | 2022-07-12 | 浙江大学 | Pedestrian indoor positioning and AR navigation method based on computer vision and PDR |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150092048A1 (en) * | 2013-09-27 | 2015-04-02 | Qualcomm Incorporated | Off-Target Tracking Using Feature Aiding in the Context of Inertial Navigation |
| EP3783312A1 (en) * | 2019-08-19 | 2021-02-24 | HERE Global B.V. | Matching of crowdsourced building floors with the ground level |
| CN114095853B (en) * | 2020-07-30 | 2023-04-14 | 华为技术有限公司 | A method and device for generating an indoor map |
| US11997578B2 (en) * | 2021-07-19 | 2024-05-28 | At&T Intellectual Property I, L.P. | Method and apparatus for indoor mapping and location services |
| CN117168430A (en) * | 2022-05-25 | 2023-12-05 | 华为技术有限公司 | Method and server for constructing road network topological map |
| CN115824221A (en) * | 2022-12-01 | 2023-03-21 | 华南理工大学 | Indoor autonomous navigation method, device, equipment and medium based on V-SLAM algorithm |
| CN118518099A (en) * | 2024-05-22 | 2024-08-20 | 浙江工业大学 | A method for constructing indoor floor plans based on crowdsourced inertial navigation and ultra-wideband data |
- 2024-11-05: CN application CN202411570254.7A filed; patent CN119437202B (status: Active)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114494582A (en) * | 2021-12-30 | 2022-05-13 | 重庆交通大学 | A dynamic update method of 3D model based on visual perception |
| CN114739410A (en) * | 2022-03-31 | 2022-07-12 | 浙江大学 | Pedestrian indoor positioning and AR navigation method based on computer vision and PDR |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119437202A (en) | 2025-02-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Toft et al. | Long-term visual localization revisited | |
| CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
| CN107967457B (en) | A method and system for location recognition and relative positioning that adapts to changes in visual features | |
| Lin et al. | Topology aware object-level semantic mapping towards more robust loop closure | |
| CN108564616B (en) | Fast robust RGB-D indoor three-dimensional scene reconstruction method | |
| CN105843223B (en) | A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method | |
| CN103712617B (en) | A kind of creation method of the multilamellar semanteme map of view-based access control model content | |
| CN113168717A (en) | A point cloud matching method and device, navigation method and device, positioning method, and lidar | |
| CN112634451A (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
| CN107392964A (en) | The indoor SLAM methods combined based on indoor characteristic point and structure lines | |
| CN106940186A (en) | A kind of robot autonomous localization and air navigation aid and system | |
| CN109596121B (en) | A method for automatic target detection and spatial positioning of a mobile station | |
| CN109059895A (en) | A kind of multi-modal indoor ranging and localization method based on mobile phone camera and sensor | |
| CN106595659A (en) | Map merging method of unmanned aerial vehicle visual SLAM under city complex environment | |
| CN111915517B (en) | Global positioning method suitable for RGB-D camera under indoor illumination unfavorable environment | |
| CN112833892A (en) | Semantic mapping method based on track alignment | |
| CN111595334A (en) | Indoor autonomous positioning method based on tight coupling of visual point-line characteristics and IMU (inertial measurement Unit) | |
| CN111161334B (en) | Semantic map construction method based on deep learning | |
| CN117671022B (en) | Mobile robot vision positioning system and method in indoor weak texture environment | |
| CN115597592A (en) | Comprehensive positioning method applied to unmanned aerial vehicle inspection | |
| CN117635651A (en) | A dynamic environment SLAM method based on YOLOv8 instance segmentation | |
| Ma et al. | Location and 3-D visual awareness-based dynamic texture updating for indoor 3-D model | |
| CN112284390B (en) | Indoor high-precision positioning navigation method based on VSLAM | |
| CN116698017B (en) | Object-level environment modeling method and system for indoor large-scale complex scene | |
| CN117109568B (en) | Inertial/multi-dimensional vision joint positioning method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |