CN114693520A - Method, system and storage medium for corresponding panorama and source map - Google Patents
- Publication number
- CN114693520A (application number CN202210193896.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- matrix
- source
- panoramic
- panoramic image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The present application relates to the field of image processing technologies, and in particular, to a method, a system, and a storage medium for corresponding a panorama and a source map. The method for corresponding the panoramic image and the source image comprises the following steps: carrying out panoramic stitching to obtain a panoramic image; calculating a projection mapping matrix from each source image to the panoramic image; performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a sequence-number mark matrix of the panoramic image; and responding to a panoramic-image click command to obtain the sequence number of the source image corresponding to the clicked point. The correspondence between the panoramic image and the source images is obtained through this method, and when the position of a local area subsequently needs to be located, the target local area can be located by clicking on the panoramic image.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, a system, and a storage medium for mapping a panorama to a source map.
Background
With the development of image technology, public expectations for the visual quality of images keep rising. Viewers want panoramic images with higher resolution and wider viewing angles, so implementing panoramic image stitching is very important.
In the prior art, various panoramic stitching technologies exist, but they can only produce the final panoramic image and cannot provide the correspondence between the panoramic image and its source images. This makes it inconvenient to subsequently locate the position of a local area, so how to obtain the correspondence between the panoramic image and the source images has become an urgent technical problem.
Disclosure of Invention
In view of the above problems, the present application provides a method for mapping a panorama to a source map, so as to solve the technical problem that the prior art cannot directly obtain a mapping relationship between the panorama and the source map. The specific technical scheme is as follows:
a method for mapping a panorama to a source map comprises the following steps:
carrying out panoramic stitching to obtain a panoramic picture;
calculating a projection mapping matrix of each source image to the panoramic image;
performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a serial number mark matrix of the panoramic image;
and responding to the click command of the panoramic image to obtain the serial number of the source image corresponding to the clicked point position.
Further, the "calculating a projection mapping matrix from each source image to the panorama" specifically includes the steps of:
extracting the characteristic points of each source image and extracting the characteristic points of the panoramic image;
matching the characteristic points to obtain target matching points;
and calculating a projection mapping matrix of each source image to the panoramic image.
Further, the "performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain the sequence number mark matrix of the panoramic image" specifically includes the steps of:
the mark matrix of each source image is projection-mapped through the projection mapping matrix to obtain the mark matrix of each source image in the panoramic image, which is then multiplied by the corresponding source-image sequence number to obtain the sequence-number mark matrix of each source image in the panoramic image.
Further, the "responding to the panoramic image click command to obtain the sequence number of the source image corresponding to the clicked point position" specifically includes the following steps:
responding to the panoramic image clicking instruction to obtain the corresponding coordinate position of the clicked point in the panoramic image;
obtaining a marking matrix of the source graph corresponding to the clicked point according to the corresponding coordinate position;
and obtaining the serial number of the source graph corresponding to the clicked point position according to the mark matrix.
Further, the "carrying out panoramic stitching to obtain a panoramic image" specifically includes the steps of:
acquiring a target scene picture and storing the target scene picture into a preset file;
reading an image from the preset file;
and carrying out panoramic stitching on the images to obtain a panoramic image.
Further, the "storing the target scene picture in a preset file" specifically includes the steps of:
storing the target scene picture in a preset file in a uniform format;
the method for reading the image sequence from the preset file specifically comprises the following steps:
batch reads were performed by the GLOB function.
Further, the "splicing the images in a panoramic manner to obtain a panoramic image" specifically includes the steps of:
detecting feature points of the images, matching the feature points between images by a nearest-neighbour method, saving the best-match confidence, and saving the homography matrix of each matched image pair;
deleting matches between images with low confidence, and determining the stitching set of matched images through a union-find (disjoint-set) algorithm;
performing camera-parameter estimation for all images in the stitching set to obtain rotation matrices, and further refining the rotation matrices by bundle adjustment;
performing horizontal or vertical wave correction;
and projection-stitching the original images onto the specified panorama, with illumination compensation and multi-band blending.
In order to solve the technical problem, a computer-readable storage medium is also provided, and the specific technical scheme is as follows:
a computer-readable storage medium, on which a computer program is stored, which program is executed by a processor for performing any of the above mentioned methods of correspondence of a panorama and a source map.
In order to solve the technical problem, a corresponding system of the panoramic image and the source image is also provided, and the specific technical scheme is as follows:
a system for mapping a panorama to a source map, comprising: a server and a client;
the server is used for: carrying out panoramic stitching to obtain a panoramic picture; calculating a projection mapping matrix of each source image to the panoramic image; performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a serial number mark matrix of the panoramic image;
the client is used for: sending a panorama clicking instruction to the server;
the server is also used for: and responding to the panoramic image clicking instruction of the client to obtain the serial number of the source image corresponding to the clicked point position.
Further, the system also comprises: a camera device;
the camera device is configured to: shoot target pictures and send them to the server.
The invention has the beneficial effects that: a method for mapping a panorama to a source map comprises the following steps: carrying out panoramic stitching to obtain a panoramic picture; calculating a projection mapping matrix of each source image to the panoramic image; performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a serial number mark matrix of the panoramic image; and responding to the click command of the panoramic image to obtain the serial number of the source image corresponding to the clicked point position. The corresponding relation between the panoramic image and the source image can be obtained through the method, and when the position of the local area needs to be located subsequently, the target local area can be located through clicking the panoramic image.
The above is only an overview of the technical solutions of the present application. So that the technical means of the present application can be more clearly understood and implemented according to the content of the specification, and so that the above and other objects, features, and advantages of the present application can be more readily understood, preferred embodiments are described below in conjunction with the accompanying drawings.
Drawings
The drawings are only for purposes of illustrating the principles, implementations, applications, features, and effects of particular embodiments of the present application, as well as others related thereto, and are not to be construed as limiting the application.
In the drawings of the specification:
FIG. 1 is a first flowchart illustrating a method for mapping a panorama to a source graph according to an embodiment;
FIG. 2 is a second flowchart illustrating a method for mapping a panorama to a source map according to an embodiment;
FIG. 3 is a flow chart of a method for mapping a panorama to a source graph according to a third embodiment;
FIG. 4 is a block diagram of a computer-readable storage medium according to an embodiment;
fig. 5 is a block diagram of a system for mapping a panorama to a source map according to an embodiment.
The reference numerals referred to in the above figures are explained below:
400. computer-readable storage medium;
500. system for corresponding the panorama and the source map;
501. server;
502. client.
Detailed Description
In order to explain in detail possible application scenarios, technical principles, practical embodiments, and the like of the present application, the following detailed description is given with reference to the accompanying drawings in conjunction with the listed embodiments. The embodiments described herein are merely for more clearly illustrating the technical solutions of the present application, and therefore, the embodiments are only used as examples, and the scope of the present application is not limited thereby.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. In principle, the technical features mentioned in the embodiments can be combined in any manner to form a corresponding implementable technical solution, as long as there is no technical contradiction or conflict.
Unless defined otherwise, technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the use of relational terms herein is intended to describe specific embodiments only and is not intended to limit the present application.
In the description of the present application, the term "and/or" is an expression describing a logical relationship between objects and covers three cases; for example, "A and/or B" means: A alone, B alone, or both A and B. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In this application, terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
In this application, unless further limited, the terms "including", "comprising", "having", and similar expressions are intended to cover a non-exclusive inclusion: a process, method, or article that includes a list of elements is not limited to those elements, but may include other elements not expressly listed or inherent to such process, method, or article.
Consistent with the Guidelines for Examination, in this application the expressions "greater than", "less than", "more than", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it. In addition, in the description of the embodiments of the present application, "a plurality" means two or more (including two), and similar expressions such as "a plurality of groups" and "a plurality of times" are understood accordingly, unless specifically defined otherwise.
Since the prior art mentioned in the background only provides panoramic stitching and cannot obtain the correspondence between the panoramic image and the source images, the present application provides a new method for corresponding the panorama and the source map, so that no matter where in the panorama the user clicks, the sequence number of the source image corresponding to that position can be obtained.
With reference to fig. 1 to 3, a technical solution of a corresponding method of a panoramic image and a source image is explained as follows:
fig. 1 shows a first flowchart of a method for mapping a panorama to a source map, which includes steps S101 to S104.
In step S101, a panorama is obtained by panorama stitching. As shown in fig. 3, step S101 further includes step S301 to step S303.
In step S301, target scene pictures are captured and stored in a preset file. In this embodiment, the target scene pictures are acquired by a camera. The camera is preferably suspended at a fixed position from which the whole scene can be overlooked; by rotating the camera, a series of scene pictures is collected and stored in a designated folder. The camera may also capture while moving instead of staying at a fixed position.
The "storing the target scene picture in a preset file" specifically includes the step of: storing the target scene pictures in the preset file in a uniform format, for example the .bmp format.
In step S302, images are read from the preset file. Specifically, batch reading may be performed through the glob function. The pictures belong to the same scene and are acquired at the same time.
In step S303, the images are panoramically stitched to obtain a panoramic image. The method specifically comprises the following steps:
SURF or ORB feature-point detection is performed on the images; the feature points are matched between images by a nearest-neighbour method, the best-match confidence is saved, and the homography matrix of each matched image pair is saved at the same time. Matches between images with low confidence are deleted, and the stitching set of matched images is determined through a union-find (disjoint-set) algorithm. Coarse camera-parameter estimation is performed for all images in the stitching set to obtain rotation matrices, which are then refined by bundle adjustment. Horizontal or vertical wave correction is performed. Finally, the original images are projection-stitched onto the specified panorama with illumination compensation and multi-band blending.
After obtaining the panoramic view, in step S102, a projection mapping matrix from each source view to the panoramic view is calculated. The method specifically comprises the following steps: extracting the characteristic points of each source image and extracting the characteristic points of the panoramic image; matching the characteristic points to obtain target matching points; and calculating a projection mapping matrix of each source image to the panoramic image.
In this embodiment, the SURF or ORB algorithm is used to extract feature points from each source image and from the panorama, and a nearest-neighbour method is used to match the feature points between them. Lowe's ratio test is applied to keep only good matches (the target matching points), which excludes keypoints that have no true correspondence because of image occlusion or background clutter. These matches are then passed to the findHomography function to calculate the projection mapping matrix H from each source image to the panorama.
In step S103, the tag matrix of each source graph is projection mapped through the projection mapping matrix to obtain a serial number tag matrix of the panorama. The method specifically comprises the following steps:
the mark matrix of each source image is projection-mapped through the projection mapping matrix H to obtain the mark matrix of each source image in the panorama, which is then multiplied by the corresponding source-image sequence number to obtain the sequence-number mark matrix of each source image in the panorama. The mark matrix of each source image has the same size as the source image, and all of its values are 1.
Specifically: the warpPerspective function is used to apply the projection mapping transformation to the mark matrix of each source image, yielding its mark matrix in the panorama, which contains only the elements 0 and 1. This matrix is then multiplied by the corresponding source-image sequence number, finally giving the sequence-number mark matrix of each source image in the panorama: 0 in the parts with no scene content, and a value from 1 (the first source image) up to the total number of source images n in the parts with scene content.
For example: let the mark matrix of a source image be A (a matrix of the same size as the source image with all entries 1), the panorama sequence-number mark matrix be B, the projection matrix be H, and the source-image sequence number be N (N = 1, 2, …, n); then B = N · warpPerspective(A, H).
In step S104, in response to the panorama click command, the sequence number of the source map corresponding to the clicked point position is obtained. As shown in fig. 2, step S104 further includes step S201 to step S203.
In step S201, in response to the panorama click command, the coordinate position of the clicked point in the panorama is obtained. The user clicks anywhere on the panorama at the client, and the click action yields the corresponding coordinates of that point in the panorama.
In step S202, a label matrix of the source graph corresponding to the clicked point is obtained according to the corresponding coordinate position.
In step S203, the sequence number of the source image corresponding to the clicked point is obtained according to the mark matrix. If the point is located in an overlapping region of multiple source images, multiple sequence numbers result; the smallest sequence number may be selected as the final result, or all corresponding source-image sequence numbers may be output, as determined by the actual situation.
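The lookup of steps S201–S203 can be sketched with per-source sequence-number matrices; the 3×3 matrices and the function name below are hypothetical toy data for illustration:

```python
import numpy as np

# Sequence-number mark matrices of two sources in panorama coordinates:
# 0 outside each source's footprint, the sequence number inside it.
B1 = np.array([[1, 1, 0],
               [1, 1, 0],
               [0, 0, 0]])
B2 = np.array([[0, 2, 2],
               [0, 2, 2],
               [0, 0, 0]])

def source_ids_at(point, seq_matrices):
    """Return, sorted, the sequence numbers of all sources covering a clicked point."""
    r, c = point
    return sorted(m[r, c] for m in seq_matrices if m[r, c] != 0)

ids = source_ids_at((0, 1), [B1, B2])  # a point in the overlap of sources 1 and 2
smallest = ids[0]                      # the smallest sequence number as final result
```

Returning the full sorted list supports both policies described above: take `ids[0]` for the minimum, or output all of `ids` for the overlap case.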
A method for mapping a panorama to a source map comprises the following steps: carrying out panoramic stitching to obtain a panoramic picture; calculating a projection mapping matrix of each source image to the panoramic image; performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a serial number mark matrix of the panoramic image; and responding to the click command of the panoramic image to obtain the serial number of the source image corresponding to the clicked point position. The corresponding relation between the panoramic image and the source image can be obtained through the method, and when the position of the local area needs to be located subsequently, the target local area can be located through clicking the panoramic image.
Referring now to FIG. 4, a computer-readable storage medium 400 is described in detail:
a computer-readable storage medium 400, on which a computer program is stored, which program is executed by a processor for:
and S101, carrying out panoramic stitching to obtain a panoramic image. As shown in fig. 3, step S101 further includes step S301 to step S303.
Step S301: target scene pictures are captured and stored in a preset file. In this embodiment, the target scene pictures are acquired by a camera. The camera is preferably suspended at a fixed position from which the whole scene can be overlooked; by rotating the camera, a series of scene pictures is collected and stored in a designated folder. The camera may also capture while moving instead of staying at a fixed position.
The "storing the target scene picture in a preset file" specifically includes the step of: storing the target scene pictures in the preset file in a uniform format, for example the .bmp format.
Step S302: images are read from the preset file. Specifically, batch reading may be performed through the glob function. The pictures belong to the same scene and are acquired at the same time.
And step S303, carrying out panoramic stitching on the images to obtain a panoramic image. The method specifically comprises the following steps:
SURF or ORB feature-point detection is performed on the images; the feature points are matched between images by a nearest-neighbour method, the best-match confidence is saved, and the homography matrix of each matched image pair is saved at the same time. Matches between images with low confidence are deleted, and the stitching set of matched images is determined through a union-find (disjoint-set) algorithm. Coarse camera-parameter estimation is performed for all images in the stitching set to obtain rotation matrices, which are then refined by bundle adjustment. Horizontal or vertical wave correction is performed. Finally, the original images are projection-stitched onto the specified panorama with illumination compensation and multi-band blending.
After obtaining the panoramas, in step S102, a projection mapping matrix from each source map to the panoramas is calculated. The method specifically comprises the following steps: extracting the characteristic points of each source image and extracting the characteristic points of the panoramic image; matching the characteristic points to obtain target matching points; and calculating a projection mapping matrix of each source image to the panoramic image.
In this embodiment, the SURF or ORB algorithm is used to extract feature points from each source image and from the panorama, and a nearest-neighbour method is used to match the feature points between them. Lowe's ratio test is applied to keep only good matches (the target matching points), which excludes keypoints that have no true correspondence because of image occlusion or background clutter. These matches are then passed to the findHomography function to calculate the projection mapping matrix H from each source image to the panorama.
In step S103, the tag matrix of each source graph is projection mapped through the projection mapping matrix to obtain a serial number tag matrix of the panorama. The method specifically comprises the following steps:
the mark matrix of each source image is projection-mapped through the projection mapping matrix to obtain the mark matrix of each source image in the panorama, which is then multiplied by the corresponding source-image sequence number to obtain the sequence-number mark matrix of each source image in the panorama. The mark matrix of each source image has the same size as the source image, and all of its values are 1.
Specifically: the warpPerspective function is used to apply the projection mapping transformation to the mark matrix of each source image, yielding its mark matrix in the panorama, which contains only the elements 0 and 1. This matrix is then multiplied by the corresponding source-image sequence number, finally giving the sequence-number mark matrix of each source image in the panorama: 0 in the parts with no scene content, and a value from 1 (the first source image) up to the total number of source images n in the parts with scene content.
For example: let the mark matrix of a source image be A (a matrix of the same size as the source image with all entries 1), the panorama sequence-number mark matrix be B, the projection matrix be H, and the source-image sequence number be N (N = 1, 2, …, n); then B = N · warpPerspective(A, H).
In step S104, in response to the panorama click command, the sequence number of the source map corresponding to the clicked point position is obtained. As shown in fig. 2, step S104 further includes step S201 to step S203.
In step S201, in response to the panorama click command, the coordinate position of the clicked point in the panorama is obtained. The user clicks anywhere on the panorama at the client, and the click action yields the corresponding coordinates of that point in the panorama.
In step S202, a mark matrix of the source map corresponding to the clicked point is obtained according to the corresponding coordinate position.
In step S203, the sequence number of the source image corresponding to the clicked point is obtained according to the mark matrix. If the point is located in an overlapping region of multiple source images, multiple sequence numbers result; the smallest sequence number may be selected as the final result, or all corresponding source-image sequence numbers may be output, as determined by the actual situation.
The computer-readable storage medium 400 can obtain the corresponding relationship between the panorama and the source map by performing the above steps, and then can locate the target local area by clicking the panorama when the position of the local area needs to be located subsequently.
Referring now to fig. 5, a system 500 for mapping a panoramic view to a source view is described in detail:
a system 500 for panorama to source map correspondence, comprising: a server 501 and a client 502; the server 501 is configured to: carrying out panoramic stitching to obtain a panoramic picture; calculating a projection mapping matrix of each source image to the panoramic image; performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a serial number mark matrix of the panoramic image;
the client 502 is configured to: sending a panorama click instruction to the server 501;
the server 501 is further configured to: and responding to the panoramic image clicking instruction of the client 502 to obtain the sequence number of the source image corresponding to the clicked point position.
Further, the system also comprises: a camera device;
the image pickup apparatus is configured to: shooting a target picture and sending the target picture to the server 501. The camera device is preferably suspended at any fixed position in a scene, the whole scene can be overlooked, a series of scene pictures can be collected and stored in a designated folder by rotating the camera, and the camera can be at the fixed position and can also be collected in a movable manner. The target scene pictures are stored in a preset file in a uniform format, such as: bmp format.
Wherein, the server 501 is further configured to: perform SURF or ORB feature-point detection on the images; match the feature points between images by a nearest-neighbour method, save the best-match confidence, and save the homography matrix of each matched image pair at the same time; delete matches between images with low confidence, and determine the stitching set of matched images through a union-find (disjoint-set) algorithm; perform coarse camera-parameter estimation for all images in the stitching set to obtain rotation matrices, which are then refined by bundle adjustment; perform horizontal or vertical wave correction; and finally projection-stitch the original images onto the specified panorama with illumination compensation and multi-band blending.
After obtaining the panoramic image, extracting the characteristic points of each source image, and extracting the characteristic points of the panoramic image; matching the characteristic points to obtain target matching points; and calculating a projection mapping matrix of each source image to the panoramic image.
In this embodiment, the SURF or ORB algorithm is used to extract feature points from each source image and from the panorama; the feature points between them are matched by the nearest-neighbor method; Lowe's ratio test is applied to retain the good matches (the target matching points), excluding keypoints that have no true correspondence due to image occlusion or background clutter; the findHomography function is then used to calculate the projection mapping matrix H from each source image to the panorama.
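The matrix H that findHomography returns is a 3×3 homography. As a worked sketch of the underlying solve, the following pure-NumPy direct linear transform (DLT, without findHomography's RANSAC outlier rejection) recovers H from four or more correspondences; the point values below are made up for illustration:

```python
import numpy as np

def dlt_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H @ src (plain DLT, no RANSAC).

    src_pts, dst_pts: (N, 2) arrays of matched points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two rows of the linear system A h = 0.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so that H[2, 2] == 1

# Four points related by a pure translation of (+10, +5): H should recover it.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = src + np.array([10.0, 5.0])
H = dlt_homography(src, dst)
```

With exact correspondences the recovered H equals the true translation homography `[[1, 0, 10], [0, 1, 5], [0, 0, 1]]` up to numerical precision.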
The mark matrix of each source image is projected through the projection mapping matrix H to obtain its mark matrix in the panorama, which is then multiplied by the corresponding source image sequence number to obtain the sequence-number mark matrix of each source image in the panorama. The mark matrix of each source image has the same size as the source image, and every element of it is 1.
The method specifically comprises the following steps: the mark matrix of each source image is transformed by the warpPerspective function to obtain its mark matrix in the panorama, which contains only the elements 0 and 1; this is then multiplied by the corresponding source image sequence number to finally obtain the sequence-number mark matrix of each source image in the panorama. The sequence-number mark matrix is 0 in the parts with no scene content, and in the scene parts takes values from 1 (the first source image) up to n, the total number of source images.
For example: let the mark matrix of a source image be A, a matrix of the same size as the source image with every element equal to 1; let its sequence-number mark matrix in the panorama be B, the projection matrix be H, and the source image sequence number be N, where N = 1, 2, ..., n. B is then obtained by warping A with H and multiplying the result by N.
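Under those definitions (A all ones, H mapping source pixels to panorama pixels), the warp can be sketched with inverse mapping in NumPy; cv2.warpPerspective performs the same resampling. The panorama size, serial number, and H below are illustrative values, not from the patent:

```python
import numpy as np

def warp_mark_matrix(h_src, w_src, H, h_pan, w_pan, serial):
    """Warp an all-ones (h_src, w_src) mark matrix A into the panorama frame
    and multiply by the source-image serial number N, giving B = N * warp(A, H).
    Inverse mapping: each panorama pixel is marked if H^-1 carries it back
    inside the source image's bounds."""
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h_pan, 0:w_pan]
    pan = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = Hinv @ pan                       # back-project panorama pixels
    sx = src[0] / src[2]                   # de-homogenize
    sy = src[1] / src[2]
    inside = (sx >= 0) & (sx < w_src) & (sy >= 0) & (sy < h_src)
    return (inside.reshape(h_pan, w_pan) * serial).astype(int)

# A 2x2 source image placed at offset (3, 1) in a 4x6 panorama, serial number 2:
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
B = warp_mark_matrix(2, 2, H, 4, 6, serial=2)
```

In B, exactly the four panorama pixels covered by the translated source image hold the value 2; everything else is 0, matching the no-scene/scene split described above.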
In response to any click on the panorama from the client 502, the coordinate position of the clicked point on the panorama is obtained from the click action. The mark matrix of the source image covering that coordinate position is then looked up, and from it the sequence number of the source image corresponding to the clicked point is obtained. If the point lies in a region where multiple source images overlap, multiple sequence numbers result; the smallest sequence number may be selected as the final result, or all corresponding source image sequence numbers may be output, as the actual situation requires.
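The click lookup then reduces to indexing the per-source mark matrices at the clicked coordinate and, on overlap, taking the smallest serial number (or returning all of them). A minimal sketch; the toy matrices stand in for real warped mark matrices:

```python
import numpy as np

def source_at_click(mark_matrices, x, y, pick_min=True):
    """mark_matrices: dict {serial: 0/1 panorama-sized mark matrix}.
    Return the serial(s) of the source image(s) covering panorama pixel (x, y)."""
    hits = sorted(n for n, m in mark_matrices.items() if m[y, x])
    if not hits:
        return None                       # clicked outside every source image
    return hits[0] if pick_min else hits  # overlap: smallest serial, or all

# Toy 2x3 panorama: source 1 covers the left two columns, source 2 the right
# two, so the middle column is covered by both.
m1 = np.array([[1, 1, 0], [1, 1, 0]])
m2 = np.array([[0, 1, 1], [0, 1, 1]])
marks = {1: m1, 2: m2}
print(source_at_click(marks, 1, 0))                  # 1  (overlap -> min serial)
print(source_at_click(marks, 2, 0))                  # 2
print(source_at_click(marks, 1, 0, pick_min=False))  # [1, 2]
```

Returning `None` for uncovered pixels corresponds to the sequence-number mark matrix being 0 in the no-scene parts.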
Through the system 500, the correspondence between the panorama and the source images is obtained; when the position of a local area subsequently needs to be located, the target local area can be located simply by clicking on the panorama.
Finally, it should be noted that, although the above embodiments are described in the text and drawings of the present application, the scope of patent protection of the present application is not limited thereby. All technical solutions produced by replacing or modifying equivalent structures or equivalent flows based on the contents of the text and drawings of the present application, and applied directly or indirectly in other related technical fields, fall within the scope of protection of the present application.
Claims (10)
1. A method for mapping a panorama to a source map, comprising the steps of:
carrying out panoramic stitching to obtain a panoramic picture;
calculating a projection mapping matrix of each source image to the panoramic image;
performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a serial number mark matrix of the panoramic image;
and responding to the click command of the panoramic image to obtain the serial number of the source image corresponding to the clicked point position.
2. The method according to claim 1, wherein the step of "calculating a projection mapping matrix of each source image to the panoramic image" specifically comprises the steps of:
extracting the characteristic points of each source image and extracting the characteristic points of the panoramic image;
matching the characteristic points to obtain target matching points;
and calculating a projection mapping matrix of each source image to the panoramic image.
3. The method according to claim 1, wherein the step of "performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a sequence-number mark matrix of the panoramic image" specifically comprises the steps of:
and projecting and mapping the mark matrix of each source image through the projection mapping matrix to obtain the mark matrix of each source image in the panoramic image, and multiplying the mark matrix by the corresponding source image sequence number to obtain the sequence number mark matrix of each source image in the panoramic image.
4. The method according to claim 1, wherein the step of "responding to the click command of the panoramic image to obtain the serial number of the source image corresponding to the clicked point position" specifically comprises:
responding to the panoramic image clicking instruction to obtain the corresponding coordinate position of the clicked point in the panoramic image;
obtaining a marking matrix of the source graph corresponding to the clicked point according to the corresponding coordinate position;
and obtaining the serial number of the source graph corresponding to the clicked point position according to the mark matrix.
5. The method according to any one of claims 1 to 4, wherein the "panoramic stitching to obtain a panoramic image" specifically includes the steps of:
acquiring a target scene picture and storing the target scene picture into a preset file;
reading an image from the preset file;
and carrying out panoramic stitching on the images to obtain a panoramic picture.
6. The method according to claim 5, wherein the step of storing the target scene picture in a preset file comprises the steps of:
storing the target scene picture in a preset file in a uniform format;
the method for reading the image sequence from the preset file specifically comprises the following steps:
performing batch reading through the glob function.
7. The method according to claim 5, wherein the step of panorama stitching the images to obtain the panorama includes the steps of:
detecting feature points of the images, matching feature points between the images by a nearest-neighbor method, storing the best-match confidence, and storing the homography matrix of each pair of matched images;
deleting the matches between images with low confidence, and determining the stitching set of matched images by a union-find algorithm;
performing camera parameter estimation on all images of the stitching set to obtain rotation matrices, and further refining the rotation matrices by bundle adjustment;
performing horizontal or vertical waveform correction;
projecting and stitching the original images into the specified panoramic image, with illumination compensation and multi-band fusion.
8. A computer-readable storage medium having stored thereon a computer program, characterized in that,
the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
9. A system for mapping a panorama to a source map, comprising: a server and a client;
the server is used for: carrying out panoramic stitching to obtain a panoramic picture; calculating a projection mapping matrix of each source image to the panoramic image; performing projection mapping on the mark matrix of each source image through the projection mapping matrix to obtain a serial number mark matrix of the panoramic image;
the client is used for: sending a panorama clicking instruction to the server;
the server is further configured to: respond to the panorama click instruction from the client and obtain the sequence number of the source image corresponding to the clicked point.
10. The system for mapping panorama images to source images according to claim 9, further comprising: a camera device;
the image pickup apparatus is configured to: and shooting a target picture and sending the target picture to a server.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210193896.4A CN114693520A (en) | 2022-03-01 | 2022-03-01 | Method, system and storage medium for corresponding panorama and source map |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210193896.4A CN114693520A (en) | 2022-03-01 | 2022-03-01 | Method, system and storage medium for corresponding panorama and source map |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114693520A true CN114693520A (en) | 2022-07-01 |
Family
ID=82137920
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210193896.4A Pending CN114693520A (en) | 2022-03-01 | 2022-03-01 | Method, system and storage medium for corresponding panorama and source map |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114693520A (en) |
2022-03-01: CN application CN202210193896.4A filed; patent status: active, Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2021115071A1 (en) | Three-dimensional reconstruction method and apparatus for monocular endoscope image, and terminal device | |
| WO2021227360A1 (en) | Interactive video projection method and apparatus, device, and storage medium | |
| CN111612696B (en) | Image stitching method, device, medium and electronic equipment | |
| CN110568447A (en) | Visual positioning method, device and computer readable medium | |
| US9311756B2 (en) | Image group processing and visualization | |
| JP2018091667A (en) | Information processing apparatus, information processing apparatus control method, and program | |
| CN108965742A (en) | Abnormity screen display method, apparatus, electronic equipment and computer readable storage medium | |
| JP2014071850A (en) | Image processing apparatus, terminal device, image processing method, and program | |
| CN114494388B (en) | Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment | |
| CN113298871B (en) | Map generation method, positioning method, system thereof, and computer-readable storage medium | |
| CN112686953B (en) | Visual positioning method, device and electronic device based on inverse depth parameter | |
| WO2023102724A1 (en) | Image processing method and system | |
| CN114140771A (en) | A method and system for automatic labeling of image depth datasets | |
| CN112598571B (en) | Image scaling method, device, terminal and storage medium | |
| CN116452641A (en) | A document image registration data synthesis method, system, device and medium | |
| CN103327246A (en) | Multimedia shooting processing method, device and intelligent terminal | |
| CN113344789B (en) | Image splicing method and device, electronic equipment and computer readable storage medium | |
| Schaffland et al. | An interactive web application for the creation, organization, and visualization of repeat photographs | |
| WO2022111461A1 (en) | Recognition method and apparatus, and electronic device | |
| CN114693520A (en) | Method, system and storage medium for corresponding panorama and source map | |
| CN114971996B (en) | Method, device and electronic device for determining feature points of watermark image | |
| CN114155147B (en) | A wide-area ocean panoramic fusion method and system based on multi-source data | |
| CN113297344B (en) | Three-dimensional remote sensing image-based ground linear matching method and device and ground object target position positioning method | |
| Abdelhafiz et al. | Automatic texture mapping mega-projects | |
| CN111124862B (en) | Intelligent device performance testing method and device and intelligent device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||