CN106851092B - A kind of infrared video joining method and device - Google Patents
A kind of infrared video joining method and device Download PDFInfo
- Publication number
- CN106851092B CN106851092B CN201611259450.8A CN201611259450A CN106851092B CN 106851092 B CN106851092 B CN 106851092B CN 201611259450 A CN201611259450 A CN 201611259450A CN 106851092 B CN106851092 B CN 106851092B
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- module
- calculating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/682—Vibration or motion blur correction
- H04N23/684—Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
- H04N23/6845—Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time by combination of a plurality of images sequentially taken
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an infrared video stitching method and device. The method includes: extracting the feature points of each image in the infrared video; establishing a matched feature point pair list for each pair of images; obtaining the intrinsic matrix and attitude matrix of each camera; applying horizontal straightening and exposure compensation to the images; projecting each image into the same spherical coordinate system; and performing seamless stitching in the overlapping regions with a multi-band blending algorithm to generate a stitched panorama. By applying brightness correction and real-time exposure compensation to the video, the invention realizes multi-channel infrared video panoramic stitching with automatic straightening and real-time exposure compensation functions.
Description
Technical Field
The invention relates to the technical field of video processing, in particular to an infrared video splicing method and device.
Background
Infrared cameras are widely used for all-weather monitoring in many domestic security applications. Because the field of view of a single camera is narrow, the monitored range is usually enlarged with a rotating turntable; however, long sweep cycles hamper observation and leave blind spots in the monitored area, creating serious safety risks for the security industry. Infrared real-time monitoring devices covering a large scene are now adopted to overcome the narrow monitoring range.
Video acquired by multiple infrared cameras usually must be stitched together. In the prior art the stitching pipeline combines several different algorithms, the computation is complex, and the real-time workload is enormous. Compared with still-image stitching, video stitching has extremely strict real-time requirements: the stitching computation for 25 frames from each of N camera channels must finish within one second; otherwise frames are dropped, the picture stutters or breaks, and memory overflow can even crash the machine.
One prior-art scheme stitches video by bilinear interpolation, performing linear interpolation separately in the x and y directions. With this scheme, however, ghosting appears in the overlapping region after stitching, and curves become jagged and unsmooth.
Disclosure of Invention
The embodiment of the invention provides an infrared video splicing method, which comprises the following steps:
extracting the feature points of each image in the infrared video, and calculating the descriptor of each feature point;
matching each pair of images according to the feature-point descriptors and a random sample consensus algorithm, and establishing a matched feature point pair list for the two images;
calculating the affine matrix of the camera according to the matched feature points, solving for the upward vector by the minimum eigen method according to the affine matrix of the camera, and horizontally straightening the images according to the upward vector;
calculating the sum, over the overlapping regions, of the differences between the gain-weighted light intensities of all images in the video, solving for the gain coefficient of each image, and performing exposure compensation on each image according to its gain coefficient;
projecting each image into the same spherical coordinate system, and performing seamless stitching in the overlapping region with a multi-band blending algorithm to generate a stitched panorama;
and performing exposure compensation after the image is horizontally straightened, or before it.
The embodiment of the invention also provides an infrared video splicing device, which comprises a correction module, a matching calculation module, a horizontal straightening module, an exposure compensation module and a splicing module;
the correction module is used for calculating a correction coefficient of each pixel point, performing correction preprocessing on each image in the video according to the correction coefficient, and extracting a feature point of each image in the video;
the matching calculation module is used for calculating the descriptor of each feature point extracted by the correction module, matching each pair of images according to the feature-point descriptors and a random sample consensus algorithm, and establishing a matched feature point pair list for the two images;
the horizontal straightening module is used for calculating an affine matrix of the camera according to the matched feature points obtained by the matching calculation module, solving an upward vector by using a minimum eigen method according to the affine matrix of the camera, and horizontally straightening the image according to the upward vector;
the exposure compensation module is used for calculating the sum, over the overlapping regions, of the differences between the gain-weighted light intensities of all images in the video, solving for the gain coefficient of each image, and performing exposure compensation on each image according to its gain coefficient;
and the splicing module is used for projecting each image processed by the horizontal straightening module and the exposure compensation module into the same spherical coordinate system, and performing seamless stitching in the overlapping region with a multi-band blending algorithm to generate a stitched panorama.
The beneficial effects are as follows:
According to the video stitching scheme provided by the invention, brightness correction and real-time exposure compensation are applied to the video. Differences in image brightness and uneven brightness distribution caused by sensitivity differences among the probes of multiple infrared cameras are corrected automatically, while uneven image brightness caused by changes in illumination intensity and in the infrared radiation of objects over time is handled by exposure compensation. The stitched images show no seams or ghosting, the stitching effect is optimal, and multi-channel infrared video panoramic stitching with automatic correction and real-time exposure compensation functions is truly realized.
Drawings
Specific embodiments of the present invention will now be described with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of the infrared video stitching method according to the first embodiment of the present invention;
FIG. 2 is a flowchart of the infrared video stitching method according to the second embodiment of the present invention;
fig. 3a and fig. 3b are schematic diagrams of feature points extracted from two images by the Speeded-Up Robust Features (SURF) algorithm in the second embodiment of the present invention;
FIG. 4a is a diagram illustrating feature point matching obtained by searching through a KD tree according to a second embodiment of the present invention;
FIG. 4b is a feature point matching graph after the maximum-consensus random sampling matching process in the second embodiment of the present invention;
FIG. 5a shows a panorama before horizontal straightening according to the second embodiment of the present invention;
FIG. 5b shows the horizontally straightened panorama according to the second embodiment of the present invention;
FIG. 6 is a diagram showing the effect of exposure compensation processing according to the second embodiment of the present invention;
FIG. 7a is a panoramic view of a second embodiment of the present invention before brightness correction;
FIG. 7b is a panoramic view of the second embodiment of the present invention after brightness correction;
FIG. 8 is a schematic structural diagram of an infrared video stitching apparatus according to a third embodiment of the present invention;
fig. 9 is a diagram showing a stitching effect of the infrared video stitching device in the third embodiment of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, exemplary embodiments of the present invention are described in further detail below with reference to the accompanying drawings. The described embodiments are obviously only a part, not all, of the possible embodiments of the present invention, and the embodiments and the features of the embodiments may be combined with each other provided no conflict arises.
In actual use, the probes of different infrared cameras differ in sensitivity and different optical lenses image differently, so even when the same scene is shot, the brightness of the images collected by different cameras differs; within the image collected by a single camera, brightness can even be uneven in the vertical and horizontal directions. These factors greatly degrade the overall panoramic effect.
During infrared real-time monitoring, the intensity of the infrared rays reflected and radiated by objects in the monitored area changes slowly with the intensity and angle of sunlight, and the brightness of the images acquired by the cameras changes with the intensity of the infrared rays radiated by the objects.
A single camera acquires 25 frames per second of standard-definition images, each frame being 704 × 576 pixels. After the infrared images are pseudo-colored, every pixel requires 3 bytes of RGB components, and N camera channels transmit N times the data of a single channel, so stitching 6 channels requires processing about 180 million bytes (1.8 × 10⁸) of data per second. The stitching pipeline combines several different algorithms, the computation is complex, and the real-time workload is enormous.
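The data rate quoted above can be checked with a quick back-of-envelope calculation:

```python
# Data-rate check: 6 cameras, 25 frames/s, 704 x 576 pixels,
# 3 bytes (RGB) per pixel after pseudo-color processing.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 704, 576, 3
FPS, NUM_CAMERAS = 25, 6

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL      # 1,216,512 bytes
bytes_per_second = bytes_per_frame * FPS * NUM_CAMERAS  # ~1.8e8 bytes/s

print(bytes_per_second)  # 182476800
```

704 × 576 × 3 bytes per frame, 25 frames per second over 6 channels gives 182,476,800 bytes per second, matching the roughly 1.8 × 10⁸ figure in the text.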
Example one
As shown in fig. 1, the present invention provides an infrared video stitching method, which includes:
step 101: extracting the characteristic points of each image in the video;
step 102: calculating a descriptor of each feature point, matching two images according to the descriptor of the feature point and a random sampling consistency algorithm, and establishing a matching feature point pair list of the two images;
step 103: calculating an affine matrix of the camera according to the matched feature points, solving an upward vector by using a minimum eigen method according to the affine matrix of the camera, and horizontally straightening the image according to the upward vector;
step 104: calculating the sum, over the overlapping regions, of the differences between the gain-weighted light intensities of all images in the video, solving for the gain coefficient of each image, and performing exposure compensation on each image according to its gain coefficient;
step 105: projecting each image into the same spherical coordinate system, and performing seamless stitching in the overlapping region with a multi-band blending algorithm to generate a stitched panorama;
In practical application, exposure compensation may be performed after the image is horizontally straightened, or before it; that is, steps 103 and 104 above may be executed in either order.
The infrared video stitching method provided by the invention applies correction and real-time exposure compensation to the video: differences in image brightness and uneven distribution caused by probe sensitivity differences among multiple infrared cameras are corrected automatically, and uneven image brightness caused by changes in illumination intensity and object infrared radiation over time is handled by exposure compensation, thereby truly realizing multi-channel infrared video panoramic stitching with automatic correction and real-time exposure compensation functions.
Example two
Referring to fig. 2, an embodiment of the present invention provides an infrared video stitching method, where the method includes:
step 201: carrying out correction pretreatment on each image in the video;
Because of sensitivity differences among sensors, differences in the monitored background environment, and differences in how strongly various objects radiate and reflect infrared rays, the contrast and brightness of the images from different channels differ greatly and the pictures can even appear uneven. The embodiment of the invention therefore preprocesses the original images before stitching.
To address uneven infrared brightness, the invention corrects the brightness of each pixel: the corrected image brightness I(x, y) equals the original image brightness I0(x, y) multiplied by a correction coefficient A, where the correction formula is a logarithmic-tangent type function:
where A is the correction coefficient, (x, y) is the position of the pixel in the image, (xc, yc) is the position of the center point of the Gaussian distribution, and σx², σy² are the variances in the horizontal and vertical directions; the larger the variance, the larger the adjusted region. In general (xc, yc) is set to the center of the image. Multiplying the original image brightness I0(x, y) by the correction coefficient A yields the corrected image brightness I(x, y). Reference may be made to fig. 7a and fig. 7b, which show the panorama before and after brightness correction, respectively.
In addition, to improve processing speed, the invention allocates a cache the same size as the image. The correction coefficient of every pixel is computed in advance and stored in the cache when the program initializes, so each frame of image only needs to be multiplied by the cached coefficients without recomputation, which greatly increases the processing speed. The correction coefficients differ from pixel to pixel and depend on the distance to the Gaussian distribution center; in practical application they are determined by (x − xc) and (y − yc) in equation 1.
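The precomputed-cache idea can be sketched as follows: build a per-pixel coefficient map once at start-up, then apply it to every frame with a single multiply. The exact logarithmic-tangent correction formula is not reproduced in this text, so the Gaussian-shaped coefficient below (the function `build_correction_cache` and its `strength` and sigma defaults) is an assumption for illustration only; it matches the description that A depends on (x − xc), (y − yc) and the horizontal/vertical variances.

```python
import numpy as np

def build_correction_cache(h, w, xc=None, yc=None,
                           sigma_x=200.0, sigma_y=200.0, strength=0.4):
    """Precompute a per-pixel brightness-correction coefficient A(x, y).

    Assumed Gaussian-shaped falloff centred on (xc, yc); the patent's
    actual 'logarithmic tangent type' formula is not given here.
    """
    if xc is None:
        xc = w / 2.0  # default centre: image centre, as the text suggests
    if yc is None:
        yc = h / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    g = np.exp(-(((x - xc) ** 2) / (2.0 * sigma_x ** 2) +
                 ((y - yc) ** 2) / (2.0 * sigma_y ** 2)))
    # Brighten the periphery relative to the centre (vignetting-style fix).
    return 1.0 + strength * (1.0 - g)

# Computed once at initialisation; afterwards each frame is one multiply.
cache = build_correction_cache(576, 704)
frame = np.full((576, 704), 100.0)   # stand-in for one infrared frame
corrected = frame * cache
```

At the Gaussian centre the coefficient is exactly 1 (no change); it grows toward the borders, lifting the darker periphery.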
Step 202: extracting characteristic points of each image in the infrared video;
Specifically, the method extracts the feature points of each image with the Speeded-Up Robust Features (SURF) algorithm. The integral image of each image is first computed, and the image is convolved with second-order partial derivatives of Gaussian functions of different variances; the determinant of the Hessian matrix is then evaluated at each pixel, and extreme points exceeding a threshold are taken as feature points. The main direction of each such point is computed, the local image patch is rotated to the main direction to generate the 64-dimensional descriptor vector of the feature point, and the descriptor vector and point coordinates are stored as an important basis for the next step, image matching.
Fig. 3a and fig. 3b are schematic diagrams of the feature points extracted from two images by the SURF algorithm.
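The determinant-of-Hessian detection step can be sketched as follows. Note that this is not the real SURF detector: SURF approximates the Gaussian second derivatives with box filters evaluated on the integral image and searches across scales, whereas this illustration uses plain finite differences at a single scale; the detection principle (det(H) extrema above a threshold) is the same.

```python
import numpy as np

def hessian_det_keypoints(img, threshold=50.0):
    """Single-scale determinant-of-Hessian keypoint detection sketch."""
    img = np.asarray(img, dtype=np.float64)
    # Second-order finite differences approximate the Hessian entries.
    dy, dx = np.gradient(img)
    dyy, _ = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    det = dxx * dyy - dxy * dxy
    points = []
    h, w = det.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = det[y, x]
            # Extreme point: above threshold and maximal in its 3x3 neighbourhood.
            if v > threshold and v == det[y - 1:y + 2, x - 1:x + 2].max():
                points.append((x, y))
    return points

# A bright Gaussian blob should be detected at its centre.
yy, xx = np.mgrid[0:40, 0:40]
blob = 255.0 * np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 18.0)
keypoints = hessian_det_keypoints(blob)
```

The threshold and the synthetic blob are illustrative; on real imagery SURF's box-filter approximation is far faster because every filter response costs only a few integral-image lookups.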
Step 203: establishing a matching characteristic point pair list of every two images;
the method specifically comprises the following steps: and calculating a descriptor of each feature point, matching every two images according to the descriptor of the feature point and a random sample consensus (RANSAC) algorithm, and establishing a matching feature point pair list of every two images.
Using the main directions of the feature points computed in step 202, the original image patches are rotated to the main direction to generate the 64-dimensional descriptor vector of each feature point. A KD tree (short for k-dimensional tree, a data structure that partitions a k-dimensional data space) is built from the feature point descriptors extracted by the SURF algorithm; the matched feature points between the two images are then found quickly with the BBF (Best-Bin-First) nearest-neighbour query method, and a matched feature point pair list between the two images is established.
Preferably, to eliminate the small number of mismatches, the invention also applies the random sample consensus (RANSAC) algorithm, retaining the largest consistent set of matched feature point pairs. Fig. 4a is the feature point matching graph obtained by the KD-tree search, and fig. 4b is the feature point matching graph after the maximum-consensus random sampling matching process.
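The descriptor-matching step can be sketched with a brute-force nearest-neighbour search plus Lowe's ratio test; a KD tree with BBF, as described above, returns the same neighbours much faster on 64-dimensional SURF descriptors. The RANSAC outlier-rejection stage is omitted from this sketch, and the `ratio` value is an assumed default.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.

    Stand-in for the KD-tree + BBF search described in the text;
    the results are equivalent, only slower.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if clearly better than the second-best candidate.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: desc_b is a permutation of desc_a, so the matcher
# should recover the permutation exactly.
desc_a = np.eye(4)
desc_b = desc_a[[2, 0, 1, 3]]
matches = match_descriptors(desc_a, desc_b)
```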
Step 204: calculating an affine matrix of the camera according to the matched feature points;
Specifically, calculating the affine matrix of the camera from the matched feature points comprises: obtaining the image center point and the focal length of the image in the horizontal or vertical direction; calculating the intrinsic matrix of the camera from the center point and the focal length; obtaining the Euler angles of the matched images and forming the attitude matrix of the camera from the Euler angles; and obtaining the affine matrix of the camera from the intrinsic matrix and the attitude matrix of the camera.
The camera matrix, comprising the intrinsic (eigen) matrix and the attitude matrix, is computed from the matched feature point pairs. If lens distortion, optical-axis decentering and the like are not considered, the image center point is fixed at (u0, v0) and the focal lengths in the horizontal and vertical directions are both fi, so the intrinsic matrix is

Ki = [[fi, 0, u0], [0, fi, v0], [0, 0, 1]]

If camera translation is not considered, the attitude matrix describes the rotation of the camera by three Euler angles (θi1, θi2, θi3), i.e. as the product of the three elementary rotations about the coordinate axes.

From the camera affine matrices, the position of the k-th feature point of image j mapped into image i is computed as Ki Ri Rj⁻¹ Kj⁻¹ pjk; subtracting the measured position of that point in image i gives the residual. Finally, the Levenberg-Marquardt algorithm iteratively adjusts the camera parameters to minimize the total residual, yielding the optimal camera matrix parameters.
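The camera model of this step can be sketched as follows. The Euler order Rz·Ry·Rx and the j→i mapping Ki Ri Rjᵀ Kj⁻¹ (Rᵀ = R⁻¹ for rotations) are standard choices assumed for illustration; the text does not fix the Euler convention.

```python
import numpy as np

def intrinsic(f, u0, v0):
    """Intrinsic matrix with equal focal lengths and centre (u0, v0)."""
    return np.array([[f, 0.0, u0], [0.0, f, v0], [0.0, 0.0, 1.0]])

def rotation(yaw, pitch, roll):
    """Attitude matrix from three Euler angles (assumed Rz * Ry * Rx order)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1.0]])
    Ry = np.array([[cy, 0, sy], [0, 1.0, 0], [-sy, 0, cy]])
    Rx = np.array([[1.0, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def map_point(Ki, Ri, Kj, Rj, p):
    """Map homogeneous pixel p from image j into image i via Ki Ri Rj^T Kj^-1."""
    q = Ki @ Ri @ Rj.T @ np.linalg.inv(Kj) @ p
    return q / q[2]  # normalise the homogeneous coordinate

K = intrinsic(500.0, 352.0, 288.0)
R = rotation(0.1, -0.05, 0.2)
p = np.array([100.0, 50.0, 1.0])

same = map_point(K, R, K, R, p)            # identical cameras: p maps to itself
fwd = map_point(K, rotation(0.0, 0.0, 0.0), K, R, p)
back = map_point(K, R, K, rotation(0.0, 0.0, 0.0), fwd)  # round trip j -> i -> j
```

In the full method, the residual (mapped position minus measured position) of every matched pair feeds the Levenberg-Marquardt refinement.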
Step 205: performing horizontal straightening processing on the image;
In this step, the upward vector is solved by the minimum eigen method according to the camera affine matrices, and the image is then horizontally straightened according to the upward vector. Horizontally straightening the image according to the upward vector specifically comprises: calculating a global rotation matrix according to the upward vector, and multiplying the attitude matrix of each camera by the rotation matrix to obtain a horizontally straightened panorama.
In practical application, if stitching is performed directly according to the camera matrices (KR), the panorama exhibits a wave-like undulation; the cameras must be rotated so that the Y axis is kept vertically upward.
Assuming the number of images is n and the horizontal-axis vector of the i-th camera is Xi, the upward vector u must satisfy the condition:

minimize Σi (Xi · u)², subject to ‖u‖ = 1

Solving this least-squares problem yields the upward vector u; a global rotation matrix is computed from u, and multiplying each camera's attitude matrix by the rotation matrix straightens the panorama in the horizontal direction. Fig. 5a shows the panorama before horizontal straightening, and fig. 5b the panorama after horizontal straightening.
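The "minimum eigen method" referred to above can be sketched directly: minimising Σi (Xi · u)² subject to ‖u‖ = 1 makes u the eigenvector of M = Σi Xi Xiᵀ belonging to the smallest eigenvalue.

```python
import numpy as np

def up_vector(x_axes):
    """Upward vector u minimising sum_i (X_i . u)^2 with ||u|| = 1.

    The minimiser is the eigenvector of M = sum_i X_i X_i^T that belongs
    to the smallest eigenvalue (the 'minimum eigen method').
    """
    M = np.zeros((3, 3))
    for X in x_axes:
        M += np.outer(X, X)
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return eigvecs[:, 0]

# Camera X axes all lying in the xz-plane: the up vector must be +/- y.
axes = [np.array([1.0, 0.0, 0.0]),
        np.array([0.7, 0.0, 0.7]),
        np.array([0.0, 0.0, 1.0])]
u = up_vector(axes)
```

The sign of u is arbitrary (both ±u solve the problem); the straightening rotation is built so that u becomes the vertical axis.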
Step 206: carrying out exposure compensation on the image;
In this step, the sum, over the overlapping regions, of the differences between the gain-weighted light intensities of all images in the video is calculated, the gain coefficient of each image is obtained by solving, and exposure compensation is then performed on each image according to its gain coefficient.
The sum of the differences between the gain-weighted light intensities over all overlapping regions is

e = Σ(i,j) (gi·Īi − gj·Īj)²

where the sum runs over all pairs (i, j) of adjacent images whose overlap region is R(i, j); gi and gj are the gains of the i-th and j-th images, and Īi and Īj are the average light intensities of the i-th and j-th images within R(i, j). To minimize the error e, the gain coefficient g of each image is solved by the least-squares method, and the light intensity of each image is then multiplied by its gain coefficient to realize the exposure compensation function.
As time passes, the angle of solar illumination changes and the brightness of each image changes noticeably, so the gain coefficients must be adjusted continually. The gain coefficient of each image is therefore recomputed every few seconds, realizing the automatic exposure compensation function of video stitching; after compensation, the brightness of all parts of the panorama is essentially uniform. The effect of the exposure compensation process is shown in fig. 6.
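The gain solve of this step can be sketched as a small linear system. The soft prior pulling every gain toward 1 is an addition of this sketch (without it, g = 0 trivially minimises the error); the `sigma_n` and `sigma_g` weights, and the weighting of each pair by its overlap pixel count, are assumed values.

```python
import numpy as np

def solve_gains(mean_I, overlaps, sigma_n=10.0, sigma_g=0.1):
    """Least-squares gains minimising, over overlapping pairs (i, j),
    (g_i * Ibar_ij - g_j * Ibar_ji)^2, with a soft prior g_i ~ 1.

    mean_I[(i, j)] is the average intensity of image i inside its overlap
    with image j; overlaps is a list of (i, j, n_pixels) tuples.
    """
    n = 1 + max(max(i, j) for i, j, _ in overlaps)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, j, N in overlaps:
        Ii, Ij = mean_I[(i, j)], mean_I[(j, i)]
        # Normal equations of N * [(g_i*Ii - g_j*Ij)^2 / sigma_n^2
        #                          + ((g_i - 1)^2 + (g_j - 1)^2) / sigma_g^2]
        A[i, i] += N * (Ii * Ii / sigma_n ** 2 + 1.0 / sigma_g ** 2)
        A[j, j] += N * (Ij * Ij / sigma_n ** 2 + 1.0 / sigma_g ** 2)
        A[i, j] -= N * Ii * Ij / sigma_n ** 2
        A[j, i] -= N * Ii * Ij / sigma_n ** 2
        b[i] += N / sigma_g ** 2
        b[j] += N / sigma_g ** 2
    return np.linalg.solve(A, b)

# Two images: the overlap reads 100 in image 0 but 120 in image 1,
# so image 0 should be gained up and image 1 gained down.
mean_I = {(0, 1): 100.0, (1, 0): 120.0}
gains = solve_gains(mean_I, [(0, 1, 1000)])
```

After solving, each image's intensity is multiplied by its gain, pulling the two overlap readings toward each other.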
Step 207: performing a multi-band blending operation on the overlapping regions to generate a stitched panorama;
Specifically, each image is projected into the same spherical coordinate system, and seamless stitching is performed in the overlapping region with the multi-band blending algorithm to generate the stitched panorama.
The light intensities and weights of the images in the overlapping region are convolved with Gaussian functions of different standard deviations σ to obtain several layers (one per σ) of blurred images and weights. The blended light intensity of a pixel in the overlapping region is

I(x, y) = Σk [ Σi Wi^kσ(x, y)·Bi^kσ(x, y) / Σi Wi^kσ(x, y) ]

where Wi^kσ is the blending weight of the i-th image at standard deviation kσ, and Bi^kσ is the difference between the light intensities of the i-th image blurred at standard deviations kσ and (k−1)σ:

Bi^kσ = Ii^kσ − Ii^(k−1)σ
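The multi-band idea of this step can be sketched in one dimension for two images: each band (difference of successive Gaussian blurs) is mixed with a correspondingly blurred weight, so low frequencies blend over a wide region and high frequencies over a narrow one. This is an illustrative simplification under assumed band scales, not the patent's implementation.

```python
import numpy as np

def gauss_blur(signal, sigma):
    """Blur a 1-D signal with a normalised Gaussian kernel (zero padding)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def multiband_blend(I1, I2, w1, sigmas=(2.0, 4.0, 8.0)):
    """1-D two-image multi-band blend over an overlap region."""
    w2 = 1.0 - w1
    out = np.zeros_like(I1, dtype=np.float64)
    prev1 = I1.astype(np.float64)
    prev2 = I2.astype(np.float64)
    for s in sigmas:
        b1, b2 = gauss_blur(prev1, s), gauss_blur(prev2, s)
        # Weights blurred at this band's scale: narrow for fine bands.
        W1, W2 = gauss_blur(w1, s), gauss_blur(w2, s)
        band1, band2 = prev1 - b1, prev2 - b2
        out += (band1 * W1 + band2 * W2) / np.maximum(W1 + W2, 1e-9)
        prev1, prev2 = b1, b2
    # Residual low-frequency layer, mixed with the widest weights.
    out += (prev1 * W1 + prev2 * W2) / np.maximum(W1 + W2, 1e-9)
    return out

# Two constant "images" with a hard seam at index 100: the blend should
# transition smoothly instead of jumping from 100 to 140.
I1 = np.full(200, 100.0)
I2 = np.full(200, 140.0)
w1 = np.zeros(200)
w1[:100] = 1.0
blended = multiband_blend(I1, I2, w1)
```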
In addition, the embodiment of the invention also crops the border of the panorama.
Specifically, each image projected into the spherical coordinate system is deformed, and the pitch heights within the panorama differ, so black borders appear in the image and harm the overall visual effect of the panorama. By comparing the positions of all images, the maximum inscribed rectangle of the panorama is obtained, and the panorama is then cropped to the width and height of this inscribed rectangle to obtain a full-frame panorama.
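The cropping can be sketched with a greedy simplification: instead of searching for the true maximum inscribed rectangle, repeatedly trim whichever boundary row or column of the valid-pixel mask contains the most black pixels until the remaining crop is clean. This `crop_valid` function is an assumption-level stand-in for the patent's inscribed-rectangle computation and may under-crop in pathological cases.

```python
import numpy as np

def crop_valid(mask):
    """Greedy full-frame crop of a boolean valid-pixel mask.

    Returns (top, bottom, left, right) such that mask[top:bottom,
    left:right] contains no invalid (black) pixels. Assumes the mask
    has at least one all-valid sub-rectangle.
    """
    t, b = 0, mask.shape[0]
    l, r = 0, mask.shape[1]
    while not mask[t:b, l:r].all():
        sub = mask[t:b, l:r]
        # Count invalid pixels on each boundary and drop the worst side.
        counts = {
            "t": int((~sub[0]).sum()), "b": int((~sub[-1]).sum()),
            "l": int((~sub[:, 0]).sum()), "r": int((~sub[:, -1]).sum()),
        }
        side = max(counts, key=counts.get)
        if side == "t":
            t += 1
        elif side == "b":
            b -= 1
        elif side == "l":
            l += 1
        else:
            r -= 1
    return t, b, l, r

# Mask with a black top row and a black right column.
mask = np.ones((10, 10), dtype=bool)
mask[0, :] = False
mask[:, 9] = False
crop = crop_valid(mask)
```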
The infrared video stitching method provided by the invention applies correction and real-time exposure compensation to the video: differences in image brightness and uneven distribution caused by probe sensitivity differences among multiple infrared cameras are corrected automatically, and uneven image brightness caused by changes in illumination intensity and object infrared radiation over time is handled by exposure compensation, thereby truly realizing multi-channel infrared video panoramic stitching with automatic correction and real-time exposure compensation functions.
EXAMPLE III
Referring to fig. 8, an embodiment of the present invention provides an infrared video stitching apparatus, where the apparatus includes a correction module 301, a matching calculation module 302, a horizontal straightening module 303, an exposure compensation module 304, and a stitching module 305;
the correction module 301 is configured to calculate a correction coefficient of each pixel, perform correction preprocessing on each image in the video according to the correction coefficient, and extract a feature point of each image in the video;
a matching calculation module 302, configured to calculate a descriptor of each feature point extracted by the correction module 301, match two images according to the descriptor of the feature point and a random sampling consistency algorithm, and establish a matching feature point pair list of the two images;
a horizontal straightening module 303, configured to calculate an affine matrix of the camera according to the matching feature points obtained by the matching calculation module 302, solve an upward vector by using a minimum eigen method according to the affine matrix of the camera, and perform horizontal straightening processing on the image according to the upward vector;
the exposure compensation module 304 is used for calculating the sum, over the overlapping regions, of the differences between the gain-weighted light intensities of all images in the video, solving for the gain coefficient of each image, and performing exposure compensation on each image according to its gain coefficient;
a stitching module 305, configured to project each image processed by the horizontal straightening module 303 and the exposure compensation module 304 into the same spherical coordinate system, and to perform seamless stitching in the overlapping region with the multi-band blending algorithm, generating a stitched panorama such as the one shown in fig. 9.
In addition, the infrared video stitching apparatus further comprises a storage module for storing the correction coefficient of each pixel computed by the correction module; the other images in the video then only need to be multiplied by the stored coefficients without recomputation, which greatly increases the processing speed.
In the infrared video stitching apparatus provided by the invention, the correction module and the exposure compensation module apply correction and real-time exposure compensation to the video: brightness differences and uneven distribution caused by probe sensitivity differences among multiple infrared cameras are corrected automatically, and uneven image brightness caused by changes in illumination intensity and object infrared radiation over time is handled by exposure compensation, truly realizing multi-channel infrared video panoramic stitching with automatic correction and real-time exposure compensation functions.
Finally, it should be pointed out that: the above examples are only for illustrating the technical solutions of the present invention, and are not limited thereto. Those of ordinary skill in the art will understand that: modifications can be made to the technical solutions described in the foregoing embodiments, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
Claims (9)
1. An infrared video stitching method, characterized in that the method comprises:
extracting feature points of each image in the infrared video, and calculating a descriptor for each feature point, wherein the extracting of the feature points of each image in the infrared video comprises the following steps: computing the integral image of each image, and performing convolution on each image with second-order partial derivatives of Gaussian functions of a plurality of variances by means of the integral image; calculating the determinant of the Hessian matrix at each pixel point, and extracting extreme points whose determinant exceeds a threshold as feature points;
matching two images according to the descriptors of the feature points and a random sample consensus (RANSAC) algorithm, and establishing a matching feature point pair list of the two images;
calculating an affine matrix of the camera according to the matched feature points, solving for an upward vector by the minimum-eigenvalue method according to the affine matrix of the camera, and horizontally straightening the images according to the upward vector;
calculating the sum of the differences of the products of light intensity and gain of all images of the video in the overlapping areas, solving for the gain coefficient of each image, and performing exposure compensation on each image according to its gain coefficient;
projecting each image into the same spherical coordinate system, and performing seamless splicing in the overlapping areas by using a multilayer blending algorithm to generate a spliced panoramic image;
and performing exposure compensation after the images have been horizontally straightened, or performing horizontal straightening after the images have been exposure-compensated.
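The determinant-of-Hessian feature detection recited in claim 1 can be illustrated with a short sketch. This is an assumption-laden approximation, not the patented implementation: it computes the Gaussian second derivatives directly with SciPy instead of approximating them with box filters over an integral image, and the function name `hessian_keypoints` and its parameters are ours.

```python
import numpy as np
from scipy import ndimage

def hessian_keypoints(image, sigma=2.0, threshold=0.1):
    """Detect blob-like feature points as local maxima of the
    determinant of the Hessian at scale sigma."""
    img = image.astype(np.float64)
    # Second-order Gaussian partial derivatives (convolution with
    # derivatives of a Gaussian of standard deviation sigma).
    Lxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Lyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Lxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    det_h = Lxx * Lyy - Lxy ** 2
    # Keep points that are local maxima and exceed the threshold.
    maxima = ndimage.maximum_filter(det_h, size=3) == det_h
    ys, xs = np.nonzero(maxima & (det_h > threshold))
    return list(zip(xs, ys))
```

Points detected this way correspond to blob-like structures whose Hessian response exceeds the threshold at the chosen scale.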
2. The method according to claim 1, wherein before extracting the feature points of each image in the video, the method further comprises performing brightness correction preprocessing on each image in the video, specifically:
the corrected image brightness I(x, y) is the original image brightness I<sub>0</sub>(x, y) multiplied by a correction coefficient A:

$$I(x,y) = A(x,y)\, I_0(x,y) = \frac{I_0(x,y)}{1 + e^{-\left(\frac{(x-x_c)^2}{\sigma_x^2} + \frac{(y-y_c)^2}{\sigma_y^2}\right)}}$$

wherein $(x, y)$ is the position of a pixel point in the image, $(x_c, y_c)$ is the position of the center point of each image, and $\sigma_x^2$, $\sigma_y^2$ are the variances of the original image in the horizontal and vertical directions.
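The brightness correction of claim 2 follows directly from the formula above; a minimal sketch (the function name and the image-center convention are our assumptions):

```python
import numpy as np

def brightness_correct(img, sigma_x, sigma_y):
    """Apply the correction I = A * I0, where
    A(x, y) = 1 / (1 + exp(-((x - xc)^2 / sx^2 + (y - yc)^2 / sy^2)))."""
    h, w = img.shape
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0      # image center point
    y, x = np.mgrid[0:h, 0:w]
    r2 = (x - xc) ** 2 / sigma_x ** 2 + (y - yc) ** 2 / sigma_y ** 2
    a = 1.0 / (1.0 + np.exp(-r2))              # correction coefficient A
    return a * img
```

At the image center the coefficient equals 1/2 and it approaches 1 toward the borders, so the correction attenuates the center relative to the edges.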
3. The method according to claim 2, wherein the matching of two images according to the descriptors of the feature points and a random sampling consensus algorithm to establish a matching feature point pair list of the two images comprises:
calculating the main direction of the feature points, and rotating the original image to the main direction to obtain 64-dimensional descriptor vectors of the feature points;
establishing a KD tree with the feature point descriptors extracted by the speeded-up robust features (SURF) algorithm;
and quickly searching feature points matched with every two images in the KD tree by using a nearest neighbor query method, and establishing a matching feature point pair list of every two images.
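The KD-tree matching of claim 3 is sketched below with SciPy's `cKDTree`; the ratio test used to discard ambiguous matches is our addition and is not recited in the claim:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match descriptors of image A against image B with a KD-tree and
    a nearest-neighbour query; returns index pairs (i_a, i_b)."""
    tree = cKDTree(desc_b)
    dists, idx = tree.query(desc_a, k=2)       # two nearest neighbours
    pairs = []
    for i, ((d1, d2), (j1, _)) in enumerate(zip(dists, idx)):
        if d1 < ratio * d2:                    # keep unambiguous matches only
            pairs.append((i, int(j1)))
    return pairs
```

The resulting index pairs form the matching feature point pair list that the RANSAC step then filters.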
4. The method according to claim 2, wherein calculating an affine matrix of the camera from the matched feature points comprises:
acquiring the image center point of the matched feature points, and acquiring the focal length of the image in the horizontal or vertical direction;
calculating an intrinsic matrix of the camera according to the image center point and the focal length;
acquiring an Euler angle of the image of the matched feature point, and acquiring an attitude matrix of the camera according to the Euler angle;
and obtaining an affine matrix of the camera according to the intrinsic matrix of the camera and the attitude matrix of the camera.
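The composition of the affine matrix from the intrinsic and attitude matrices (claim 4) can be sketched as follows; the Z-Y-X Euler-angle convention is our assumption:

```python
import numpy as np

def camera_matrix(f, u0, v0, yaw, pitch, roll):
    """Compose the per-image camera matrix P = K @ R from the intrinsic
    matrix K (focal length f, center point (u0, v0)) and the attitude
    matrix R built from Euler angles (radians)."""
    K = np.array([[f, 0, u0],
                  [0, f, v0],
                  [0, 0, 1.0]])
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1.0]])
    Ry = np.array([[cy, 0, sy], [0, 1.0, 0], [-sy, 0, cy]])
    Rx = np.array([[1.0, 0, 0], [0, cp, -sp], [0, sp, cp]])
    R = Rz @ Ry @ Rx                # attitude matrix from Euler angles
    return K @ R                    # affine (projection) matrix
```

With all Euler angles zero the attitude matrix is the identity and the affine matrix reduces to the intrinsic matrix.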
5. The method according to claim 4, wherein the affine matrix of said camera is

$$P_i = K_i R_i, \qquad K_i = \begin{pmatrix} f_i & 0 & u_0 \\ 0 & f_i & v_0 \\ 0 & 0 & 1 \end{pmatrix}$$

wherein $K_i$ is the intrinsic matrix of the camera, $R_i$ is the attitude matrix of the camera, $(u_0, v_0)$ is the center point of the image, and $f_i$ is the focal length in the horizontal and vertical directions.
6. The method according to claim 2, wherein the horizontally straightening the image according to the upward vector comprises:
calculating a global rotation matrix according to the upward vector;
and multiplying the attitude matrix of each camera by the rotation matrix to obtain a horizontally straightened panoramic image.
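The minimum-eigenvalue up-vector of claim 6 is commonly computed as in OpenCV-style wave correction: accumulate the outer products of the cameras' x-axes and take the eigenvector with the smallest eigenvalue. A sketch under that assumption:

```python
import numpy as np

def up_vector(rotations):
    """Estimate the global 'up' direction as the eigenvector with the
    smallest eigenvalue of the accumulated outer products of the
    cameras' x-axes (the minimum-eigenvalue method)."""
    m = np.zeros((3, 3))
    for r in rotations:
        x_axis = r[:, 0]            # camera x-axis in world coordinates
        m += np.outer(x_axis, x_axis)
    w, v = np.linalg.eigh(m)        # eigenvalues in ascending order
    return v[:, 0]                  # eigenvector of the smallest eigenvalue
```

For a panning sequence whose camera x-axes all lie in one plane, the recovered up-vector is the normal of that plane; the global rotation that aligns it with the vertical axis horizontally straightens the panorama.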
7. The method according to claim 5, wherein performing seamless splicing in the overlapping area by using a multilayer blending algorithm to generate a spliced panorama specifically comprises:

calculating the blended light intensity of the pixels in the overlapping region,

$$I^{\text{multi}}(x,y) = \sum_{k} \frac{\sum_{i} B_i^{k\sigma}(x,y)\, W_i^{k\sigma}(x,y)}{\sum_{i} W_i^{k\sigma}(x,y)}$$

wherein $W_i^{k\sigma}$ is the blend weight of the $i$-th image smoothed with standard deviation $k\sigma$, and $B_i^{k\sigma} = I_i^{(k-1)\sigma} - I_i^{k\sigma}$ is the light intensity difference of the $i$-th image between standard deviations $k\sigma$ and $(k-1)\sigma$;

and carrying out multilayer blended splicing on the overlapping area according to the blended light intensity to generate a spliced panoramic image.
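The multilayer blending of claim 7 can be sketched with Gaussian band-pass layers; this simplified two-image version uses increasing `scipy.ndimage.gaussian_filter` blurs rather than true image pyramids, and all names are ours:

```python
import numpy as np
from scipy import ndimage

def multiband_blend(img_a, img_b, mask, levels=4, sigma=2.0):
    """Blend two aligned images: band-pass layers of each image are
    combined with progressively smoother versions of the seam mask,
    then summed back together with the low-pass residual."""
    a, b = img_a.astype(float), img_b.astype(float)
    w = mask.astype(float)          # 1 where img_a should dominate
    out = np.zeros_like(a)
    prev_a, prev_b = a, b
    for k in range(1, levels + 1):
        sa = ndimage.gaussian_filter(a, k * sigma)
        sb = ndimage.gaussian_filter(b, k * sigma)
        band_a, band_b = prev_a - sa, prev_b - sb   # band-pass layers
        wk = ndimage.gaussian_filter(w, k * sigma)  # smoothed seam weight
        out += wk * band_a + (1 - wk) * band_b
        prev_a, prev_b = sa, sb
    w_low = ndimage.gaussian_filter(w, levels * sigma)
    out += w_low * prev_a + (1 - w_low) * prev_b    # low-pass residual
    return out
```

Because fine detail is blended with a sharp mask and coarse detail with a smooth one, the seam is invisible while edges stay crisp; the layers telescope back to the original image when both inputs agree.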
8. An infrared video splicing device is characterized by comprising a correction module, a matching calculation module, a horizontal straightening module, an exposure compensation module and a splicing module;
the correction module is used for calculating a correction coefficient for each pixel point, performing correction preprocessing on each image in the video according to the correction coefficients, computing the integral image of each image, and performing convolution on each image with second-order partial derivatives of Gaussian functions of a plurality of variances by means of the integral image; calculating the determinant of the Hessian matrix at each pixel point, and extracting extreme points exceeding a threshold as feature points;
the matching calculation module is used for calculating the descriptor of each feature point extracted by the correction module, matching two images according to the descriptors of the feature points and a random sample consensus (RANSAC) algorithm, and establishing a matching feature point pair list of the two images;
the horizontal straightening module is used for calculating an affine matrix of the camera according to the matched feature points obtained by the matching calculation module, solving for an upward vector by the minimum-eigenvalue method according to the affine matrix of the camera, and horizontally straightening the images according to the upward vector;
the exposure compensation module is used for calculating the sum of the differences of the products of light intensity and gain of all images of the video in the overlapping areas, solving for the gain coefficient of each image, and performing exposure compensation on each image according to its gain coefficient;
and the splicing module is used for projecting each image processed by the horizontal straightening module and the exposure compensation module into the same spherical coordinate system, and performing seamless splicing in the overlapping areas by using a multilayer blending algorithm to generate a spliced panoramic image.
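The exposure compensation performed by the exposure compensation module is commonly solved as a small linear system over per-image gains, in the style of Brown and Lowe's gain compensation; the patent's exact objective may differ, so treat this as an illustrative sketch (`means[i][j]` is the mean intensity of image i in its overlap with image j, `counts[i][j]` the number of overlapping pixels):

```python
import numpy as np

def gain_compensation(means, counts, sigma_n=10.0, sigma_g=0.1):
    """Solve for per-image gains g minimising, over all overlaps,
    N_ij * ((g_i*I_ij - g_j*I_ji)^2 / sn^2 + (1 - g_i)^2 / sg^2),
    by setting the gradient to zero and solving the linear system."""
    n = len(means)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j or counts[i][j] == 0:
                continue
            N = counts[i][j]
            # Data term: pull g_i*I_ij toward g_j*I_ji in the overlap.
            A[i, i] += N * (means[i][j] ** 2 / sigma_n ** 2 + 1.0 / sigma_g ** 2)
            A[i, j] -= N * means[i][j] * means[j][i] / sigma_n ** 2
            # Prior term: pull each gain toward 1.
            b[i] += N / sigma_g ** 2
    return np.linalg.solve(A, b)
```

The prior term keeps the trivial solution g = 0 out of reach; when the overlap intensities already agree, the solved gains are 1.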
9. The apparatus according to claim 8, further comprising a storage module connected to said correction module for storing the correction coefficient calculated by said correction module for each pixel.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611259450.8A CN106851092B (en) | 2016-12-30 | 2016-12-30 | A kind of infrared video joining method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106851092A CN106851092A (en) | 2017-06-13 |
| CN106851092B true CN106851092B (en) | 2018-02-09 |
Family
ID=59113804
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201611259450.8A Expired - Fee Related CN106851092B (en) | 2016-12-30 | 2016-12-30 | A kind of infrared video joining method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106851092B (en) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107257443A (en) * | 2017-07-24 | 2017-10-17 | 中科创达软件科技(深圳)有限公司 | The method and its device, terminal device of a kind of anti-vignetting of stitching image |
| CN107341827B (en) * | 2017-07-27 | 2023-01-24 | 腾讯科技(深圳)有限公司 | Video processing method, device and storage medium |
| CN109272442B (en) * | 2018-09-27 | 2023-03-24 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for processing panoramic spherical image and storage medium |
| WO2020133412A1 (en) * | 2018-12-29 | 2020-07-02 | 深圳市大疆创新科技有限公司 | Panoramic image generation method, panoramic image generation device, and unmanned aerial vehicle |
| CN109981985A (en) * | 2019-03-29 | 2019-07-05 | 上海智觅智能科技有限公司 | A kind of continuous stitching algorithm of double vision frequency |
| CN110796597B (en) * | 2019-10-10 | 2024-02-02 | 武汉理工大学 | Vehicle-mounted all-round image splicing device based on space-time compensation |
| CN111445416B (en) * | 2020-03-30 | 2022-04-26 | 南京泓众电子科技有限公司 | Method and device for generating high-dynamic-range panoramic image |
| CN113222878B (en) * | 2021-06-04 | 2023-09-05 | 杭州海康威视数字技术股份有限公司 | Image stitching method |
| CN117670667B (en) * | 2023-11-08 | 2024-05-28 | 广州成至智能机器科技有限公司 | Unmanned aerial vehicle real-time infrared image panorama stitching method |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103955888A (en) * | 2014-05-12 | 2014-07-30 | 中国人民解放军空军预警学院监控系统工程研究所 | High-definition video image mosaic method and device based on SIFT |
| US9196071B2 (en) * | 2013-12-03 | 2015-11-24 | Huawei Technologies Co., Ltd. | Image splicing method and apparatus |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106851092B (en) | A kind of infrared video joining method and device | |
| US9811946B1 (en) | High resolution (HR) panorama generation without ghosting artifacts using multiple HR images mapped to a low resolution 360-degree image | |
| US11398053B2 (en) | Multispectral camera external parameter self-calibration algorithm based on edge features | |
| US9019426B2 (en) | Method of generating image data by an image device including a plurality of lenses and apparatus for generating image data | |
| CN106952225B (en) | Panoramic splicing method for forest fire prevention | |
| CN107016646A (en) | One kind approaches projective transformation image split-joint method based on improved | |
| CN111815517B (en) | Self-adaptive panoramic stitching method based on snapshot pictures of dome camera | |
| WO2023134103A1 (en) | Image fusion method, device, and storage medium | |
| CN109886883A (en) | Real-time polarized fog-penetrating imaging image enhancement processing method | |
| CN107784632A (en) | A kind of infrared panorama map generalization method based on infra-red thermal imaging system | |
| Alomran et al. | Feature-based panoramic image stitching | |
| US11538177B2 (en) | Video stitching method and device | |
| CN109509148B (en) | Panoramic all-around image stitching and fusion method and device | |
| CN105931185A (en) | Automatic splicing method of multiple view angle image | |
| CN111723801A (en) | Method and system for detecting and correcting target in fisheye camera picture | |
| CN110009567A (en) | Image stitching method and device for fisheye lens | |
| CN114022562A (en) | A panoramic video stitching method and device for maintaining pedestrian integrity | |
| CN103793891A (en) | Low-complexity panoramic image stitching method | |
| CN120013921A (en) | A method and related device for detecting hidden dangers of distribution network equipment based on large-field-of-view spliced video data | |
| CN118941753A (en) | Image stitching method, device, terminal and computer-readable storage medium | |
| CN113850905A (en) | A real-time stitching method of panoramic images for a cycle-scanning photoelectric warning system | |
| CN113744133A (en) | Image splicing method, device and equipment and computer readable storage medium | |
| CN116823863A (en) | An infrared image contour extraction method and device | |
| CN106709942B (en) | Panorama image mismatching elimination method based on characteristic azimuth angle | |
| CN103310448B (en) | Camera head pose estimation and the real-time method generating composite diagram for DAS |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20180209; Termination date: 20181230