
CN112435283B - Image registration method, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN112435283B
CN112435283B (application CN202011217920.0A)
Authority
CN
China
Prior art keywords
image
matching
feature
registration
feature point
Prior art date
Legal status
Active
Application number
CN202011217920.0A
Other languages
Chinese (zh)
Other versions
CN112435283A (en)
Inventor
王子彤
刘晓沐
王松
张东
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202011217920.0A
Publication of CN112435283A
Application granted
Publication of CN112435283B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/11 — Region-based segmentation
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32 — Image registration using correlation-based methods
    • G06T7/33 — Image registration using feature-based methods
    • G06T7/90 — Determination of colour characteristics
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 — Salient features using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20021 — Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract


The present invention discloses an image registration method, an electronic device, and a computer-readable storage medium. The image registration method includes: obtaining a first image and a second image to be registered; wherein the size of the first image is smaller than the size of the second image; sliding the first image on the second image, and finding a matching area in the second image that has the highest correlation with the first image and is related to the size of the first image; in the matching area, extracting feature points of the first image and the second image respectively, and obtaining a first feature point corresponding to the first image and a second feature point corresponding to the second image; in response to the presence of a matching second feature point within a preset area of the first feature point, determining that the match is successful; using the position information of the successfully matched feature point to calculate the transformation relationship between the first image and the second image; and using the transformation relationship to register the pixel points of the first image to the second image. In the above manner, the present invention can achieve accurate image registration.

Description

Image registration method, electronic device and computer-readable storage medium
Technical Field
The present invention relates to the field of image registration, and in particular, to a registration method for images, an electronic device, and a computer-readable storage medium.
Background
Image registration is a classic problem in image processing research; its aim is to compare or fuse images of the same object acquired under different conditions (e.g. different acquisition devices, times, or imaging angles). In the general sense, image registration means that, for two images, one image is mapped onto the plane of the other by searching for a transformation, so that points corresponding to the same spatial position in the two images are placed in one-to-one correspondence, achieving the purpose of information fusion. As a key technique in image stitching, image fusion, stereoscopic vision, three-dimensional reconstruction, depth estimation and image measurement, image registration is widely used in image processing.
Existing image registration methods are prone to mismatches when image features are similar to one another, the image is blurred, or the feature points are highly concentrated, so that the finally calculated registration result is wrong.
Disclosure of Invention
The invention provides an image registration method, an electronic device and a computer-readable storage medium, to address the error-prone image matching of the prior art.
The invention provides an image registration method, which comprises the steps of obtaining a first image and a second image to be registered, wherein the size of the first image is smaller than that of the second image, sliding the first image on the second image, finding a matching area which has highest correlation degree with the first image and is related to the size of the first image from the second image, extracting feature points of the first image and the second image in the matching area respectively to obtain a first feature point corresponding to the first image and a second feature point corresponding to the second image, determining that matching is successful in response to the existence of the matched second feature point in a preset area range of the first feature point, calculating the transformation relation between the first image and the second image by utilizing the position information of the feature point which is matched successfully, and registering the first image and the second image by utilizing the transformation relation.
The step of sliding the first image on the second image and finding, in the second image, the matching area that has the highest correlation with the first image and is related to its size includes: sliding the first image on the second image to obtain a plurality of reference areas on the second image that have the same size as the first image; calculating the correlation between the first image and each reference area by normalized cross-correlation, based on the number of pixels, the pixel coordinates, the pixel gray values, the pixel averages and the pixel standard deviations of the first image and each reference area; and taking the reference area with the highest correlation as the matching area.
Before the step of sliding the first image on the second image to obtain a plurality of reference areas related to the size of the first image, the method includes: reducing the first image and the second image by a preset multiple, and graying the reduced first image and second image.
The step of sliding the first image on the second image to obtain a plurality of reference areas related to the size of the first image specifically includes: performing jump sliding comparison of the first image on the second image to obtain a plurality of reference areas on the second image related to the size of the first image.
The step of extracting feature points of the first image and of the second image within the matching area to obtain the first feature points corresponding to the first image and the second feature points corresponding to the second image includes: extracting feature points of the first image and the second image through a speeded-up robust features (SURF) algorithm to obtain the first feature points corresponding to the first image and the second feature points corresponding to the matching area of the second image.
The step of determining that matching is successful in response to a matched second feature point existing within the preset area range of the first feature point includes: judging whether a second feature point exists within the preset area range of each first feature point; if so, calculating whether the square difference between the feature vector of the first feature point and the feature vector of the second feature point meets a first preset condition; and if so, determining that the matching is successful.
The step of judging whether a second feature point exists within the preset area range of the first feature point further includes: obtaining the number of second feature points within the preset area range of the first feature point; if the number is 1, executing the step of calculating whether the square difference between the first feature point and the second feature point meets the first preset condition; if the number is greater than or equal to 2, judging whether the quotient between the smallest square difference of feature vectors among the candidate first/second feature point pairs and the second-smallest square difference meets a second preset condition, and if so, the matching is successful.
The step of calculating the transformation relation includes: dividing the first image and the matching area correspondingly into a plurality of equal-part areas, and calculating the transformation relation by using the position information of the group of point pairs whose feature vectors have the smallest square difference among the feature point pairs successfully matched in each equal-part area.
Wherein the number of the plurality of equal-part areas is greater than or equal to 4.
The step of registering the pixel points of the first image into the second image by utilizing the transformation relation comprises registering the target object of the first image into the second image by utilizing the transformation relation so as to realize the registration of the first image and the second image.
In order to solve the above technical problem, the invention further provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement any of the image registration methods above.
To solve the above technical problem, the present invention also provides a computer-readable storage medium storing program data that can be executed to implement the image registration method as any one of the above.
The beneficial effects are as follows: unlike the prior art, the first image is slid over the second image based on its own information to obtain the matching area on the second image with the highest correlation with the first image, so that the first image and the second image are first coarsely matched and the matching region is constrained, improving matching accuracy. Feature points are then extracted from the first image and the matching area to obtain first feature points and second feature points; with each first feature point as the center, it is judged whether a matched second feature point exists within the preset area range, and the feature point pair is determined to be successfully matched when one exists. Finally, the transformation relation between the first image and the second image is calculated from the successfully matched feature point pairs, and the two images are registered using this relation. By combining sliding matching with feature point matching, this embodiment improves the accuracy and precision of image matching; the region-constrained feature point matching is particularly effective when image features are weak or repetitive, further improving registration accuracy and precision.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for registering images provided by the present invention;
FIG. 2 is a flow chart of another embodiment of a method for registering images provided by the present invention;
FIG. 3 is a schematic diagram of the sliding comparison in step S23;
FIG. 4 is a schematic diagram of feature point extraction by the speeded-up robust features (SURF) algorithm of the present embodiment;
FIG. 5 is a schematic diagram of an embodiment of an electronic device according to the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of a method for registering images according to the present invention.
And S11, acquiring a first image and a second image to be registered, wherein the size of the first image is smaller than that of the second image.
A first image and a second image to be registered are acquired. In a specific application scenario, the first image and the second image contain the same target object, viewed from different angles or in different reference frames. For example, the first image may be a front view of a cup and the second image an oblique view of the same cup, and image registration is performed between the front view and the oblique view.
In a specific application scenario, the target object has the same size in the first image and the second image. If the sizes of the target object in the two images differ, the size of the first image or the second image is adjusted so that the target object has the same size in both. In this embodiment, when the target object has the same size in both images, the size of the first image is smaller than that of the second image.
And S12, sliding the first image on the second image, and finding a matching area which has highest correlation with the first image and is correlated with the first image size from the second image.
And sliding the first image with the smaller size on the second image to find a matching area which has the highest correlation degree with the first image and is correlated with the first image size from the second image. In a specific application scenario, the first image may be subjected to multi-row lateral sliding matching on the second image to find a matching area. In a specific application scenario, the first image may be subjected to multi-column longitudinal sliding matching on the second image to find a matching area. In a specific application scenario, the first image may also be matched by circular sliding on the second image. The specific sliding fit is not limited herein.
During the sliding process, the first image is taken as a reference, and a matching area which has the highest correlation with the first image and is correlated with the first image size is searched in the second image. The matching area has the highest correlation degree with the first image, and the sizes of the matching area and the first image are the same, so that the target object in the matching area and the target object in the first image can be accurately corresponding.
And S13, extracting characteristic points of the first image and the second image in the matching area respectively to obtain a first characteristic point corresponding to the first image and a second characteristic point corresponding to the second image.
And extracting the characteristic points of the first image to obtain first characteristic points corresponding to the first image. And extracting the feature points of the matching region of the second image to obtain second feature points corresponding to the matching region in the second image.
In a specific application scenario, feature point extraction may be performed on the first image and on the second image within the matching area through the SIFT (Scale-Invariant Feature Transform) algorithm to obtain the first feature points and the second feature points. In another specific application scenario, the Harris feature point extraction algorithm may be used instead. In yet another, the SURF (Speeded-Up Robust Features) algorithm may be used. The feature point extraction algorithm is not limited herein.
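As a concrete illustration of this step, the following minimal sketch restricts extraction on the second image to the matching area. OpenCV and the (x, y, w, h) rectangle convention are assumptions (the patent does not prescribe a library), and SIFT stands in for whichever of the above detectors is chosen:

```python
# Sketch of step S13, assuming OpenCV; SIFT stands in for SIFT/Harris/SURF.
import cv2

def extract_feature_points(first_img, second_img, match_rect):
    x, y, w, h = match_rect                # matching area from step S12
    region = second_img[y:y + h, x:x + w]  # restrict the second image
    detector = cv2.SIFT_create()
    kp1, des1 = detector.detectAndCompute(first_img, None)  # first points
    kp2, des2 = detector.detectAndCompute(region, None)     # second points
    return kp1, des1, kp2, des2
```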
And step S14, determining that the matching is successful in response to the existence of the matched second characteristic points in the preset area range of the first characteristic points.
The plurality of first feature points of the first image are matched against the plurality of second feature points of the matching area on the second image. After matching, it is judged whether a successfully matched second feature point exists within the preset area range of a first feature point; if so, the first feature point and that second feature point are successfully matched.
In the step, the second characteristic points are limited through the arrangement of the preset area range, so that the problem that matching dislocation easily occurs in characteristic point matching when repeated image characteristics exist in an image is solved. Thereby improving the accuracy of image registration.
And step S15, calculating to obtain a transformation relation between the first image and the second image by using the position information of the successfully matched feature points.
And after the first characteristic points on the first image and the second characteristic points of the matching area of the second image are successfully matched, calculating the transformation relation between the first image and the second image by using the position information of the characteristic point pairs of the successfully matched first characteristic points and second characteristic points. In a specific application scenario, the position information of the feature point pair may include coordinate information of each feature point in the feature point pair to calculate a transformation relationship between the first image and the second image.
And S16, registering the first image and the second image by utilizing the transformation relation.
In a specific application scenario, any one pixel point on the first image can be registered to the second image in a transformation mode based on the transformation relation, so that image registration of each pixel point of the first image and a corresponding pixel point of the second image is achieved. In a specific application scenario, any pixel point in the matching area in the second image may be transformed and registered into the first image based on the transformation relationship, so as to realize image registration between each pixel point of the first image and the corresponding pixel point of the second image. And are not limited herein.
With the above scheme, the image registration method of this embodiment slides the first image over the second image based on the information of the first image to obtain the matching area on the second image with the highest correlation with the first image, so that the two images are first coarsely matched and the matching region is constrained, improving matching accuracy. Feature points are then extracted from the first image and the matching area to obtain first feature points and second feature points; with each first feature point as the center, it is judged whether a matched second feature point exists within the preset area range, and the feature point pair is determined to be successfully matched when one exists. Finally, the transformation relation between the first image and the second image is calculated from the successfully matched feature point pairs, and the two images are registered with this relation. The region-constrained feature point matching effectively improves the accuracy of feature point matching when image features are weak or repetitive, further improving registration accuracy.
Referring to fig. 2, fig. 2 is a flowchart of another embodiment of an image registration method according to the present invention.
And S21, acquiring a first image and a second image to be registered, wherein the size of the first image is smaller than that of the second image.
The method comprises the steps of obtaining a first image and a second image to be registered, wherein the first image and the second image contain the same target object, and registration and superposition can be carried out on the same target object in the two images in a specific application scene. The target object may include objects such as objects, backgrounds, scenes, and the like.
In a specific application scenario, when the size of the target object in the first image is the same as that of the second image, the sizes of the first image and the second image are compared, wherein the size of the first image is smaller than that of the second image. In a specific application scenario, when the sizes of the target objects in the first image and the second image are different, the size of the first image or the second image is adjusted so that the sizes of the target objects in the first image and the second image are the same. When the size of the target object in the first image is the same as that of the target object in the second image, the size of the first image is smaller than that of the second image. By the mode, the embodiment can realize accurate registration of images with different resolutions.
And S22, shrinking the first image and the second image by a preset multiple, and carrying out graying treatment on the reduced first image and second image.
In a specific application scenario, to improve the accuracy of the subsequent registration, the first image and the second image may be preprocessed before sliding registration. The preprocessing may include adjusting the image size of the first image or the second image to a preset size (for example, 256×256), or normalizing the image intensities of both images to a preset range (for example, 0 to 1), so as to facilitate feature comparison between the first image and the second image.
In a specific application scene, the first image and the second image can be reduced by a preset multiple to simplify the image features of the first image and the second image, so that the comparison efficiency of the follow-up sliding comparison is improved. The preset multiple may be 5 times, 10 times, 15 times, or the like, and the specific multiple may be set based on the image complexity of the first image and the second image in actual registration, which is not limited herein.
In a specific application scenario, the reduced first image and second image may be converted to grayscale. A color image consists of three components, i.e. it is a three-channel image, and processing a color image often requires handling the three channels in turn, which is time-consuming. To speed up the comparison of the first image and the second image, the amount of data to be processed must be reduced. Therefore, in this step the RGB three-channel color image is converted by graying into a single-channel grayscale image, reducing the data volume of the sliding comparison and improving the efficiency of the subsequent comparison of the first image and the second image.
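A minimal sketch of this preprocessing, assuming OpenCV; the shrink factor of 10 is only one of the example multiples above, not mandated by the patent:

```python
# Sketch of step S22: shrink by a preset multiple, then convert the
# three-channel color image to a single-channel grayscale image.
import cv2

def preprocess(img, factor=10):
    h, w = img.shape[:2]
    small = cv2.resize(img, (max(1, w // factor), max(1, h // factor)),
                       interpolation=cv2.INTER_AREA)  # shrink
    return cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)    # 3 channels -> 1
```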
Step S23, performing jump sliding comparison on the first image on the second image, and acquiring a plurality of reference areas with the same size as the first image on the second image.
In a specific application scene, a first image with a smaller size is used as a template, and sliding comparison is performed on a second image with a larger size, so that a matching area closest to the template on the second image is searched.
Referring to fig. 3, fig. 3 is a schematic diagram of sliding comparison in step S23.
The first image 01 with smaller size is compared with the second image 02 with larger size in a sliding manner, so as to obtain a plurality of reference areas 03 on the second image 02, which are related to the first image 01. Wherein the size of the reference area 03 is the same as the first image 01.
In this embodiment, the first image 01 is translated horizontally row by row over the second image 02, yielding a plurality of reference areas 03 arranged side by side; in other embodiments, the first image 01 may be translated and compared under other sliding rules, which is not limited herein.
In a specific application scenario, the first image 01 can be subjected to jump sliding comparison on the second image 02, so as to obtain a plurality of reference areas 03 arranged at intervals on the second image 02, thereby reducing the data size of sliding comparison and improving the template matching efficiency.
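The jump sliding can be pictured as sampling candidate windows at a stride instead of at every pixel; in this sketch the stride value is an assumed parameter, chosen per application:

```python
# Sketch of jump sliding comparison: enumerate reference areas of the
# template's size at a fixed stride, trading density for speed.
def jump_windows(second_img, tmpl_h, tmpl_w, stride=8):
    rows, cols = second_img.shape[:2]
    for y in range(0, rows - tmpl_h + 1, stride):
        for x in range(0, cols - tmpl_w + 1, stride):
            yield (x, y), second_img[y:y + tmpl_h, x:x + tmpl_w]
```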
And step S24, calculating the correlation degree between the first image and each reference area by using the normalized cross correlation based on the number of pixel points, the pixel point coordinates, the pixel gray values, the pixel average value and the pixel standard deviation of the first image and each reference area, and taking the reference area with the highest correlation degree as a matching area.
When the first image is slid over the second image, the correlation between each acquired reference area and the first image is calculated. The correlation may be evaluated with the NCC (Normalized Cross-Correlation) algorithm as the evaluation criterion.
Specifically, the correlation degree between the first image and each reference region may be calculated using the normalized cross correlation based on the number of pixels, the coordinates of the pixels, the pixel gray value, the pixel average value, and the pixel standard deviation of the first image and each reference region, and the reference region having the highest correlation degree may be used as the matching region.
The NCC algorithm is a statistical method for judging whether two groups of data are related, and is a common correlation evaluation criterion. Its calculation formula is as follows:

NCC(f, t) = (1/n) * Σ_(x,y) (f(x,y) - μf)(t(x,y) - μt) / (σf * σt)   (1)

where n is the number of pixels, f(x,y) is the gray value of the first image at pixel coordinates (x,y), t(x,y) is the gray value of the reference area of the second image at (x,y), μf and μt are the pixel averages of the first image and the reference area, and σf and σt are their pixel standard deviations.
The correlation between the first image and each reference area is calculated by the above formula, and the reference area with the largest correlation is taken as the matching area closest to the first image. The subsequent registration process treats the matching area as the region of the second image in which the target object of the first image is located.
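The per-window score of formula (1) can be written directly, and OpenCV's TM_CCOEFF_NORMED template-matching mode computes an equivalent dense score map; both are shown below as a sketch, with grayscale float inputs assumed:

```python
# Sketch of the NCC score of formula (1) for one reference area.
import numpy as np

def ncc(template, window):
    f = template.astype(np.float64)   # first image (template)
    t = window.astype(np.float64)     # one reference area, same size
    num = ((f - f.mean()) * (t - t.mean())).sum()
    return float(num / (f.size * f.std() * t.std()))

# Dense equivalent over the whole second image with OpenCV:
#   scores = cv2.matchTemplate(second_gray, first_gray, cv2.TM_CCOEFF_NORMED)
#   _, best, _, top_left = cv2.minMaxLoc(scores)  # best matching area
```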
And S25, extracting feature points of the first image and the second image through a speeded-up robust features (SURF) algorithm, to obtain the first feature points corresponding to the first image and the second feature points corresponding to the matching area of the second image.
Feature points are extracted from the first image and from the matching area of the second image through the speeded-up robust features (SURF) algorithm, obtaining the first feature points corresponding to the first image and the second feature points within the matching area of the second image.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating feature point extraction of the acceleration robust feature algorithm according to the present embodiment. The method comprises the following steps:
Step S251, constructing a Hessian matrix.
The Hessian matrix is the square matrix of second-order partial derivatives of a multivariate function and describes its local curvature. The purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image, forming the basis for subsequent feature extraction; a Hessian matrix can be computed for every pixel.
In this step, Hessian matrices are constructed for the first image and for the matching area of the second image, respectively.
Step S252, constructing a scale space.
In the SURF algorithm, images in different groups (octaves) have the same size, but the box-filter template size used increases from group to group; within one group, filters of the same size are used across layers, with gradually increasing blur coefficients. Scale spaces are thus constructed for the first image and for the matching area of the second image respectively, the first image and the matching area being of the same size.
And step S253, positioning the characteristic points.
Each pixel processed by the Hessian matrix, in the first image and in the matching area, is compared with the 26 neighbors in its two-dimensional image space and scale-space neighborhood to preliminarily locate key points; key points with weak energy and wrongly located key points are then filtered out, and the final stable feature points are retained, thereby locating the first feature points of the first image and the second feature points within the matching area of the second image.
And step S254, feature point main direction distribution.
In the SURF algorithm, the main direction of a feature point is assigned by collecting statistics of the Haar wavelet features within a circular neighborhood of the feature point. Specifically, within the circular neighborhood of each feature point, the sums of the horizontal and vertical Haar wavelet responses of all points inside a 60-degree sector are computed; the sector is then rotated at fixed intervals and the Haar wavelet responses are counted again, and after a full 360-degree rotation, the direction of the sector with the largest value is taken as the main direction of the feature point.
And S255, generating a characteristic point descriptor.
A 4×4 block of rectangular sub-regions is taken around each first feature point and each second feature point, with the block oriented along the main direction of the corresponding feature point. In each sub-region, the Haar wavelet features of 25 sample points are counted in the horizontal and vertical directions, both taken relative to the main direction. The Haar wavelet features comprise four values: the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses and the sum of absolute vertical responses, so that the descriptors of the first feature points and the second feature points are generated (4×4 sub-regions × 4 values = 64 dimensions).
And step S256, matching the characteristic points.
The matching degree of each feature point pair is determined by calculating the Euclidean distance between each first feature point and each second feature point; the shorter the Euclidean distance, the better the match between the two feature points.
The SURF algorithm reduces the dimensionality of the feature point descriptors, thereby speeding up feature point extraction and feature point description.
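The matching degree of step S256 reduces to a distance between the 64-dimensional descriptors; a one-line sketch:

```python
# Sketch: Euclidean distance between two SURF descriptors; the smaller
# the distance, the better the two feature points match.
import numpy as np

def descriptor_distance(d1, d2):
    return float(np.linalg.norm(d1 - d2))
```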
And S26, judging whether a second feature point exists within the preset area range of each first feature point; if so, calculating whether the square difference between the feature vector of the first feature point and the feature vector of the second feature point meets a first preset condition; and if so, the matching is successful.
After the first feature points of the first image and the second feature points of the matching area are acquired, all first feature points are matched against all second feature points. Specifically, it is judged whether a second feature point exists in the matching area within the preset area range of each first feature point; if so, it is calculated whether the square difference between the feature vector of the first feature point and the feature vector of the second feature point meets a first preset condition; if it does, the matching succeeds, otherwise it fails. The first preset condition may be a comparison against a first threshold, whose specific value may be set for the actual application and is not limited herein.
In a specific application scenario, the preset area range may be set based on actual application, and the smaller the preset area range is, the more accurate the feature point matching is, which is not limited herein.
In a specific application scenario, when a square difference between a certain second feature point feature vector and a first feature point feature vector meets a first preset condition, but the second feature point is not in a matching area corresponding to a preset area range of the first feature point, matching between the second feature point and the first feature point fails.
In a specific application scenario, when matching a first feature point, the number of second feature points within its preset area range may first be obtained. If the number is 1, the step of calculating whether the square difference between the feature vector of the first feature point and the feature vector of the second feature point meets the first preset condition is executed; if the condition is met, the first feature point and the second feature point are successfully matched, otherwise the matching fails.
If the number of second feature points is greater than or equal to 2, it is calculated whether the quotient of the smallest square difference of feature vectors among the candidate first/second feature point pairs and the second-smallest square difference meets a second preset condition; if so, the first feature point is successfully matched with the corresponding second feature point, otherwise the matching fails. The second preset condition may be a comparison against a second threshold, whose specific value may be set for the actual application and is not limited herein.
In a specific application scenario, feature point matching may proceed as follows: a first feature point is taken, and it is judged whether a second feature point exists within its preset area range; if not, another first feature point is taken and matching is repeated. If one exists and the number of second feature points within the preset area range is 1, it is calculated whether the square difference between the two feature vectors meets the first preset condition; if the number is larger, it is calculated whether the quotient between the smallest and the second-smallest square difference of feature vectors meets the second preset condition. After matching of this first feature point finishes, the remaining first feature points are processed in the same way.
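The procedure above can be sketched as follows. The radius (the "preset area range") and both thresholds are illustrative values, not taken from the patent, and keypoints of both images are assumed to share the matching area's coordinate frame:

```python
# Sketch of the region-constrained matching of step S26.
import numpy as np

def match_with_region_limit(kp1, des1, kp2, des2,
                            radius=20.0, t1=0.1, t2=0.7):
    pts2 = np.array([k.pt for k in kp2])
    matches = []                            # (index1, index2, square diff)
    for i, k in enumerate(kp1):
        near = np.where(np.linalg.norm(pts2 - k.pt, axis=1) <= radius)[0]
        if near.size == 0:
            continue                        # no candidate in the region
        sq = np.sum((des2[near] - des1[i]) ** 2, axis=1)
        order = np.argsort(sq)
        if near.size == 1:
            if sq[order[0]] < t1:           # first preset condition
                matches.append((i, int(near[order[0]]), sq[order[0]]))
        elif sq[order[0]] / sq[order[1]] < t2:  # second preset condition
            matches.append((i, int(near[order[0]]), sq[order[0]]))
    return matches
```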
According to the method, the accuracy of feature point pairing can be effectively improved when the image features are not obvious or repeated, and the registration accuracy is improved.
And S27, correspondingly dividing the first image and the matching area into a plurality of equal-part areas, and calculating to obtain a transformation relation by utilizing the position information of a group of point pairs with the minimum square difference in the feature vectors of the feature point pairs successfully matched in each equal-part area.
After the feature point matching is finished, the first image and the matching area are correspondingly divided into a plurality of equal-part areas, and the transformation relation is calculated by utilizing the position information of a group of point pairs with the smallest square difference in the feature point pair feature vector which are successfully matched in each equal-part area. The feature point pairs successfully matched are screened through the step so as to be uniformly distributed, and therefore the accuracy of the transformation relation is improved.
In a specific application scenario, the first image and the matching area may be divided into m×n equal parts, and the transformation relation is calculated using the position information of the point pair with the smallest feature vector square difference among the successfully matched feature point pairs in each equal-part region.
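A sketch of this block-wise selection, reusing the match triples from the sketch above; m = n = 2 reflects the minimum of 4 regions mentioned in the disclosure and is only an example:

```python
# Keep, per grid cell of the first image, the matched pair with the
# smallest square difference, so the pairs feeding the transform are
# spread evenly across the image.
def select_block_pairs(matches, kp1, width, height, m=2, n=2):
    best = {}
    for i, j, sq in matches:                 # from match_with_region_limit
        x, y = kp1[i].pt
        cell = (min(int(y * m / height), m - 1),
                min(int(x * n / width), n - 1))
        if cell not in best or sq < best[cell][2]:
            best[cell] = (i, j, sq)
    return [(i, j) for i, j, _ in best.values()]
```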
The calculation formula of the transformation relation is as follows:

U = Hinv * X   (2)

Expanding formula (2) in homogeneous coordinates gives:

[u]   [A B C]   [x]
[v] = [D E F] * [y]   (3)
[1]   [G H I]   [1]

wherein u, v are pixel coordinates of the first image and x, y are pixel coordinates of the second image. A, B, C, D, E, F, G, H, I are transformation parameters, unknown at this point; solving for them yields the specific transformation relation used to register the first image with the matching area.

Eliminating the homogeneous scale, formula (3) is converted to obtain formula (4):

A*x + B*y + C - G*x*u - H*y*u = I*u
D*x + E*y + F - G*x*v - H*y*v = I*v   (4)

Setting I to 1 and substituting the coordinates of at least 4 pairs of successfully matched feature points into formula (4) yields a system of at least 8 equations:

A*xk + B*yk + C - G*xk*uk - H*yk*uk = uk
D*xk + E*yk + F - G*xk*vk - H*yk*vk = vk,   k = 1, ..., 4

Clearly, once the value of I is set, 8 equations are obtained from the coordinates of the 4 feature point pairs, i.e. 8 feature points, so the 8 transformation parameters A, B, C, D, E, F, G, H can be solved from this system, giving the calculation formula of the specific transformation relation.

Here Hinv = [A B C D E F G H]' is the parameter vector of the inverse registration matrix of the transformation relation; writing the stacked system as X * Hinv = U gives Hinv = X \ U, from which the calculation formula of the transformation relation is obtained.
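A sketch of solving this system with NumPy's least-squares routine, which mirrors the MATLAB-style Hinv = X \ U; the pair format ((u, v), (x, y)) is an assumed convention:

```python
# Sketch of formula (4) with I = 1: each matched pair ((u, v) in the
# first image, (x, y) in the second) contributes two rows; with 4+
# pairs the 8 parameters A..H follow by least squares.
import numpy as np

def solve_transform(pairs):
    X, U = [], []
    for (u, v), (x, y) in pairs:
        X.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        X.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        U += [u, v]
    p, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(U), rcond=None)
    A, B, C, D, E, F, G, H = p
    return np.array([[A, B, C], [D, E, F], [G, H, 1.0]])
```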
Each pixel of the first image may then be transformed by the transformation relation to register it with the corresponding pixel in the matching area.
And step S28, registering the target object of the first image into the second image by utilizing the transformation relation so as to realize registration of the first image and the second image.
Registering each pixel point of the target object of the first image into the second image by utilizing the transformation relation obtained in the previous step so as to realize the registration of the first image and the second image.
In a specific application scenario, any one pixel point on the first image can be registered to the second image in a transformation mode based on the transformation relation, so that image registration of each pixel point of the first image and a corresponding pixel point of the second image is achieved. In a specific application scenario, any pixel point in the matching area in the second image may be transformed and registered into the first image based on the transformation relationship, so as to realize image registration between each pixel point of the first image and the corresponding pixel point of the second image. And are not limited herein.
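A sketch of this warping step, assuming OpenCV and a 3×3 matrix M that maps first-image coordinates to second-image coordinates (if the matrix from the earlier sketch maps the other way, pass its inverse; if matching was done on a cropped region, the region's top-left offset must also be applied):

```python
# Sketch of step S28: resample the first image onto the second image's
# pixel grid through the homogeneous transform M.
import cv2

def register(first_img, second_img, M):
    h, w = second_img.shape[:2]
    warped = cv2.warpPerspective(first_img, M, (w, h))
    # blend for visual inspection of the registration result
    return cv2.addWeighted(second_img, 0.5, warped, 0.5, 0)
```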
With the above scheme, the image registration method of this embodiment reduces the first image and the second image by a preset multiple and applies graying to simplify the image features, cutting the data volume of template matching and alleviating the time consumption of the template matching algorithm. A matching area most correlated with the first image is then found on the second image by template matching, and the first and second feature points are extracted with the speeded-up robust features algorithm; after the feature points are matched within the preset area range of each first feature point, the first image and the matching area are divided into a plurality of equal-part areas and the transformation relation is calculated from the successfully matched feature point pairs in each area. Selecting one matched point pair per block spreads the point pairs evenly, avoids large registration errors in parts of the image, and improves the final registration effect. In addition, by combining template matching with feature point matching, images of different resolutions can be registered automatically without manual participation, improving both the efficiency and the effect of image registration.
Based on the same inventive concept, the present invention further provides an electronic device that can implement the image registration method of any of the above embodiments. Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an embodiment of the electronic device provided by the present invention; the electronic device includes a processor 51 and a memory 52.
The processor 51 is configured to execute program instructions stored in the memory 52 to implement the steps of any of the above-described image registration method embodiments. In one specific implementation scenario, the electronic device may include, but is not limited to, a microcomputer, a server, a mobile device such as a notebook computer, a tablet computer, etc., and is not limited herein.
In particular, the processor 51 is configured to control itself and the memory 52 to implement the steps of any of the above embodiments of the image registration method. The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal processing capability. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or any conventional processor. In addition, the processor 51 may be implemented jointly by a plurality of integrated circuit chips.
By means of the scheme, accurate registration of the images can be achieved.
Based on the same inventive concept, the present invention further provides a computer-readable storage medium; referring to FIG. 6, FIG. 6 is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present invention. The computer-readable storage medium 60 stores at least one piece of program data 61, the program data 61 being used to implement any of the methods described above. In one embodiment, the computer-readable storage medium 60 includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In the several embodiments provided in the present invention, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the invention, in essence or a part contributing to the prior art or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a storage medium.
The foregoing description is only of embodiments of the present invention, and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the present invention.

Claims (11)

1. A method of registration of images, the method comprising:
Acquiring a first image and a second image to be registered, wherein the size of the first image is smaller than that of the second image;
Sliding the first image on the second image, and finding a matching area which has highest correlation degree with the first image and is correlated with the first image size from the second image;
Extracting feature points of the first image and the second image in the matching area respectively to obtain first feature points corresponding to the first image and second feature points corresponding to the second image;
determining that the matching is successful in response to the existence of a matched second characteristic point in the preset area range of the first characteristic point;
calculating to obtain a transformation relation between the first image and the second image by using the position information of the successfully matched feature points;
Registering the first image with the second image using the transformation relationship;
The step of calculating the transformation relationship between the first image and the second image by using the position information of the successfully matched feature points comprises the following steps:
dividing the first image and the matching area correspondingly into a plurality of equal-part areas; and
calculating the transformation relation by using the position information of the group of point pairs whose feature vectors have the smallest square difference among the feature point pairs successfully matched in each equal-part area.
2. The method of registration of images of claim 1 wherein the step of sliding the first image over the second image to find a matching region from the second image that has the highest correlation with the first image and that is correlated with the first image size comprises:
Sliding the first image on the second image to obtain a plurality of reference areas with the same size as the first image on the second image;
Calculating the correlation degree between the first image and each reference area by using normalized cross correlation based on the number of pixel points, pixel point coordinates, pixel gray values, pixel average values and pixel standard deviations of the first image and each reference area;
and taking the reference area with the highest correlation degree as the matching area.
3. The method of registration of images of claim 2, wherein before the step of sliding the first image over the second image to obtain a plurality of reference areas on the second image with the same size as the first image, the method comprises:
and reducing the first image and the second image by a preset multiple, and carrying out graying treatment on the reduced first image and second image.
4. The method of registration of images according to claim 2, wherein the step of sliding the first image over the second image to obtain a plurality of reference areas on the second image related to the first image size comprises:
And performing jump sliding comparison on the first image on the second image to obtain a plurality of reference areas on the second image, wherein the reference areas are related to the size of the first image.
5. The method according to claim 1, wherein the step of extracting feature points of the first image and the second image in the matching region to obtain a first feature point corresponding to the first image and a second feature point corresponding to the second image includes:
and extracting feature points of the first image and the second image respectively through an acceleration robust feature algorithm to obtain first feature points corresponding to the first image and second feature points corresponding to the second image in a matching area.
6. The method of registration of images according to claim 1, wherein the step of determining that the matching is successful in response to there being a matching second feature point within a preset area of the first feature point comprises:
Judging whether second characteristic points correspondingly exist in a preset area range of each first characteristic point or not;
If so, calculating whether the square difference between the first characteristic point characteristic vector and the second characteristic point characteristic vector meets a first preset condition;
If so, the match is successful.
7. The method of registration of images according to claim 6, wherein the step of determining whether a second feature point exists within a preset area of the first feature point further comprises:
Acquiring the number of second characteristic points corresponding to the first characteristic points in a preset area range;
If the number is 1, executing the step of calculating whether the square difference between the first feature point and the second feature point meets a first preset condition;
If the number is greater than or equal to 2, judging whether the quotient between the smallest square difference of the feature vectors of a first/second feature point pair and the second-smallest square difference meets a second preset condition, and if so, the matching is successful.
8. The method of registration of images of claim 1, wherein the number of the plurality of equal-part areas is greater than or equal to 4.
9. The method of registration of images according to claim 1, wherein the step of registering pixels of the first image into the second image using the transformation relationship comprises:
Registering a target object of the first image into the second image by utilizing the transformation relation so as to realize registration of the first image and the second image.
10. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the method of registration of images according to any one of claims 1 to 9.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores program data executable to implement the image registration method according to any one of claims 1-9.
CN202011217920.0A 2020-11-04 2020-11-04 Image registration method, electronic device and computer readable storage medium Active CN112435283B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011217920.0A CN112435283B (en) 2020-11-04 2020-11-04 Image registration method, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112435283A (en) 2021-03-02
CN112435283B (en) 2024-12-06

Family

ID=74695305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011217920.0A Active CN112435283B (en) 2020-11-04 2020-11-04 Image registration method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112435283B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409371B (en) * 2021-06-25 2023-04-07 浙江商汤科技开发有限公司 Image registration method and related device and equipment
CN113409370B (en) * 2021-06-25 2023-04-18 浙江商汤科技开发有限公司 Image registration method and related device and equipment
CN114612465A (en) * 2022-03-31 2022-06-10 联想(北京)有限公司 A data processing method, device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513038A (en) * 2014-10-20 2016-04-20 网易(杭州)网络有限公司 Image matching method and mobile phone application test platform

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2685842A1 (en) * 1991-12-30 1993-07-02 Philips Electronique Lab Method of registering images
CN101051386B (en) * 2007-05-23 2010-12-08 北京航空航天大学 An Accurate Registration Method for Multiple Depth Images
US8385689B2 (en) * 2009-10-21 2013-02-26 MindTree Limited Image alignment using translation invariant feature matching
CN102194225A (en) * 2010-03-17 2011-09-21 中国科学院电子学研究所 Automatic registering method for coarse-to-fine space-borne synthetic aperture radar image
US8571328B2 (en) * 2010-08-16 2013-10-29 Adobe Systems Incorporated Determining correspondence between image regions
CN102914549B (en) * 2012-09-10 2015-03-25 中国航天科技集团公司第五研究院第五一三研究所 Optical image matching detection method aiming at satellite-borne surface exposed printed circuit board (PCB) soldering joint quality
CN102982543A (en) * 2012-11-20 2013-03-20 北京航空航天大学深圳研究院 Multi-source remote sensing image registration method
US9025822B2 (en) * 2013-03-11 2015-05-05 Adobe Systems Incorporated Spatially coherent nearest neighbor fields
CN103679714B * 2013-12-04 2016-05-18 中国资源卫星应用中心 Optical and SAR automatic image registration method based on gradient cross-correlation
CN105447842B * 2014-07-11 2019-05-21 阿里巴巴集团控股有限公司 Image matching method and device
CN105869153B (en) * 2016-03-24 2018-08-07 西安交通大学 The non-rigid Facial Image Alignment method of the related block message of fusion
CN106529591A (en) * 2016-11-07 2017-03-22 湖南源信光电科技有限公司 Improved MSER image matching algorithm
US9990753B1 (en) * 2017-01-11 2018-06-05 Macau University Of Science And Technology Image stitching
CN106981076A (en) * 2017-01-12 2017-07-25 深圳市大德激光技术有限公司 Edge concentration algorithm for high-precision rapid image matching
CN108537830A (en) * 2017-03-02 2018-09-14 广州康昕瑞基因健康科技有限公司 Method for registering images and system and image taking alignment method and system
JP6790995B2 (en) * 2017-04-27 2020-11-25 富士通株式会社 Collation device, collation method and collation program
CN107220997B (en) * 2017-05-22 2020-12-25 成都通甲优博科技有限责任公司 Stereo matching method and system
CN109919247B (en) * 2019-03-18 2021-02-23 北京石油化工学院 Method, system and equipment for matching characteristic points in dangerous chemical stacking binocular ranging
CN110473238B (en) * 2019-06-25 2021-08-27 浙江大华技术股份有限公司 Image registration method and device, electronic equipment and storage medium
CN111598177A (en) * 2020-05-19 2020-08-28 中国科学院空天信息创新研究院 An Adaptive Maximum Sliding Window Matching Method for Low Overlap Image Matching


Also Published As

Publication number Publication date
CN112435283A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN112435283B (en) Image registration method, electronic device and computer readable storage medium
CN110223330B (en) Registration method and system for visible light and infrared images
CN111340109B (en) Image matching method, device, equipment and storage medium
KR101753360B1 (en) A feature matching method which is robust to the viewpoint change
US9767383B2 (en) Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image
Ishikura et al. Saliency detection based on multiscale extrema of local perceptual color differences
CN104537376B One kind identification platform calibration method and related device and system
US20190171909A1 (en) Selecting keypoints in images using descriptor scores
CN106485651A (en) The image matching method of fast robust Scale invariant
KR20150053438A (en) Stereo matching system and method for generating disparity map using the same
CN117173437A (en) Multi-modal remote sensing image hybrid matching method and system with multi-dimensional directional self-similar features
Katramados et al. Real-time visual saliency by division of gaussians
CN111709426B (en) Diatom recognition method based on contour and texture
JP6110174B2 (en) Image detection apparatus, control program, and image detection method
CN119090926A (en) A difference map registration method based on feature point matching
KR101741761B1 (en) A classification method of feature points required for multi-frame based building recognition
CN115423855B (en) Template matching method, device, equipment and medium for image
CN117437262A (en) Target motion estimation method, device, equipment and storage medium
Nayak et al. A comparative study on feature descriptors for relative pose estimation in connected vehicles
CN108470351B (en) Method, device and storage medium for measuring body excursion using image patch tracking
CN115147389B (en) Image processing method, device and computer readable storage medium
CN103034859B Method and device for obtaining a gesture model
Wu et al. An accurate feature point matching algorithm for automatic remote sensing image registration
JP6717769B2 (en) Information processing device and program
CN115760996A (en) Corner screening method and device, computer equipment and computer-readable storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant