Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of an image registration method according to the present invention.
Step S11: acquire a first image and a second image to be registered, wherein the size of the first image is smaller than that of the second image.
A first image and a second image to be registered are acquired. In a specific application scenario, the first image and the second image contain the same target object, but capture it from different angles or in different reference frames. For example, the first image may be a front view of a cup and the second image an oblique view of the same cup, and the front view and the oblique view are to be registered.
In a specific application scenario, the target object has the same size in the first image and the second image. If the sizes of the target object in the two images differ, the first image or the second image is resized so that the target object has the same size in both. In this embodiment, once the target object has the same size in both images, the first image is smaller than the second image.
Step S12: slide the first image over the second image, and find in the second image the matching area that has the highest correlation with the first image and the same size as the first image.
The smaller first image is slid over the second image to find the matching area that has the highest degree of correlation with the first image and the same size as the first image. In a specific application scenario, the first image may be slid laterally across the second image row by row to find the matching area. In a specific application scenario, the first image may be slid longitudinally column by column. The first image may also be matched by a looping sliding pattern over the second image. The specific sliding scheme is not limited herein.
During sliding, the first image serves as the reference, and the matching area with the highest correlation to the first image and the same size as the first image is searched for in the second image. Because the matching area has the highest degree of correlation with the first image and the two have the same size, the target object in the matching area can be accurately brought into correspondence with the target object in the first image.
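As an illustrative sketch only (not part of the claimed method), the sliding search of step S12 can be realized with OpenCV's template matching; the function name find_matching_region and the choice of the normalized correlation score are assumptions for the example.

```python
# Minimal sketch of step S12, assuming OpenCV is available.
import cv2

def find_matching_region(first_img, second_img):
    """Slide first_img over second_img and return the region of the second
    image that correlates best with the first image, plus its top-left corner."""
    scores = cv2.matchTemplate(second_img, first_img, cv2.TM_CCORR_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)   # position of the highest score
    h, w = first_img.shape[:2]
    return second_img[y:y + h, x:x + w], (x, y)
```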
Step S13: extract feature points from the first image and from the matching area of the second image, respectively, to obtain first feature points corresponding to the first image and second feature points corresponding to the second image.
Feature points are extracted from the first image to obtain the first feature points corresponding to the first image. Feature points are extracted from the matching region of the second image to obtain the second feature points corresponding to the matching region in the second image.
In a specific application scenario, feature points can be extracted from the first image and from the matching area of the second image by the SIFT (Scale-Invariant Feature Transform) algorithm to obtain the first and second feature points. In a specific application scenario, the Harris feature point extraction algorithm may be used instead. Feature points may also be extracted by the SURF (Speeded Up Robust Features) algorithm. The feature point extraction algorithm is not limited herein.
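As one hedged sketch of step S13, SIFT is used below via OpenCV (cv2.SIFT_create in recent releases); Harris or SURF could be substituted as noted above.

```python
import cv2

def extract_feature_points(first_img, match_region):
    """Extract the first feature points from the first image and the second
    feature points from the matching region of the second image."""
    sift = cv2.SIFT_create()
    kp1, desc1 = sift.detectAndCompute(first_img, None)      # first feature points
    kp2, desc2 = sift.detectAndCompute(match_region, None)   # second feature points
    return (kp1, desc1), (kp2, desc2)
```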
Step S14: in response to a matched second feature point existing within a preset area range of a first feature point, determine that the matching is successful.
The plurality of first feature points of the first image are matched against the plurality of second feature points of the matching area on the second image. After matching, it is judged whether a successfully matched second feature point exists within the preset area range of each first feature point; if such a second feature point exists, the first feature point and that second feature point are successfully matched.
In this step, constraining the candidate second feature points to the preset area range alleviates the mismatches that easily occur in feature point matching when the image contains repeated features, thereby improving the accuracy of image registration.
Step S15: calculate the transformation relation between the first image and the second image using the position information of the successfully matched feature points.
After the first feature points of the first image are matched with the second feature points of the matching area of the second image, the transformation relation between the two images is calculated from the position information of the successfully matched feature point pairs. In a specific application scenario, the position information of a feature point pair may include the coordinates of each feature point in the pair, from which the transformation relation between the first image and the second image is calculated.
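A minimal sketch of step S15 follows, assuming OpenCV's findHomography as one concrete way to compute a transformation from matched coordinates; the embodiment of fig. 2 derives the same kind of transformation analytically in formulas (2) to (5).

```python
import cv2
import numpy as np

def estimate_transform(matched_pairs):
    """Compute the first-to-second-image transformation from the coordinate
    pairs of successfully matched feature points.
    matched_pairs: iterable of ((u, v), (x, y)) coordinate tuples."""
    src = np.float32([p for p, _ in matched_pairs])  # first-image coordinates
    dst = np.float32([q for _, q in matched_pairs])  # second-image coordinates
    H, _ = cv2.findHomography(src, dst)              # needs at least 4 pairs
    return H
```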
Step S16: register the first image and the second image using the transformation relation.
In a specific application scenario, any pixel point of the first image can be transformed and registered into the second image based on the transformation relation, so that each pixel point of the first image is registered with the corresponding pixel point of the second image. Alternatively, any pixel point of the matching area of the second image may be transformed and registered into the first image based on the transformation relation. The direction is not limited herein.
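For illustration, one hedged way to apply the transformation relation of step S16, assuming a 3×3 matrix H that maps first-image coordinates to second-image coordinates:

```python
import cv2

def register_first_to_second(first_img, H, second_shape):
    """Warp every pixel of the first image into the second image's frame."""
    h, w = second_shape[:2]
    return cv2.warpPerspective(first_img, H, (w, h))
```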
According to the above scheme, the image registration method of this embodiment slides the first image over the second image to obtain the matching area with the highest degree of correlation with the first image, thereby coarsely matching the two images, limiting the matching area, and improving the matching accuracy. Feature points are then extracted from the first image and the matching area to obtain the first and second feature points. Taking each first feature point as the center, it is judged whether a matched second feature point exists within the preset area range; if so, the feature point pair is determined to be successfully matched. Finally, the transformation relation between the first image and the second image is calculated from the successfully matched feature point pairs, and the two images are registered using the transformation relation. Because the embodiment uses region-limited feature point matching, the accuracy of feature point matching is effectively improved when the image features are weak or repetitive, which in turn improves registration accuracy.
Referring to fig. 2, fig. 2 is a flowchart of another embodiment of an image registration method according to the present invention.
Step S21: acquire a first image and a second image to be registered, wherein the size of the first image is smaller than that of the second image.
A first image and a second image to be registered are acquired; the two images contain the same target object, and in a specific application scenario the same target object in the two images can be registered and superimposed. The target object may include items, backgrounds, scenes, and the like.
In a specific application scenario, when the target object has the same size in the first image and the second image, the image sizes are compared, the first image being smaller than the second image. In a specific application scenario, when the sizes of the target object differ between the two images, the first image or the second image is resized so that the target object has the same size in both; once the sizes match, the first image is smaller than the second image. In this way, the embodiment can accurately register images of different resolutions.
Step S22: reduce the first image and the second image by a preset factor, and convert the reduced first and second images to grayscale.
In a specific application scenario, to improve the accuracy of subsequent registration, the first and second images may be preprocessed before the sliding comparison. The preprocessing may include adjusting the image size of the first or second image to a preset size (for example, 256×256), or normalizing the image intensities of both images to a preset range (for example, 0 to 1), so as to facilitate feature comparison between the two images.
In a specific application scenario, the first and second images can be reduced by a preset factor to simplify their image features and thereby improve the efficiency of the subsequent sliding comparison. The preset factor may be 5, 10, 15, or the like, and the specific factor may be chosen based on the image complexity of the images actually being registered, which is not limited herein.
In a specific application scenario, the reduced first and second images may be converted to grayscale. A color image consists of three components, i.e., it is a three-channel image, and processing a color image often requires handling the three channels in turn, which is time-consuming. To speed up the comparison of the first and second images, the amount of data to be processed must be reduced. In this step, therefore, grayscale conversion turns the three-channel RGB color image into a single-channel grayscale image, reducing the data volume of the sliding comparison and improving the comparison efficiency of the subsequent steps.
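A minimal sketch of the preprocessing of step S22, assuming OpenCV; the reduction factor of 10 is one of the example values given above.

```python
import cv2

def preprocess(img, factor=10):
    """Shrink the image by a preset factor and convert the three-channel
    color image to a single-channel grayscale image, reducing the data
    volume of the subsequent sliding comparison."""
    h, w = img.shape[:2]
    small = cv2.resize(img, (max(1, w // factor), max(1, h // factor)),
                       interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
```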
Step S23: perform a jump (strided) sliding comparison of the first image over the second image, and acquire a plurality of reference areas on the second image that have the same size as the first image.
In a specific application scenario, the smaller first image is used as a template and slid over the larger second image to search for the matching area on the second image that is closest to the template.
Referring to fig. 3, fig. 3 is a schematic diagram of sliding comparison in step S23.
The smaller first image 01 is compared with the larger second image 02 in a sliding manner to obtain, on the second image 02, a plurality of reference areas 03 related to the first image 01, each reference area 03 having the same size as the first image 01.
In this embodiment, the first image 01 is translated horizontally row by row over the second image 02, yielding a plurality of adjoining, parallel reference areas 03. In other embodiments, the first image 01 may be translated according to other sliding rules, which are not limited herein.
In a specific application scenario, the first image 01 can be compared with the second image 02 by jump sliding, obtaining a plurality of reference areas 03 arranged at intervals on the second image 02, which reduces the data volume of the sliding comparison and improves template matching efficiency.
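A sketch of the jump sliding of step S23; the stride value is an illustrative assumption that trades search resolution for speed.

```python
def reference_regions(second_img, templ_h, templ_w, step=4):
    """Yield reference areas of the first image's size from the second image,
    sampled with a stride (jump sliding) instead of pixel by pixel."""
    H, W = second_img.shape[:2]
    for y in range(0, H - templ_h + 1, step):
        for x in range(0, W - templ_w + 1, step):
            yield (x, y), second_img[y:y + templ_h, x:x + templ_w]
```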
Step S24: calculate the degree of correlation between the first image and each reference area using normalized cross-correlation, based on the number of pixel points, pixel coordinates, pixel gray values, pixel averages, and pixel standard deviations of the first image and each reference area, and take the reference area with the highest correlation as the matching area.
When the first image is slid over the second image, the degree of correlation between each acquired reference area and the first image is calculated. The correlation can be evaluated with the NCC (Normalized Cross-Correlation) algorithm.
Specifically, the correlation degree between the first image and each reference region may be calculated using the normalized cross correlation based on the number of pixels, the coordinates of the pixels, the pixel gray value, the pixel average value, and the pixel standard deviation of the first image and each reference region, and the reference region having the highest correlation degree may be used as the matching region.
The NCC algorithm is a statistical method for judging whether a relation exists between two sets of data and is a common correlation criterion. Its calculation formula is:

NCC = (1/n) * Σ_(x,y) [ (f(x, y) − μ_f) * (t(x, y) − μ_t) ] / (σ_f * σ_t)   (1)

where n is the number of pixel points, f(x, y) is the gray value of the first image at coordinate (x, y), t(x, y) is the gray value of the reference area of the second image at coordinate (x, y), μ_f and μ_t are the pixel averages of the first image and the reference area, and σ_f and σ_t are their respective pixel standard deviations.
The correlation between the first image and each reference area is calculated by the above formula, and the reference area with the largest correlation is taken as the matching area closest to the first image. In the subsequent registration, the matching area is treated as the region of the second image in which the target object of the first image is located.
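A numeric sketch of formula (1) and the selection of step S24, assuming grayscale NumPy arrays of equal size:

```python
import numpy as np

def ncc(f, t):
    """Normalized cross-correlation of formula (1) between the first image f
    and a reference area t of the same size (both grayscale)."""
    f = f.astype(np.float64)
    t = t.astype(np.float64)
    n = f.size
    # The small epsilon guards against flat (zero-variance) regions.
    return ((f - f.mean()) * (t - t.mean())).sum() / (n * f.std() * t.std() + 1e-12)

def best_matching_region(first_img, regions):
    """Pick the reference area with the highest degree of correlation."""
    return max(regions, key=lambda item: ncc(first_img, item[1]))
```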
Step S25: extract feature points from the first image and the second image by the speeded-up robust features algorithm, to obtain the first feature points of the first image and the second feature points in the matching area of the second image.
Feature points are extracted from the first image and from the matching area of the second image by the SURF (Speeded Up Robust Features) algorithm, obtaining the first feature points of the first image and the second feature points in the matching area of the second image.
Referring to fig. 4, fig. 4 is a schematic diagram of feature point extraction by the speeded-up robust features algorithm in this embodiment, which comprises the following steps:
Step S251: construct the Hessian matrix.
The Hessian matrix is a square matrix formed by the second-order partial derivatives of a multivariate function and describes the local curvature of that function. The purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image, laying the foundation for subsequent feature extraction. A Hessian matrix can be computed for each pixel.
In this step, Hessian matrices are constructed for the first image and for the matching area of the second image, respectively.
Step S252: construct the scale space.
In the speeded-up robust features algorithm, the image size remains the same across octaves, but the template size of the box filters used increases gradually from octave to octave; within an octave, filters of the same size are used across layers while the blur coefficient of the filter increases gradually. In this way, scale spaces are constructed for the first image and for the matching area of the second image, the two having the same size.
Step S253: locate the feature points.
Each pixel point of the first image and of the matching area processed by the Hessian matrix is compared with the 26 points in its neighborhood in the two-dimensional image space and the scale space. Key points are located preliminarily, then weak-energy and wrongly located key points are filtered out, and the final stable feature points are retained, thereby locating the first feature points of the first image and the second feature points in the matching area of the second image.
Step S254: assign the main direction of each feature point.
In the speeded-up robust features algorithm, the main direction of a feature point is assigned by counting the Haar wavelet features in its circular neighborhood. Specifically, within the circular neighborhood of each feature point, the sums of the horizontal and vertical Haar wavelet responses of all points inside a 60-degree sector are counted; the sector is then rotated at fixed intervals and the Haar wavelet responses in the area are counted again. After a full 360-degree rotation, the direction of the sector with the largest value is taken as the main direction of the feature point.
Step S255: generate the feature point descriptors.
A 4×4 block of rectangular sub-regions is taken around each first feature point and each second feature point, the orientation of the block being the same as the main direction of the corresponding feature point. For each sub-region, the Haar wavelet features of 25 sample pixels are counted in the horizontal and vertical directions, both taken relative to the main direction. The Haar wavelet features comprise 4 values: the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses, and the sum of absolute vertical responses. In this way the descriptors of the first and second feature points are generated.
Step S256: match the feature points.
The degree of matching between feature points is determined by calculating the Euclidean distance between each first feature point and each second feature point; the shorter the Euclidean distance, the better the match between the two feature points.
The speeded-up robust features algorithm reduces the dimension of the feature point descriptors, speeding up feature point extraction and feature point description.
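A hedged sketch of step S25: SURF ships in OpenCV's contrib xfeatures2d module and may be absent from default builds, so its availability and the Hessian threshold value are assumptions here.

```python
import cv2

def surf_features(first_img, match_region, hessian_threshold=400):
    """Detect SURF keypoints and their 64-dimensional descriptors
    (4x4 sub-regions x 4 Haar wavelet sums per sub-region)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, desc1 = surf.detectAndCompute(first_img, None)     # first feature points
    kp2, desc2 = surf.detectAndCompute(match_region, None)  # second feature points
    return (kp1, desc1), (kp2, desc2)
```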
Step S26: judge whether a second feature point exists within the preset area range of each first feature point; if so, calculate whether the squared difference between the feature vector of the first feature point and that of the second feature point meets a first preset condition, and if it does, the matching is successful.
After the first feature points of the first image and the second feature points of the matching area are acquired, all first feature points are matched against all second feature points. Specifically, it is judged whether a second feature point exists within the part of the matching area corresponding to the preset area range of each first feature point. If so, it is calculated whether the squared difference between the feature vectors of the first and second feature points meets the first preset condition; if it does, the matching succeeds, otherwise the matching fails. The first preset condition may be that the squared difference does not exceed a first threshold, and the specific value of the first threshold may be set for the actual application, which is not limited herein.
In a specific application scenario, the preset area range may be set based on the actual application; the smaller the preset area range, the more precise the feature point matching. It is not limited herein.
In a specific application scenario, even when the squared difference between the feature vector of some second feature point and that of a first feature point meets the first preset condition, if that second feature point is not within the part of the matching area corresponding to the preset area range of the first feature point, the matching between the two fails.
In a specific application scenario, when matching a first feature point, the number of second feature points within its preset area range may be obtained first. If the number is 1, the step of calculating whether the squared difference between the two feature vectors meets the first preset condition is executed; if it does, the first and second feature points are successfully matched, otherwise the matching fails.
If the number of second feature points is greater than or equal to 2, it is calculated whether the quotient of the smallest feature-vector squared difference and the second-smallest feature-vector squared difference among the candidate pairs meets a second preset condition. If so, the first feature point is successfully matched with the second feature point giving the smallest squared difference; otherwise the matching of the first feature point fails. The second preset condition may be that the quotient does not exceed a second threshold, and the specific value of the second threshold may be set for the actual application, which is not limited herein.
In a specific application scenario, the feature point matching may proceed as follows: a first feature point is obtained, and it is judged whether any second feature point exists within its preset area range; if not, another first feature point is obtained and matching is repeated. If so, it is judged whether the number of second feature points within the preset area range is 1. If it is, whether the squared difference between the two feature vectors meets the first preset condition is calculated to perform the matching; if not, whether the quotient of the smallest and the second-smallest feature-vector squared differences meets the second preset condition is calculated to perform the matching. After the matching of this first feature point is finished, the remaining first feature points are processed in the same way, as shown in the sketch below.
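The following sketch illustrates the region-limited matching flow of step S26. The radius and the two thresholds are illustrative assumptions, and cv2.KeyPoint-style objects with a .pt coordinate attribute are assumed as input.

```python
import numpy as np

def match_with_region_limit(kp1, desc1, kp2, desc2,
                            radius=20.0, t1=0.1, t2=0.7):
    """For each first feature point, consider only second feature points
    within the preset area range (radius). With one candidate, apply the
    first preset condition (squared difference at most t1); with several,
    apply the second preset condition (smallest / second-smallest at most t2)."""
    pairs = []
    for i, p in enumerate(kp1):
        cand = [j for j, q in enumerate(kp2)
                if np.hypot(q.pt[0] - p.pt[0], q.pt[1] - p.pt[1]) <= radius]
        if not cand:
            continue                                  # no candidate: matching fails
        d = [float(((desc1[i] - desc2[j]) ** 2).sum()) for j in cand]
        best_idx = int(np.argmin(d))
        if len(cand) == 1:
            if d[0] <= t1:                            # first preset condition
                pairs.append((i, cand[0], d[0]))
        else:
            best = d[best_idx]
            second = min(d[k] for k in range(len(d)) if k != best_idx)
            if best / (second + 1e-12) <= t2:         # second preset condition
                pairs.append((i, cand[best_idx], best))
    return pairs
```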
By this method, the accuracy of feature point pairing is effectively improved when the image features are weak or repetitive, improving registration accuracy.
Step S27: divide the first image and the matching area correspondingly into a plurality of equal sub-regions, and calculate the transformation relation using the position information of the pair with the smallest feature-vector squared difference among the successfully matched feature point pairs in each sub-region.
After the feature point matching is finished, the first image and the matching area are divided correspondingly into a plurality of equal sub-regions, and the transformation relation is calculated using the position information of the pair with the smallest feature-vector squared difference among the successfully matched pairs in each sub-region. This step screens the successfully matched feature point pairs so that they are distributed uniformly, improving the accuracy of the transformation relation.
In a specific application scenario, the first image and the matching area may be divided into m×n equal parts, and the transformation relation is calculated using the position information of the pair with the smallest feature-vector squared difference among the successfully matched pairs in each part.
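A sketch of the uniform selection of step S27, assuming the pair list produced by the matching sketch above (indices into kp1/kp2 plus the squared difference); m and n are illustrative block counts.

```python
def select_uniform_pairs(pairs, kp1, first_shape, m=4, n=4):
    """Divide the first image into m x n equal blocks and keep, per block,
    the matched pair with the smallest feature-vector squared difference,
    so that the retained pairs are spread uniformly over the image."""
    h, w = first_shape[:2]
    best = {}
    for i, j, dist in pairs:
        u, v = kp1[i].pt                   # first-image coordinates
        cell = (min(int(v * m / h), m - 1), min(int(u * n / w), n - 1))
        if cell not in best or dist < best[cell][2]:
            best[cell] = (i, j, dist)
    return list(best.values())
```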
The calculation formula of the transformation relation is as follows:

U = Hinv * X   (2)

where U = [u, v, 1]' contains the pixel coordinates (u, v) of the first image, X = [x, y, 1]' contains the pixel coordinates (x, y) of the second image, and Hinv is a 3×3 matrix with the transformation parameters A, B, C, D, E, F, G, H, I, which are unknown at this point. Expanding formula (2) gives:

u = (A*x + B*y + C) / (G*x + H*y + I)
v = (D*x + E*y + F) / (G*x + H*y + I)   (3)

Solving for the transformation parameters then yields the specific transformation relation, with which the first image and the matching area are registered. Multiplying formula (3) through by the denominator gives formula (4):

u * (G*x + H*y + I) = A*x + B*y + C
v * (G*x + H*y + I) = D*x + E*y + F   (4)

Assuming that I is 1, substituting the coordinates of at least 4 pairs of successfully matched feature points into formula (4) gives at least 8 equations:

A*x_k + B*y_k + C − G*x_k*u_k − H*y_k*u_k = u_k
D*x_k + E*y_k + F − G*x_k*v_k − H*y_k*v_k = v_k,   k = 1, …, 4   (5)

Obviously, after the value of I is set, 8 equations are obtained from the coordinates of the 4 feature point pairs, that is, 8 feature points, so the 8 transformation parameters A, B, C, D, E, F, G, H can be solved from these equations, giving the calculation formula of the specific transformation relation.

Thus, writing the parameter vector as Hinv = [A B C D E F G H]' and stacking the equations of formula (5) as X * Hinv = U (with X and U now denoting the stacked coefficient matrix and coordinate vector), the parameters follow as Hinv = X\U, and the calculation formula of the transformation relation is obtained from the above.
Based on the transformation relation, each pixel of the first image can be transformed and registered to the corresponding pixel in the matching area.
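The stacked linear system of formula (5) can be solved numerically as in the hedged sketch below, with NumPy's least-squares solver playing the role of X\U; the point ordering in point_pairs is an assumption for the example.

```python
import numpy as np

def solve_transform(point_pairs):
    """Build X*h = U from matched pairs ((u, v) in the first image,
    (x, y) in the second image) with I = 1, and solve for h = [A..H]."""
    X, U = [], []
    for (u, v), (x, y) in point_pairs:       # at least 4 pairs -> 8 equations
        X.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        X.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        U += [u, v]
    h, *_ = np.linalg.lstsq(np.asarray(X, float), np.asarray(U, float), rcond=None)
    A, B, C, D, E, F, G, Hp = h
    return np.array([[A, B, C], [D, E, F], [G, Hp, 1.0]])   # maps (x, y) -> (u, v)
```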
Step S28: register the target object of the first image into the second image using the transformation relation, so as to realize the registration of the first image and the second image.
Each pixel point of the target object of the first image is registered into the second image using the transformation relation obtained in the previous step, realizing the registration of the first image and the second image.
In a specific application scenario, any pixel point of the first image can be transformed and registered into the second image based on the transformation relation, so that each pixel point of the first image is registered with the corresponding pixel point of the second image. Alternatively, any pixel point of the matching area of the second image may be transformed and registered into the first image based on the transformation relation. The direction is not limited herein.
With the above scheme, the image registration method of this embodiment reduces the first and second images by the preset factor and converts them to grayscale to simplify the image features, reducing the data volume of template matching and alleviating the excessive time consumption of the template matching algorithm. A matching area most relevant to the first image is then found on the second image by template matching, and the first and second feature points are extracted by the speeded-up robust features algorithm. After the first and second feature points are successfully matched within the preset area range of each first feature point, the first image and the matching area are divided into a plurality of equal sub-regions, and the transformation relation is calculated using the successfully matched feature point pairs in each sub-region. Because the image is divided into blocks and one matched pair is selected per block, the feature point pairs are distributed uniformly, large registration errors in local regions are avoided, and the final registration effect is improved. In addition, by combining template matching with feature point matching, images of different resolutions can be registered automatically without manual intervention, improving both the efficiency and the effect of image registration.
Based on the same inventive concept, the present invention also proposes an electronic device capable of implementing the image registration method of any of the above embodiments. Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of the electronic device provided by the present invention; the electronic device includes a processor 51 and a memory 52.
The processor 51 is configured to execute program instructions stored in the memory 52 to implement the steps of any of the above-described image registration method embodiments. In one specific implementation scenario, the electronic device may include, but is not limited to, a microcomputer, a server, a mobile device such as a notebook computer, a tablet computer, etc., and is not limited herein.
In particular, the processor 51 is adapted to control itself and the memory 52 to implement the steps of any of the image registration method embodiments described above. The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal processing capabilities. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the processor 51 may be implemented jointly by integrated circuit chips.
By means of the scheme, accurate registration of the images can be achieved.
Based on the same inventive concept, the present invention also provides a computer-readable storage medium. Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present invention. The computer-readable storage medium 60 stores at least one piece of program data 61, and the program data 61 is used to implement any of the methods described above. In one embodiment, the computer-readable storage medium 60 includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the several embodiments provided in the present invention, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the invention, in essence or a part contributing to the prior art or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a storage medium.
The foregoing description is only of embodiments of the present invention, and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the present invention.