CN108961162A - A kind of unmanned plane forest zone Aerial Images joining method and system - Google Patents
- Publication number: CN108961162A (application CN201810643274.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- transformation matrix
- feature point
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images (under G06T3/40, Scaling of whole images or parts thereof; G06T3/00, Geometric image transformations in the plane of the image; G06T—Image data processing or generation, in general; G06—Computing or calculating; counting; G—Physics)
- G06T5/40 — Image enhancement or restoration using histogram techniques (under G06T5/00, Image enhancement or restoration)
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction (under G06T5/00, Image enhancement or restoration)
- G06T2207/20221 — Image fusion; Image merging (under G06T2207/20212, Image combination; G06T2207/20, Special algorithmic details; G06T2207/00, Indexing scheme for image analysis or image enhancement)
Abstract
Description
Technical Field
The present invention relates to the field of aerial image stitching and processing, and in particular to a method and system for stitching UAV aerial images of forest areas.
Background
As a low-altitude remote sensing technology that has emerged in recent years, UAV aerial photography offers strong timeliness, low cost, and high safety, and is now widely used in forestry-survey fields such as forest pest and disease monitoring, forest fire monitoring, and forest stand analysis.
However, because image acquisition by a UAV remote sensing platform is constrained by flight altitude, camera focal length, high spatial resolution, and other factors, a single UAV image covers only a limited area, and a single image alone cannot provide an overall view of the target study area. UAV aerial images therefore need to be stitched together, supplying high-resolution, wide-field forest-area images for the analysis and processing of remote sensing imagery in forestry survey research.
When traditional aerial image stitching algorithms process forest-area aerial images, they suffer from too few extracted feature points, complex matching schemes, and low registration accuracy. To address these problems, this patent proposes an IE-SIFT (Image-Enhanced Scale-Invariant Feature Transform) image stitching algorithm. The algorithm first applies global histogram equalization in the image preprocessing stage to raise image contrast, enhancing detail information and increasing the number of extractable feature points. It then detects and extracts extreme points with an optimized contrast-threshold screening method, avoiding the extraction of unnecessary feature points and improving detection efficiency, and performs coarse feature matching using the ratio of angles from the arccosine of vector dot products in place of the ratio of Euclidean distances, lowering computational complexity and improving matching efficiency. Next, after removing mismatched points with the RANSAC algorithm, it introduces the L-M (Levenberg-Marquardt) nonlinear iterative algorithm to refine the inter-image transformation matrix for fine matching, improving registration accuracy. Finally, a weighted-average fusion algorithm produces the stitched image.
Summary of the Invention
In view of this, the object of the present invention is to provide a method and system for stitching UAV forest-area aerial images, achieving fast and effective stitching of such images.
Based on the above object, the UAV forest-area aerial image stitching method and system provided by the present invention comprise the steps of:
capturing forest-area images over the target region with a visible-light acquisition device carried by a UAV;
preprocessing the aerial images with a global histogram equalization algorithm to obtain enhanced grayscale aerial images;
detecting and extracting feature points from the enhanced grayscale aerial images with an optimized contrast-threshold screening method, reducing the extraction of unnecessary feature points, and matching the feature points of the reference image and the image to be registered with an improved feature point matching method to obtain a feature point matching diagram;
removing mismatched points with the RANSAC algorithm and computing an initial value of the transformation matrix; refining the computed initial transformation matrix with the L-M nonlinear iterative algorithm, iteratively correcting the values of the matrix parameters to obtain the final accurate transformation matrix; and transforming the original image to be registered with the final transformation matrix to obtain the image to be fused;
fusing the original reference image and the image to be fused with a weighted-average fusion algorithm to obtain the final fused image.
In one embodiment, the overlap between captured images is 40%-60%.
In one embodiment, the global histogram equalization algorithm proceeds as follows: normalize the gray level r of the aerial image g_zh(x,y) to the interval [0,1], where r=0 is black and r=1 is white; the gray-level range of g_zh(x,y) is [0, L-1], the total number of pixels is n, and the number of pixels with gray level r_k is n_k. The transformation corresponding to global histogram equalization is given by formula (1):
s_k = T(r_k) = Σ_{j=0}^{k} P_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, …, L-1  (1)
P_r(r_j) is the probability distribution function of g_zh(x,y); s_k is the sum of the probability distribution function P_r(r_j) of image g_zh(x,y) as the gray level k runs from 0 to (L-1); L is the total number of gray levels of the aerial image; and T(r) denotes the transformation function. Applying this transformation to the gray levels of the aerial image yields the enhanced grayscale aerial image.
In one embodiment, the optimized contrast-threshold screening method is: first discard points in the image whose pixel values fall below a contrast threshold, where the threshold is chosen by experimentally determining the relationship between threshold size and the correct feature-point registration rate, and selecting the optimal contrast threshold accordingly.
In one embodiment, the improved feature point matching algorithm proceeds as follows: take the dot product of a feature point in the reference image with all feature points in the image to be stitched, apply the arccosine transform to the dot-product results and store them in a designated array, and then find the smallest angle θ1 and second-smallest angle θ2 in that array. If their ratio is below the specified threshold M=0.6, the feature point e_l corresponding to the smallest angle is deemed to match the feature point e_r in the reference image, and (e_l, e_r) is kept as a matched feature-point pair. The matching criterion is given by formula (2):
θ1 / θ2 < M  (2)
The extracted feature points can thus be initially matched, yielding the initial matching point pairs.
In one embodiment, the RANSAC algorithm removes mismatched points using formula (3):
x'_i = (m11·x_i + m12·y_i + m13) / (m31·x_i + m32·y_i + 1),  y'_i = (m21·x_i + m22·y_i + m23) / (m31·x_i + m32·y_i + 1)  (3)
to obtain the corresponding matched points. In formula (3), H is the transformation matrix, a 3×3 matrix containing 8 unknown parameters (m11 to m32); (x_i, y_i) and (x'_i, y'_i) are the corresponding matched points A_l and A_r of the reference image and the image to be stitched, respectively.
In one embodiment, refining the computed initial transformation matrix with the L-M nonlinear iterative algorithm, iteratively correcting the values of the matrix parameters to obtain the final accurate transformation matrix, comprises the steps of:
S601: set n=0 and μ=0.01; establish a threshold τ; set the maximum number of iterations to N; take the initial transformation matrix as m_0;
S602: compute the value of F(m_n);
S603: determine whether F(m_n) is greater than τ; if so, go to S604, otherwise end the procedure;
S604: determine whether n is greater than 0; if so, go to S605, otherwise go to S606;
S605: determine whether F(m_n) is less than F(m_{n-1}); if so, go to S606; otherwise set m_n = m_{n-1}, μ = 10×μ, and go to S607;
S606: set μ = μ/10, compute the value of J(m_n), and go to S607;
S607: determine whether n is greater than N; if so, end the procedure; otherwise update according to the formula
m_{n+1} = m_n − [J(m_n)^T J(m_n) + μI]^{-1} J(m_n)^T f(m_n),
where J(m) is the Jacobian matrix, set n = n+1, and go to S602, i.e. compute the value of F(m_n);
Substituting the value of F(m_n) into formulas (4) to (6), where (m11 to m32) are the components of the transformation matrix H and, in formula (6), F(m) is the sum of the distances between all corresponding matched feature points, the optimal solution for each component is computed to obtain the final transformation matrix H_b.
A UAV forest-area aerial image stitching system, comprising:
an image input module for acquiring forest-area aerial images over the target region;
an image enhancement module for preprocessing the aerial images with a global histogram equalization algorithm to obtain enhanced grayscale aerial images;
a feature point extraction module for performing scale-space extremum detection on the enhanced grayscale aerial images with the optimized contrast-threshold screening method, precisely locating feature points, determining the principal orientation of each feature point, and generating feature point descriptors; feature points are determined from the descriptors, yielding the reference image and the image to be registered with feature points extracted;
a feature point matching module for matching the feature points of the reference image and the image to be registered using the ratio of arccosines of unit-vector dot products, obtaining a feature point matching diagram;
a mismatch removal module for removing mismatched points from the feature point matching diagram with the RANSAC algorithm, obtaining a matching-result diagram with mismatches removed and computing the initial transformation matrix;
a transformation matrix refinement module for refining the initial transformation matrix with the L-M nonlinear iterative algorithm so that the sum of distances between all corresponding matched feature points in the mismatch-free matching result is minimized, obtaining the final transformation matrix; the original image to be registered is transformed with the final transformation matrix to obtain the image to be fused;
an image fusion module for fusing the original reference image and the image to be fused with a weighted-average fusion algorithm to obtain the final fused image.
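The weighted-average fusion performed by the image fusion module is commonly implemented as a gradual-in/gradual-out blend across the overlap. Below is a minimal grayscale sketch; the linear left-to-right weight ramp is an assumption for illustration, since this excerpt of the patent does not spell the weights out:

```python
import numpy as np

def weighted_average_fuse(ref, warped):
    """Blend two images already placed on a common canvas (0 marks
    uncovered pixels). Outside the overlap the covered image is copied
    through; inside it, ref's weight ramps linearly from 1 at the
    overlap's left edge to 0 at its right edge (gradual-in/gradual-out)."""
    cover_r, cover_w = ref > 0, warped > 0
    overlap = cover_r & cover_w
    out = np.where(cover_r, ref, warped).astype(float)
    cols = np.where(overlap.any(axis=0))[0]
    if cols.size:
        x0, x1 = cols.min(), cols.max()
        ramp = (x1 - np.arange(ref.shape[1])) / max(x1 - x0, 1)  # 1 at x0 -> 0 at x1
        W = np.broadcast_to(ramp, ref.shape)
        out[overlap] = W[overlap] * ref[overlap] + (1 - W[overlap]) * warped[overlap]
    return out
```

The smooth weight transition is what suppresses the visible seam that a hard copy-over would leave along the stitching boundary.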
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a UAV forest-area aerial image stitching method according to an embodiment of the present invention;
Fig. 2a is an original aerial image of an embodiment of the present invention;
Fig. 2b is a grayscale aerial image of an embodiment of the present invention;
Fig. 2c is an enhanced grayscale aerial image of an embodiment of the present invention;
Fig. 3a is a schematic diagram of a feature point extraction result of an embodiment of the present invention;
Fig. 3b is a schematic diagram of another feature point extraction result of an embodiment of the present invention;
Fig. 4 is a schematic diagram of a feature point matching result of an embodiment of the present invention;
Fig. 5 is a schematic diagram of a matching result after mismatched points are removed in an embodiment of the present invention;
Fig. 6 is a flowchart of refining the transformation matrix with the L-M algorithm in an embodiment of the present invention;
Fig. 7a is an original image to be registered in an embodiment of the present invention;
Fig. 7b is an image to be fused in an embodiment of the present invention;
Fig. 8 is the final fused image of an embodiment of the present invention;
Fig. 9 is a structural diagram of a UAV forest-area aerial image stitching system according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
As shown in Fig. 1, a flowchart of a UAV forest-area aerial image stitching method according to an embodiment of the present invention, the specific implementation steps include:
Step 101: image acquisition. The UAV captures forest-area aerial images while flying over the target region.
Step 102: histogram-equalization image enhancement. Because the aerial images of some areas have uniform color with little variation, the grayscale versions of these images have low contrast and blurred detail, making it difficult to extract feature points effectively with the SIFT algorithm. After the image processing terminal receives the aerial images, it therefore preprocesses them with a global histogram equalization algorithm, which raises contrast and enriches detail information, thereby increasing the number of extractable feature points.
The specific procedure is as follows: normalize the gray level r of the aerial image g_zh(x,y) to the interval [0,1], where r=0 is black and r=1 is white. The gray-level range of g_zh(x,y) is [0, L-1], the total number of pixels is n, and the number of pixels with gray level r_k is n_k. The transformation corresponding to global histogram equalization is given by formula (1):
s_k = T(r_k) = Σ_{j=0}^{k} P_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, …, L-1  (1)
In formula (1), P_r(r_j) is the probability distribution function of g_zh(x,y); s_k is the sum of the probability distribution function P_r(r_j) of image g_zh(x,y) as the gray level k runs from 0 to (L-1); L is the total number of gray levels of the aerial image; and T(r) denotes the transformation function. Applying this transformation to the gray levels of the aerial image yields the enhanced grayscale aerial image. The before-and-after comparison is shown in Figs. 2a-2c: the contrast of the enhanced grayscale aerial image is markedly higher and its detail information richer. Fig. 2a is the original aerial image, Fig. 2b the grayscale aerial image, and Fig. 2c the enhanced grayscale aerial image of this embodiment.
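The equalization mapping of formula (1) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the patent's own code, and assumes an 8-bit grayscale input:

```python
import numpy as np

def global_hist_equalize(gray, L=256):
    """Global histogram equalization per formula (1):
    s_k = T(r_k) = sum_{j<=k} n_j / n  (the cumulative gray-level
    distribution), with s_k then rescaled back to [0, L-1]."""
    hist = np.bincount(gray.ravel(), minlength=L)   # n_k for each gray level
    s = np.cumsum(hist) / gray.size                 # s_k in [0, 1]
    lut = np.round(s * (L - 1)).astype(np.uint8)    # lookup table for T(r)
    return lut[gray]                                # remap every pixel
```

A low-contrast forest image whose gray levels cluster in a narrow band is stretched across the full range, which is what increases the number of SIFT extrema found in the next step.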
Step 103: feature point extraction with the SIFT algorithm. When the UAV photographs the target region, interference from illumination changes and other environmental factors may rotate the input image sequence, and the viewpoint and scale of the aerial images may also change. To overcome such interference, this embodiment uses the SIFT algorithm for image registration. Registration is the core step of image stitching, and its accuracy directly determines the final stitching quality. Image registration computes the spatial transformation parameters between the reference image and the image to be registered, determines the spatial transformation model, and converts between the coordinate systems of the two images. By extracting feature points in scale space, the SIFT algorithm achieves good scale invariance and robustness to viewpoint changes.
Feature point extraction comprises the following.
Scale-space extremum detection. The scale space L(x,y,σ) of a two-dimensional image is defined as the convolution of a variable-scale Gaussian G(x,y,σ) with the original image I(x,y), as in formulas (2) and (3):
G(x,y,σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))  (2)
L(x,y,σ) = G(x,y,σ) * I(x,y)  (3)
In formula (3), * is the convolution operation, (x,y) is the pixel position, and σ is the scale-space factor: the larger σ, the blurrier the image; the smaller σ, the sharper. To detect relatively stable feature points in scale space efficiently, difference-of-Gaussian kernels at different scales are convolved with the image to build the difference-of-Gaussian scale space D(x,y,σ), as in formula (4):
D(x,y,σ) = (G(x,y,kσ) − G(x,y,σ)) * I(x,y) = L(x,y,kσ) − L(x,y,σ)  (4)
In formula (4), k is a constant multiplicative factor whose value depends on the number of images per layer S in scale space, i.e. k = 2^(1/S). The enhanced grayscale aerial image is scanned, and if a pixel's gray value reaches an extremum within its 26 neighbors (its surroundings in the same layer and in the layers above and below), the pixel's position and scale are recorded as a candidate feature point. This patent combines the contrast-threshold screening method with the characteristics of forest-area images to detect extreme points, reducing the extraction of invalid feature points and thereby improving the algorithm's efficiency in detecting and extracting feature points from this class of images. The threshold is selected on the principle of minimizing feature point extraction while preserving correct registration of the forest-area images; the relationship between threshold size and the correct feature-point registration rate is established experimentally, and the optimal contrast threshold is chosen accordingly. The rationale is that forest-area images contain many similar structures, and pixel values vary little between tree canopies, whereas contrast and pixel values change noticeably between forested and non-forested regions. Raising the contrast threshold therefore extracts the principal scene content, yielding feature points more likely to be useful for registration and suppressing the detection of points that contribute nothing to it.
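The 26-neighbor extremum test with the contrast threshold applied first can be sketched as follows. This is an illustrative NumPy version; the difference-of-Gaussian stack `dog` and the threshold value are assumed inputs, not values taken from the patent:

```python
import numpy as np

def dog_extrema(dog, contrast_thresh):
    """Candidate feature points in a DoG stack of shape (S, H, W).
    A pixel qualifies only if |D| clears the contrast threshold
    (the optimized screening step) AND it is the max or min of the
    3x3x3 cube of its 26 neighbours (8 in-layer, 9 above, 9 below)."""
    S, H, W = dog.shape
    candidates = []
    for s in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = dog[s, y, x]
                if abs(v) < contrast_thresh:
                    continue  # low-contrast point: rejected before the 26-way test
                cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]
                if v == cube.max() or v == cube.min():
                    candidates.append((s, y, x))
    return candidates
```

Raising `contrast_thresh` suppresses the many weak extrema inside visually uniform canopy, which is the behavior the patent exploits for forest-area imagery.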
Precise feature point localization. A three-dimensional quadratic function is fitted to determine the position and scale of each feature point precisely, while low-contrast feature points and unstable edge-response points are filtered out to improve matching stability and noise resistance.
Determining the principal orientation of feature points. To make the descriptor rotation-invariant, each feature point is assigned an orientation from the local features of the grayscale aerial image. Using the gradient and orientation distribution of pixels in the feature point's neighborhood, the gradient magnitude m(x,y) and orientation θ(x,y) are obtained as in formulas (5) and (6):
m(x,y) = √((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²)  (5)
θ(x,y) = arctan((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y)))  (6)
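The central-difference gradients of formulas (5) and (6) can be evaluated over a whole smoothed layer at once (illustrative sketch; `np.arctan2` is used so the orientation covers the full circle):

```python
import numpy as np

def grad_mag_ori(L_img):
    """Gradient magnitude and orientation per formulas (5) and (6),
    using central differences on the smoothed image L.
    Returns arrays valid on the interior (one-pixel border excluded)."""
    dx = L_img[1:-1, 2:] - L_img[1:-1, :-2]   # L(x+1,y) - L(x-1,y)
    dy = L_img[2:, 1:-1] - L_img[:-2, 1:-1]   # L(x,y+1) - L(x,y-1)
    m = np.sqrt(dx**2 + dy**2)                # formula (5)
    theta = np.arctan2(dy, dx)                # formula (6), full angular range
    return m, theta
```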
Feature point descriptor generation. To ensure rotational invariance of the feature vector, the coordinate axes are first rotated to the feature point's principal orientation; gradient magnitudes are Gaussian-weighted with a circular Gaussian window over a 16×16 window centered on the feature point; the window is divided into 4×4 subregions, each accumulating a gradient histogram over 8 orientations; the result is a 128-dimensional SIFT feature vector unaffected by scale, rotation, and illumination, i.e. the feature point descriptor.
Feature points are determined using the feature point descriptors.
The feature point extraction results are shown in Figs. 3a and 3b: Fig. 3a is a schematic diagram of one feature point extraction result of this embodiment, and Fig. 3b of another.
Step 104: feature point matching. To simplify the matching process and increase matching speed, this embodiment replaces the traditional ratio of Euclidean distances with the ratio of arccosines of unit-vector dot products. Each feature point in the reference image is dotted with all feature points in the image to be registered; the dot-product results are converted by arccosine and stored in a designated array; the smallest angle θ1 and second-smallest angle θ2 are found in that array; and if their ratio is below the specified threshold M, the feature point corresponding to the smallest angle is deemed to match the feature point in the reference image. The reference image is the feature-point extraction result of Fig. 3a and the image to be registered that of Fig. 3b. In one embodiment, M is 0.6. The criterion is given in formula (7):
θ1 / θ2 < M  (7)
The traditional method instead computes the Euclidean distance between two feature point descriptors, as in formula (8):
d = √(Σ_{i=1}^{128} (Des_p(i) − Des_q(i))²)  (8)
In formula (8), Des_p and Des_q are the feature descriptors of feature points p and q, and d is the Euclidean distance between them. For a descriptor e_l in the reference image, the nearest and second-nearest descriptors e_r and e_q are found in the image to be registered, and the ratio N of the Euclidean distances D(e_l, e_r) and D(e_l, e_q) is computed as in formula (9):
N = D(e_l, e_r) / D(e_l, e_q)  (9)
In formula (9), e_l = (e_l1, e_l2, …, e_l128), e_r = (e_r1, e_r2, …, e_r128), and e_q = (e_q1, e_q2, …, e_q128). A threshold M in the range 0.4-0.6 is set; if N < M, the pair (e_l, e_r) is kept as a matched pair, otherwise it is discarded. Although the traditional method can find suitable matches, computing ratios of Euclidean distances requires repeated squaring and square-root operations; the higher computational complexity lengthens matching time and reduces matching efficiency. The calculation adopted in this embodiment, matching by the ratio of arccosine angles of unit-vector dot products, needs only basic operations such as vector multiplication and the arccosine function, simplifying the formula and improving the matching efficiency of feature points.
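A minimal sketch of the angle-ratio test of formula (7), with descriptors stored as rows (illustrative; the descriptor length and contents in the test are made up for the example):

```python
import numpy as np

def angle_ratio_match(ref_desc, tgt_desc, M=0.6):
    """Coarse matching per formula (7): for each reference descriptor,
    take the arccosine of its dot products with all target descriptors
    (descriptors normalized to unit length first), find the smallest
    angle th1 and second-smallest th2, and accept the pair if th1/th2 < M."""
    a = ref_desc / np.linalg.norm(ref_desc, axis=1, keepdims=True)
    b = tgt_desc / np.linalg.norm(tgt_desc, axis=1, keepdims=True)
    angles = np.arccos(np.clip(a @ b.T, -1.0, 1.0))  # pairwise angles
    matches = []
    for i, row in enumerate(angles):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] / row[j2] < M:                    # formula (7)
            matches.append((i, j1))
    return matches
```

Only one dot product and one arccosine are needed per descriptor pair, which is the cost saving over the square-and-square-root chain of the Euclidean formulation in formulas (8)-(9).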
Fig. 4 is a schematic diagram of the feature point matching result of this embodiment.
步骤5,去除误匹配点,为了提高图像配准精度,本发明实施例使用RANSAC算法来进一步提纯特征点。所述步骤104虽然实现了参考图像和带配准图像特征点的粗匹配,但其中依然存在误匹配。因此,本发明实施例采用RANSAC算法进行匹配点提纯。具体处理过程如下:从所获取的匹配点中随机选取4对,线性计算得到变换矩阵H,计算各个匹配点经H变换后到对应匹配点的距离d,确定距离阈值p,把满足d<p的点作为内点,并根据内点重新估计变换矩阵,如此迭代N次后,可以剔除误匹配点后并得到变换矩阵H的初值Ha,所述变换矩阵H为一个含有8个参数的3×3矩阵,投影所述变换矩阵H如式(10)所示。Step 5, removing mismatching points. In order to improve image registration accuracy, the embodiment of the present invention uses RANSAC algorithm to further purify feature points. Although the step 104 achieves a rough matching of feature points of the reference image and the image with registration, there are still mismatches. Therefore, the embodiment of the present invention uses the RANSAC algorithm to purify matching points. The specific process is as follows: randomly select 4 pairs from the obtained matching points, linearly calculate the transformation matrix H, calculate the distance d from each matching point to the corresponding matching point after H transformation, determine the distance threshold p, and satisfy d<p as the interior point, and re-estimate the transformation matrix according to the interior point. After N times of iterations, the initial value H a of the transformation matrix H can be obtained after removing the mismatching points. The transformation matrix H is an 8-parameter 3×3 matrix, and project the transformation matrix H as shown in formula (10).
In formula (10), (x_i, y_i) and (x'_i, y'_i) are the corresponding matched points A_l and A_r of the reference image and the image to be stitched, respectively.
FIG. 5 is a schematic diagram of the matching result after mismatched points are removed, according to an embodiment of the present invention.
Step 6: refine the transformation matrix. This embodiment uses the Levenberg-Marquardt (L-M) nonlinear iterative algorithm to refine the transformation matrix H, iteratively correcting the value of each parameter so as to obtain the final, accurate transformation matrix.
The flow of refining the transformation matrix with the L-M nonlinear iterative algorithm is shown in FIG. 6, which includes the following steps:
S601: set n = 0 and μ = 0.01; choose a threshold τ; let the maximum number of iterations be N and the initial transformation matrix be m_0.
S602: compute the value of F(m_n).
S603: judge whether F(m_n) is greater than τ; if so, go to S604; otherwise end the process.
S604: judge whether n is greater than 0; if so, go to S605; otherwise go to S606.
S605: judge whether F(m_n) is smaller than F(m_(n-1)); if so, go to S606; otherwise set m_n = m_(n-1) and μ = 10×μ, and go to S607.
S606: set μ = μ/10, compute the value of J(m_n), and go to S607.
S607: judge whether n is greater than N; if so, end the process; otherwise update according to the formula m_(n+1) = m_n - [J(m_n)^T J(m_n) + μI]^(-1) J(m_n)^T f(m_n), where J(m) is the Jacobian matrix; then set n = n + 1 and return to S602 to compute the value of F(m_n).
After the above steps, substituting the value of F(m_n) into formulas (11)-(13) and computing yields the final transformation matrix H_b. In these formulas, (m_11 ~ m_32) are the components of the transformation matrix H, and in formula (13) F(m) is the sum of the distances between all corresponding matched feature points. The goal of the L-M algorithm is the optimal solution of the eight parameters m_11 ~ m_32 of the transformation matrix, that is, the values of the components m_11 ~ m_32 at which F(m), the sum of the distances between all corresponding matched feature points in the matching result with mismatches removed, attains its minimum. From this optimal solution of the components, the final transformation matrix H_b is obtained.
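The S601-S607 flow above can be sketched as a generic damped least-squares loop (a simplification; the names are ours, and the residual f and Jacobian J implied by formulas (11)-(13) are passed in as callables):

```python
import numpy as np

def lm_refine(f, jac, m0, tau=1e-10, max_iter=100):
    """Sketch of the S601-S607 Levenberg-Marquardt flow (names are
    illustrative). f(m) returns the residual vector, jac(m) its Jacobian;
    F(m) is the sum of squared residuals. Returns the refined parameters."""
    m, mu = np.asarray(m0, float), 0.01
    F = lambda v: float(np.sum(f(v) ** 2))
    F_prev = None
    for _ in range(max_iter):
        F_cur = F(m)
        if F_cur <= tau:                  # S603: converged, end the process
            break
        if F_prev is not None and F_cur >= F_prev:
            m, mu = m_prev, mu * 10.0     # S605: reject step, raise damping
        else:
            mu /= 10.0                    # S606: accept step, lower damping
        J = jac(m)
        m_prev, F_prev = m, F(m)
        # S607: damped Gauss-Newton update of the parameter vector
        m = m - np.linalg.solve(J.T @ J + mu * np.eye(len(m)), J.T @ f(m))
    return m
```

The damping factor μ interpolates between gradient descent (large μ) and Gauss-Newton (small μ), which is why it is multiplied by 10 on a rejected step and divided by 10 on an accepted one.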
The image to be fused is obtained by transforming the original image to be registered with the final transformation matrix H_b.
FIG. 7a shows the original image to be registered according to an embodiment of the present invention. After the transformation matrix is refined with the L-M nonlinear iterative algorithm and applied to the image of FIG. 7a, the image to be fused according to an embodiment of the present invention is obtained, shown in FIG. 7b.
Step 7: image fusion. This embodiment uses a weighted average fusion algorithm. The weighted fusion algorithm assigns the same weight to the pixel values of corresponding pixels in the original reference image and in the image to be fused, then takes their weighted average as the pixel value of the corresponding pixel in the fused image. FIG. 8 shows the final stitched image according to an embodiment of the present invention.
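A sketch of the equal-weight average fusion, under the simplifying assumption (ours, not the patent's) that zero-valued pixels mark areas not covered by an image:

```python
import numpy as np

def fuse_equal_weight(ref_img, warped_img):
    """Equal-weight average fusion sketch of Step 7 (names illustrative).
    Both inputs are float arrays of the same shape; pixels covered by only
    one image are taken from that image, overlapping pixels are averaged."""
    ref_mask = ref_img > 0
    war_mask = warped_img > 0
    overlap = ref_mask & war_mask
    fused = np.where(ref_mask, ref_img, warped_img)
    # In the overlap, both images receive the same weight, 0.5 each.
    fused[overlap] = 0.5 * ref_img[overlap] + 0.5 * warped_img[overlap]
    return fused
```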
FIG. 9 is a structural diagram of a UAV forest-area aerial image stitching system according to an embodiment of the present invention, including:
An image input module 801, used to acquire aerial images of the forest area above the target region.
An image enhancement module 802. Because aerial images of some areas have uniform color with little variation, their contrast is too low after conversion to grayscale and image details are blurred, making it difficult to extract feature points effectively with the SIFT algorithm. The image enhancement module 802 therefore preprocesses the image with a global histogram equalization algorithm, which improves contrast and enriches image detail, thereby increasing the number of extracted feature points.
The specific procedure is as follows: normalize the gray level r of the image g_zh(x, y) to the interval [0, 1], where r = 0 is black and r = 1 is white. The gray-level range of g_zh(x, y) is [0, L-1]; the total number of pixels is n, and the number of pixels with gray level r_k is n_k. The transformation corresponding to global histogram equalization is shown in formula (1).
In formula (1), P_r(r_j) is the probability distribution function of g_zh(x, y), and S_k is the sum of the probability distribution function P_r(r_j) of the image g_zh(x, y) as the gray level k runs from 0 to (L-1); L is the total number of gray levels and T(r) denotes the transformation function. Changing the grayscale of the aerial image with this transformation yields the enhancement comparison shown in FIGS. 2a-2c: the contrast of the enhanced grayscale aerial image is clearly improved and image detail is richer. FIG. 2a is the original aerial image, FIG. 2b the grayscale aerial image, and FIG. 2c the enhanced grayscale aerial image, each according to an embodiment of the present invention.
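Formula (1) amounts to mapping each gray level through the cumulative distribution of gray levels; a compact NumPy sketch (function name is ours):

```python
import numpy as np

def equalize_hist(gray, L=256):
    """Global histogram equalization sketch of formula (1) (names are
    illustrative): S_k = T(r_k) = sum over j<=k of n_j / n, scaled back
    to the integer range [0, L-1]."""
    hist = np.bincount(gray.ravel(), minlength=L)   # n_k for each gray level
    cdf = np.cumsum(hist) / gray.size               # S_k = cumulative P_r(r_j)
    return np.round(cdf[gray] * (L - 1)).astype(np.uint8)
```

The effect is to stretch heavily populated gray-level ranges apart, which is why the low-contrast forest imagery gains detail after this step.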
A feature point extraction module 803. When the UAV photographs the target area, illumination changes and other environmental interference may rotate the input image sequence, and the viewing angle and scale of the images may also change. To overcome such interference, this embodiment uses the SIFT algorithm for image registration. Image registration is the core step of image stitching, and its accuracy directly affects the final stitching result. Registration computes the spatial transformation parameters between the reference image and the image to be registered, determines the spatial transformation model, and converts between the coordinate systems of the two images. The feature point extraction module 803 extracts feature points in scale space with the SIFT algorithm, which provides good scale invariance and robustness to viewpoint changes.
The specific functions of the feature point extraction module 803 include:
Scale-space extremum detection. The scale space L(x, y, σ) of a two-dimensional image is defined as the convolution of a variable-scale Gaussian function G(x, y, σ) with the original image I(x, y), as shown in formulas (2) and (3).
L(x, y, σ) = G(x, y, σ) * I(x, y)    (3)
In formula (3), * is the convolution operation, (x, y) is the pixel position, and σ is the scale-space factor: the larger σ is, the blurrier the image; the smaller σ is, the sharper the image. To detect relatively stable feature points in scale space effectively, the difference-of-Gaussian scale space D(x, y, σ) is generated by convolving the image with difference-of-Gaussian kernels at different scales, as shown in formula (4).
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ)    (4)
In formula (4), k is a constant multiplicative factor whose value depends on the number S of images per layer of the scale space, i.e., k = 2^(1/S). The enhanced grayscale aerial image is examined, and if the gray value of a pixel reaches an extremum among its 26 neighbors (its surroundings and the scales above and below), the position and scale of this point are recorded as a candidate feature point. This patent combines a contrast-threshold screening method with the characteristics of forest-area images to detect extremum points, reducing the extraction of useless feature points and thereby improving the efficiency of feature point detection and extraction for this class of images. The principle of threshold selection is to guarantee correct registration of forest-area images while extracting as few feature points as possible. The relationship between the threshold value and the correct registration rate of feature points is obtained experimentally, and the optimal contrast threshold is selected accordingly. The main rationale of the method is that forest-area images contain many similar structures and the pixel values within tree scenery vary little, while the contrast and pixel values between trees and non-tree regions vary noticeably. Therefore, increasing the contrast threshold extracts the main scenery in the image content, yielding feature points more likely to be useful for image registration and reducing the detection of feature points that contribute nothing to it.
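The 26-neighbor extremum test with a contrast threshold can be sketched as follows (a brute-force illustration on a precomputed difference-of-Gaussian stack; names are ours):

```python
import numpy as np

def dog_extrema(dog, contrast_thresh=0.03):
    """Sketch of the 26-neighbour extremum test with a contrast threshold
    (names illustrative). `dog` is a stack of difference-of-Gaussian
    images of shape (scales, H, W); returns (scale, y, x) candidates."""
    keypoints = []
    s_max, h, w = dog.shape
    for s in range(1, s_max - 1):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = dog[s, y, x]
                # Raising this threshold discards weak, repetitive forest
                # texture before the neighbourhood comparison is even run.
                if abs(v) < contrast_thresh:
                    continue
                cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]
                # v is a candidate iff it is the max or min of the 3x3x3
                # cube, i.e. of all 26 neighbours in space and scale.
                if v >= cube.max() or v <= cube.min():
                    keypoints.append((s, y, x))
    return keypoints
```

A production implementation would vectorize this triple loop, but the scan order above mirrors the description directly.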
Precise feature point localization. The position and scale of feature points are determined precisely by fitting a three-dimensional quadratic function, while low-contrast feature points and unstable edge response points are filtered out to enhance matching stability and noise resistance.
Determining the principal orientation of feature points. To make the descriptor rotation invariant, each feature point is assigned an orientation from the local features of the grayscale aerial image. Using the gradient and orientation distribution of the pixels in the feature point's neighborhood, the gradient magnitude m(x, y) and orientation θ(x, y) are obtained, as shown in formulas (5) and (6).
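Formulas (5) and (6) are the usual central-difference gradient on the smoothed image; a small sketch (the [y, x] indexing convention and the use of arctan2, which keeps the quadrant, are our assumptions):

```python
import numpy as np

def grad_mag_ori(L_img, x, y):
    """Sketch of formulas (5) and (6): gradient magnitude and orientation
    from central differences on the Gaussian-smoothed image L (names are
    illustrative; L_img is indexed [y, x])."""
    dx = float(L_img[y, x + 1]) - float(L_img[y, x - 1])
    dy = float(L_img[y + 1, x]) - float(L_img[y - 1, x])
    m = np.hypot(dx, dy)            # formula (5): sqrt(dx^2 + dy^2)
    theta = np.arctan2(dy, dx)      # formula (6): orientation angle
    return m, theta
```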
Generating feature point descriptors. To guarantee rotation invariance of the feature vector, the coordinate axes are first rotated to the principal orientation of the feature point, and the gradient magnitudes are Gaussian-weighted with a circular Gaussian window. A 16×16 window centered on the feature point is divided into 4×4 subregions, and in each subregion a gradient histogram accumulates 8 orientations, finally forming a 128-dimensional SIFT feature vector that is unaffected by scale, rotation, and illumination.
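The 4×4×8 descriptor layout can be sketched as below (a simplified illustration that omits the Gaussian weighting and trilinear interpolation of full SIFT; names are ours):

```python
import numpy as np

def sift_descriptor(mag, ori):
    """Sketch of the 128-d descriptor layout (names illustrative). `mag`
    and `ori` are 16x16 arrays of gradient magnitudes and orientations
    (radians, already rotated to the keypoint's principal orientation).
    The patch is split into 4x4 cells; each cell votes magnitudes into an
    8-bin orientation histogram, giving 4*4*8 = 128 dimensions."""
    desc = np.zeros((4, 4, 8))
    # Quantize each orientation into one of 8 bins over [0, 2*pi).
    bins = np.floor((ori % (2 * np.pi)) / (2 * np.pi / 8)).astype(int) % 8
    for cy in range(4):
        for cx in range(4):
            cell_m = mag[cy*4:(cy+1)*4, cx*4:(cx+1)*4]
            cell_b = bins[cy*4:(cy+1)*4, cx*4:(cx+1)*4]
            for b in range(8):
                desc[cy, cx, b] = cell_m[cell_b == b].sum()
    desc = desc.ravel()
    # Normalize to unit length for illumination invariance.
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```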
Feature points are determined using these feature point descriptors.
FIGS. 3a and 3b show feature point extraction results: FIG. 3a for one embodiment of the present invention and FIG. 3b for another embodiment.
A feature point matching module 804, used to match feature points. To simplify the matching process and speed up feature point matching, the feature point matching module 804 uses the ratio of arccosines of unit-vector dot products in place of the traditional ratio of Euclidean distances. A feature point in the reference image is dot-multiplied with all feature points in the image to be registered; the arccosines of the dot products are stored in an array, from which the smallest angle θ1 and the second-smallest angle θ2 are found. If their ratio is less than a specified threshold M, the feature point corresponding to the smallest angle is considered to match the feature point in the reference image. The reference image is the feature extraction result of FIG. 3a, and the image to be registered is that of FIG. 3b. In one embodiment, M is 0.6, as shown in formula (7).
The traditional method, by contrast, computes the Euclidean distance between two feature point descriptors; the Euclidean distance formula is shown in formula (8).
In formula (8), Des_p and Des_q denote the feature descriptors of feature points p and q, respectively, and d is the Euclidean distance between them. For a feature descriptor e_l in the reference image, the nearest and second-nearest feature descriptors e_r and e_q are found in the image to be registered, and the ratio N of the Euclidean distances D(e_l, e_r) and D(e_l, e_q) is computed, as shown in formula (9).
In formula (9), e_l = (e_l1, e_l2, …, e_l128), e_r = (e_r1, e_r2, …, e_r128), and e_q = (e_q1, e_q2, …, e_q128). A threshold M is set in the range 0.4-0.6; if N < M, the feature point pair (e_l, e_r) is retained as a match, otherwise it is discarded. Although the traditional method can find suitable matching pairs, computing the ratio of Euclidean distances requires many squaring and square-root operations, so its computational complexity is high, which increases matching time and lowers matching efficiency. The calculation used by the feature point matching module 804, which matches features by the ratio of arccosine angles of unit-vector dot products, requires only basic operations such as vector multiplication and the arccosine function, simplifying the computation and improving feature point matching efficiency.
FIG. 4 is a schematic diagram of a feature point matching result according to an embodiment of the present invention.
A mismatched point removal module 805, used to remove mismatched feature points. To improve image registration accuracy, this embodiment uses the RANSAC algorithm to further purify the feature points. Although the feature point matching module 804 achieves coarse matching of feature points between successive images, mismatches still remain, so the mismatched point removal module 805 purifies the matching points with the RANSAC (Random Sample Consensus) algorithm. The specific procedure is as follows: randomly select 4 pairs from the obtained matching points and linearly solve for the transformation matrix H; compute the distance d from each matched point, after transformation by H, to its corresponding point; choose a distance threshold p and treat points satisfying d < p as inliers; then re-estimate the transformation matrix from the inliers. After N such iterations, an initial value H_a of the transformation matrix H is obtained. The transformation matrix H is a 3×3 projective matrix with 8 parameters, as shown in formula (10).
In formula (10), (x_i, y_i) and (x'_i, y'_i) are the corresponding matched points A_l and A_r of the reference image and the image to be stitched, respectively.
FIG. 5 is a schematic diagram of the matching result after mismatched points are removed, according to an embodiment of the present invention.
A transformation matrix refinement module 806, used to refine the transformation matrix H with the L-M nonlinear iterative algorithm.
The flow of refining the transformation matrix with the L-M nonlinear iterative algorithm is shown in FIG. 6, which includes the following steps:
S601: set n = 0 and μ = 0.01; choose a threshold τ; let the maximum number of iterations be N and the initial transformation matrix be m_0.
S602: compute the value of F(m_n).
S603: judge whether F(m_n) is greater than τ; if so, go to S604; otherwise end the process.
S604: judge whether n is greater than 0; if so, go to S605; otherwise go to S606.
S605: judge whether F(m_n) is smaller than F(m_(n-1)); if so, go to S606; otherwise set m_n = m_(n-1) and μ = 10×μ, and go to S607.
S606: set μ = μ/10, compute the value of J(m_n), and go to S607.
S607: judge whether n is greater than N; if so, end the process; otherwise update according to the formula m_(n+1) = m_n - [J(m_n)^T J(m_n) + μI]^(-1) J(m_n)^T f(m_n), where J(m) is the Jacobian matrix; then set n = n + 1 and return to S602 to compute the value of F(m_n).
After the above steps, substituting the value of F(m_n) into formulas (11)-(13) and computing yields the final transformation matrix H_b. In these formulas, (m_11 ~ m_32) are the components of the transformation matrix H, and in formula (13) F(m) is the sum of the distances between all corresponding matched feature points. The goal of the L-M algorithm is the optimal solution of the eight parameters m_11 ~ m_32 of the transformation matrix, that is, the values of the components m_11 ~ m_32 at which F(m), the sum of the distances between all corresponding matched feature points in the matching result with mismatches removed, attains its minimum. From this optimal solution of the components, the final transformation matrix H_b is obtained.
The image to be fused is obtained by transforming the image to be registered with the final transformation matrix H_b.
FIG. 7a shows the original image to be registered according to an embodiment of the present invention. After the transformation matrix is refined with the L-M nonlinear iterative algorithm and applied to the image of FIG. 7a, the image to be fused according to an embodiment of the present invention is obtained, shown in FIG. 7b.
An image fusion module 807, used to perform image fusion with a weighted average fusion algorithm. The weighted fusion algorithm assigns the same weight to the pixel values of corresponding pixels in the original reference image and in the image to be fused, then takes their weighted average as the pixel value of the corresponding pixel in the fused image. FIG. 8 shows the final fused image according to an embodiment of the present invention.
In summary, the present invention provides a UAV forest-area aerial image stitching method and system: the images are first preprocessed with a global histogram equalization algorithm, then feature points are extracted with the SIFT feature-based stitching algorithm; after mismatched feature points are removed, the transformation matrix is refined, and finally image fusion yields the final fused image. The method and system are fast and accurate, can quickly stitch aerial images with uniform color and low contrast, and facilitate the user's subsequent analysis and processing of remote sensing images.
Those of ordinary skill in the art should understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the present disclosure (including the claims) is limited to these examples. Within the spirit of the present invention, the technical features of the above embodiments or of different embodiments may be combined, the steps may be carried out in any order, and many other variations of the different aspects of the present invention as described above exist which, for brevity, are not provided in detail. Therefore, any omission, modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (7)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810200580 | 2018-03-12 | ||
| CN2018102005807 | 2018-03-12 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN108961162A true CN108961162A (en) | 2018-12-07 |
Family
ID=64491666
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810643274.0A Pending CN108961162A (en) | 2018-03-12 | 2018-06-21 | A kind of unmanned plane forest zone Aerial Images joining method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108961162A (en) |
Cited By (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109767387A (en) * | 2018-12-26 | 2019-05-17 | 北京木业邦科技有限公司 | A kind of forest image acquiring method and device based on unmanned plane |
| CN109829853A (en) * | 2019-01-18 | 2019-05-31 | 电子科技大学 | A method for stitching aerial images of unmanned aerial vehicles |
| CN110286091A (en) * | 2019-06-11 | 2019-09-27 | 华南农业大学 | A UAV-Based Near-Earth Remote Sensing Image Acquisition Method |
| CN110458845A (en) * | 2019-06-25 | 2019-11-15 | 上海圭目机器人有限公司 | Unmanned plane image difference analysis method based on image similarity |
| CN111696084A (en) * | 2020-05-20 | 2020-09-22 | 平安科技(深圳)有限公司 | Cell image segmentation method, cell image segmentation device, electronic equipment and readable storage medium |
| CN111861866A (en) * | 2020-06-30 | 2020-10-30 | 国网电力科学研究院武汉南瑞有限责任公司 | A panorama reconstruction method of substation equipment inspection image |
| CN111967337A (en) * | 2020-07-24 | 2020-11-20 | 电子科技大学 | Pipeline line change detection method based on deep learning and unmanned aerial vehicle images |
| CN112348105A (en) * | 2020-11-17 | 2021-02-09 | 贵州省环境工程评估中心 | Unmanned aerial vehicle image matching optimization method |
| CN112529021A (en) * | 2020-12-29 | 2021-03-19 | 辽宁工程技术大学 | Aerial image matching method based on scale invariant feature transformation algorithm features |
| CN112634186A (en) * | 2020-12-25 | 2021-04-09 | 江西裕丰智能农业科技有限公司 | Image analysis method of unmanned aerial vehicle |
| CN112991176A (en) * | 2021-03-19 | 2021-06-18 | 南京工程学院 | Panoramic image splicing method based on optimal suture line |
| CN113591949A (en) * | 2021-07-19 | 2021-11-02 | 浙江农林大学 | Standing tree feature point matching method, device, equipment and medium |
| CN113723465A (en) * | 2021-08-02 | 2021-11-30 | 哈尔滨工业大学 | Improved feature extraction method and image splicing method based on same |
| CN114170565A (en) * | 2021-11-09 | 2022-03-11 | 广州市鑫广飞信息科技有限公司 | A method, device and terminal equipment for image comparison based on UAV aerial photography |
| CN114240845A (en) * | 2021-11-23 | 2022-03-25 | 华南理工大学 | Surface roughness measuring method by adopting light cutting method applied to cutting workpiece |
| CN114697684A (en) * | 2022-04-12 | 2022-07-01 | 杭州当虹科技股份有限公司 | Method for realizing multi-VR machine position switching |
| CN114897705A (en) * | 2022-06-24 | 2022-08-12 | 徐州飞梦电子科技有限公司 | Unmanned aerial vehicle remote sensing image splicing method based on feature optimization |
| CN114913071A (en) * | 2022-05-16 | 2022-08-16 | 扬州大学 | Underwater image stitching method based on feature point matching integrating luminance region information |
| CN114973028A (en) * | 2022-05-17 | 2022-08-30 | 中国电子科技集团公司第十研究所 | Aerial video image real-time change detection method and system |
| CN115115682A (en) * | 2022-07-18 | 2022-09-27 | 合肥讯飞数码科技有限公司 | Image registration method and related equipment thereof |
| CN115240079A (en) * | 2022-07-05 | 2022-10-25 | 中国人民解放军战略支援部队信息工程大学 | A Multi-source Remote Sensing Image Depth Feature Fusion Matching Method |
| CN116012741A (en) * | 2023-03-17 | 2023-04-25 | 国网湖北省电力有限公司经济技术研究院 | Water and Soil Erosion Monitoring System for High-Voltage Transmission Lines |
| CN118015004A (en) * | 2024-04-10 | 2024-05-10 | 宝鸡康盛精工精密制造有限公司 | Laser cutting scanning system and method |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106940876A (en) * | 2017-02-21 | 2017-07-11 | 华东师范大学 | A kind of quick unmanned plane merging algorithm for images based on SURF |
- 2018-06-21 CN CN201810643274.0A patent/CN108961162A/en active Pending
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106940876A (en) * | 2017-02-21 | 2017-07-11 | 华东师范大学 | A kind of quick unmanned plane merging algorithm for images based on SURF |
Non-Patent Citations (3)
| Title |
|---|
| 于瑶瑶 (Yu Yaoyao), "Research on Key Technologies for Fast Stitching of UAV Images", China Master's Theses Full-text Database * |
| 徐阳 (Xu Yang), "Research on UAV Remote Sensing Image Stitching Technology", China Master's Theses Full-text Database * |
| 王茜 (Wang Qian), "UAV Remote Sensing Image Stitching Technology Based on the SIFT Algorithm", Journal of Jilin University * |
Cited By (33)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109767387A (en) * | 2018-12-26 | 2019-05-17 | 北京木业邦科技有限公司 | A kind of forest image acquiring method and device based on unmanned plane |
| CN109829853B (en) * | 2019-01-18 | 2022-12-23 | 电子科技大学 | Unmanned aerial vehicle aerial image splicing method |
| CN109829853A (en) * | 2019-01-18 | 2019-05-31 | 电子科技大学 | A method for stitching aerial images of unmanned aerial vehicles |
| CN110286091A (en) * | 2019-06-11 | 2019-09-27 | 华南农业大学 | A UAV-Based Near-Earth Remote Sensing Image Acquisition Method |
| CN110458845A (en) * | 2019-06-25 | 2019-11-15 | 上海圭目机器人有限公司 | Unmanned plane image difference analysis method based on image similarity |
| CN111696084A (en) * | 2020-05-20 | 2020-09-22 | 平安科技(深圳)有限公司 | Cell image segmentation method, cell image segmentation device, electronic equipment and readable storage medium |
| CN111696084B (en) * | 2020-05-20 | 2024-05-31 | 平安科技(深圳)有限公司 | Cell image segmentation method, device, electronic equipment and readable storage medium |
| CN111861866A (en) * | 2020-06-30 | 2020-10-30 | 国网电力科学研究院武汉南瑞有限责任公司 | A panorama reconstruction method of substation equipment inspection image |
| CN111967337A (en) * | 2020-07-24 | 2020-11-20 | 电子科技大学 | Pipeline line change detection method based on deep learning and unmanned aerial vehicle images |
| CN112348105A (en) * | 2020-11-17 | 2021-02-09 | Guizhou Environmental Engineering Assessment Center | Unmanned aerial vehicle image matching optimization method |
| CN112348105B (en) * | 2020-11-17 | 2023-09-01 | Guizhou Environmental Engineering Assessment Center | Unmanned aerial vehicle image matching optimization method |
| CN112634186A (en) * | 2020-12-25 | 2021-04-09 | Jiangxi Yufeng Intelligent Agriculture Technology Co., Ltd. | Image analysis method of unmanned aerial vehicle |
| CN112529021A (en) * | 2020-12-29 | 2021-03-19 | Liaoning Technical University | Aerial image matching method based on scale invariant feature transformation algorithm features |
| CN112529021B (en) * | 2020-12-29 | 2024-05-28 | Liaoning Technical University | Aerial image matching method based on scale invariant feature transformation algorithm features |
| CN112991176A (en) * | 2021-03-19 | 2021-06-18 | Nanjing Institute of Technology | Panoramic image splicing method based on optimal suture line |
| CN112991176B (en) * | 2021-03-19 | 2022-03-01 | Nanjing Institute of Technology | Panoramic image splicing method based on optimal suture line |
| CN113591949A (en) * | 2021-07-19 | 2021-11-02 | Zhejiang A&F University | Standing tree feature point matching method, device, equipment and medium |
| CN113723465A (en) * | 2021-08-02 | 2021-11-30 | Harbin Institute of Technology | Improved feature extraction method and image splicing method based on same |
| CN113723465B (en) * | 2021-08-02 | 2024-04-05 | Harbin Institute of Technology | An improved feature extraction method and image stitching method based on the method |
| CN114170565A (en) * | 2021-11-09 | 2022-03-11 | Guangzhou Xinguangfei Information Technology Co., Ltd. | A method, device and terminal equipment for image comparison based on UAV aerial photography |
| CN114240845A (en) * | 2021-11-23 | 2022-03-25 | South China University of Technology | Surface roughness measurement method using the light-section method, applied to machined workpieces |
| CN114240845B (en) * | 2021-11-23 | 2024-03-26 | South China University of Technology | Light-section-method surface roughness measurement method applied to machined workpieces |
| CN114697684A (en) * | 2022-04-12 | 2022-07-01 | Hangzhou Danghong Technology Co., Ltd. | Method for realizing multi-VR camera position switching |
| CN114913071A (en) * | 2022-05-16 | 2022-08-16 | Yangzhou University | Underwater image stitching method based on feature point matching fusing luminance region information |
| CN114973028A (en) * | 2022-05-17 | 2022-08-30 | The 10th Research Institute of China Electronics Technology Group Corporation | Aerial video image real-time change detection method and system |
| CN114897705A (en) * | 2022-06-24 | 2022-08-12 | Xuzhou Feimeng Electronic Technology Co., Ltd. | Unmanned aerial vehicle remote sensing image splicing method based on feature optimization |
| CN115240079A (en) * | 2022-07-05 | 2022-10-25 | PLA Strategic Support Force Information Engineering University | A Multi-Source Remote Sensing Image Depth Feature Fusion Matching Method |
| CN115240079B (en) * | 2022-07-05 | 2025-08-15 | PLA Cyberspace Force Information Engineering University | Multi-source remote sensing image depth feature fusion matching method |
| CN115115682A (en) * | 2022-07-18 | 2022-09-27 | Hefei iFlytek Digital Technology Co., Ltd. | Image registration method and related equipment thereof |
| CN116012741A (en) * | 2023-03-17 | 2023-04-25 | Economic and Technological Research Institute of State Grid Hubei Electric Power Co., Ltd. | Water and Soil Erosion Monitoring System for High-Voltage Transmission Lines |
| CN116012741B (en) * | 2023-03-17 | 2023-06-13 | Economic and Technological Research Institute of State Grid Hubei Electric Power Co., Ltd. | Water and soil loss monitoring system for high-voltage transmission line |
| CN118015004A (en) * | 2024-04-10 | 2024-05-10 | Baoji Kangsheng Precision Manufacturing Co., Ltd. | Laser cutting scanning system and method |
| CN118015004B (en) * | 2024-04-10 | 2024-07-05 | Baoji Kangsheng Precision Manufacturing Co., Ltd. | Laser cutting scanning system and method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108961162A (en) | | A kind of unmanned plane forest zone Aerial Images joining method and system |
| US7702131B2 (en) | | Segmenting images and simulating motion blur using an image sequence |
| CN113065558A (en) | | Lightweight small target detection method combined with attention mechanism |
| CN111222395A (en) | | Target detection method and device and electronic equipment |
| WO2021057294A1 (en) | | Method and apparatus for detecting subject, electronic device, and computer readable storage medium |
| CN111260539B (en) | | Fisheye image target recognition method and system |
| CN113902657A (en) | | Image splicing method and device and electronic equipment |
| CN109829853A (en) | | A method for stitching aerial images of unmanned aerial vehicles |
| CN111582022A (en) | | Fusion method, system and electronic device for mobile video and geographic scenes |
| CN107123090A (en) | | A system and method for automatically synthesizing farmland panoramas based on image mosaic technology |
| CN115035281B (en) | | Rapid infrared panoramic image stitching method |
| CN115861352B (en) | | Data fusion and edge extraction method for monocular vision, IMU and laser radar |
| CN112396556A (en) | | Data recording device for rapid and accurate low-altitude airborne LiDAR terrain mapping |
| CN117876608B (en) | | Three-dimensional image reconstruction method, three-dimensional image reconstruction device, computer equipment and storage medium |
| CN114897676A (en) | | A method, equipment and medium for stitching multispectral images of UAV remote sensing |
| CN108376409A (en) | | A light field image registration method and system |
| Shen et al. | | Image-matching enhancement using a polarized intensity-hue-saturation fusion method |
| CN107609562A (en) | | A scale-space feature detection method based on the SIFT algorithm |
| CN115456870A (en) | | Multi-image splicing method based on external parameter estimation |
| CN113272855A (en) | | Response normalization for overlapping multi-image applications |
| CN112686962A (en) | | Indoor visual positioning method and device and electronic equipment |
| CN115439326B (en) | | Image stitching optimization method and device in a vehicle surround-view system |
| CN116543014A (en) | | Panorama-integrated automatic teacher tracking method and system |
| Hwang et al. | | Real-time 2D orthomosaic mapping from drone-captured images using feature-based sequential image registration |
| Chand et al. | | Implementation of panoramic image stitching using Python |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |