[go: up one dir, main page]

CN107330934B - Low-dimensional cluster adjustment calculation method and system - Google Patents

Low-dimensional cluster adjustment calculation method and system Download PDF

Info

Publication number
CN107330934B
CN107330934B CN201710370360.4A CN201710370360A CN107330934B CN 107330934 B CN107330934 B CN 107330934B CN 201710370360 A CN201710370360 A CN 201710370360A CN 107330934 B CN107330934 B CN 107330934B
Authority
CN
China
Prior art keywords
view
jth
relative
views
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710370360.4A
Other languages
Chinese (zh)
Other versions
CN107330934A (en
Inventor
武元新
蔡奇
郁文贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN201710370360.4A priority Critical patent/CN107330934B/en
Priority to PCT/CN2017/087500 priority patent/WO2018214179A1/en
Publication of CN107330934A publication Critical patent/CN107330934A/en
Application granted granted Critical
Publication of CN107330934B publication Critical patent/CN107330934B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

本发明提供了一种低维度的集束调整计算方法与系统,包括:确定运动参数的初值;对运动参数的目标函数进行优化计算,得到优化后的运动参数;根据优化后的运动参数,计算三维场景点坐标。本发明将多幅视图的景深表示为两两视图的相对运动参数的函数,实现了从多幅视图直接恢复运动参数,再从运动参数中解析获得三维场景点坐标,从而将三维场景点坐标从集束调整的参数优化过程中剔除,大幅降低了参数空间的维度。本发明是一种初始化简便、鲁邦性好、计算速度更快、计算精度更高的低维度集束调整方法。本发明可用作无人车/无人机视觉导航、视觉三维重建、增强现实等应用的核心计算引擎。

Figure 201710370360

The invention provides a low-dimensional cluster adjustment calculation method and system, including: determining the initial value of the motion parameter; performing optimization calculation on the objective function of the motion parameter to obtain the optimized motion parameter; according to the optimized motion parameter, calculating 3D scene point coordinates. The invention expresses the depth of field of multiple views as a function of the relative motion parameters of the two views, realizes that the motion parameters are directly recovered from the multiple views, and then the three-dimensional scene point coordinates are obtained through analysis from the motion parameters, so that the three-dimensional scene point coordinates are converted from It is eliminated in the parameter optimization process of cluster adjustment, which greatly reduces the dimension of the parameter space. The invention is a low-dimensional bundle adjustment method with simple initialization, good Lupin performance, faster calculation speed and higher calculation accuracy. The invention can be used as a core computing engine for applications such as unmanned vehicle/unmanned aerial vehicle visual navigation, visual three-dimensional reconstruction, and augmented reality.

Figure 201710370360

Description

Low-dimensional cluster adjustment calculation method and system
Technical Field
The invention relates to the field of computer vision and photogrammetry, in particular to a low-dimensional cluster adjustment calculation method and system.
Background
Bundle Adjustment (Bundle Adjustment), i.e., recovering three-dimensional scene point coordinates, motion parameters and camera parameters from multiple views, is one of the core technologies in the fields of computer vision, photogrammetry and the like. The goal of the bundle adjustment technique is to minimize the reprojection error of the image points, which can be expressed as a non-linear function of the three-dimensional scene point coordinates, motion parameters, and camera parameters. For the case of m three-dimensional scene points and n views, the parameter space is 3 × m +6 × n dimensions. Since the number of three-dimensional scene points is usually large, the dimensions of the parameter space to be optimized are large. At present, the mainstream method of bundle adjustment is realized by adopting a nonlinear optimization algorithm considering the sparsity of a parameter Jacobian (Jacobian) matrix to improve the calculation speed, but the mainstream method still needs to be further improved due to more dimensions of a parameter space to meet the requirement of real-time calculation.
Disclosure of Invention
In view of the defects in the prior art, the present invention provides a method and a system for calculating a low-dimensional bundle adjustment. The invention expresses the depth of field of the multiple views as a function of relative motion parameters of every two views, realizes the direct recovery of the motion parameters from the multiple views, and then obtains the three-dimensional scene point coordinates from the analysis of the motion parameters.
The invention provides a low-dimensional bundle adjustment calculation method, which comprises the following steps:
step 1: determining an initial value of the motion parameter;
step 2: performing minimum calculation on the objective function of the motion parameters to obtain optimized motion parameters;
and step 3: and calculating the coordinates of the three-dimensional scene points according to the optimized motion parameters.
Preferably, the step 1 comprises the steps of:
step 1.1: for a dual-view image formed by the jth view and the jth view +1, j is 1, 2.. and n-1, image feature points corresponding to the common matching feature point set { j, j +1} on the dual-view image are solved by adopting a direct linear transformation algorithm (R) to solve the relative pose (R) of the jth view +1 relative to the jth viewj,j+1,tj,j+1);
Wherein:
n is the number of views participating in bundle adjustment;
Rj,j+1the relative posture of the j +1 th view relative to the j view is shown;
tj,j+1is a unit relative displacement vector of the j +1 th view relative to the jth view, i.e. | tj,j+1||=1;
Calculating the three-dimensional coordinates of the ith matching image point pair corresponding to the common matching feature point set { j, j +1} in the jth view coordinate system
Figure BDA0001302625070000021
And the three-dimensional coordinates of the ith matching image point pair corresponding to the common matching feature point set { j, j +1} in the j +1 view coordinate system
Figure BDA0001302625070000022
Figure BDA0001302625070000023
Figure BDA0001302625070000024
Wherein:
i=1,2,...,m(j,j+1)
m(j,j+1)representing the number of matched image point pairs in the dual view formed by the jth view and the jth +1 th view;
Figure BDA0001302625070000025
normalized image point coordinates of an ith matching image point pair corresponding to the public matching feature point set { j, j +1} on a jth view;
Figure BDA0001302625070000026
normalized image point coordinates of an ith matching image point pair corresponding to the public matching feature point set { j, j +1} on a j +1 th view;
Figure BDA0001302625070000027
representing the three-dimensional coordinates of the ith matching image point pair corresponding to the public matching feature point set { j, j +1} in the jth view coordinate system;
Figure BDA0001302625070000028
representing the three-dimensional coordinates of the ith matching image point pair corresponding to the public matching feature point set { j, j +1} in the j +1 view coordinate system;
step 1.2: fixed | | | T1,21, |; for the three views of j-1, j-1 and j +1, j is 2,3Calculating the scale of relative displacement | | | T by using a point set { j-1, j, j +1}, and calculatingj,j+1||/||Tj-1,j| | obtaining a relative displacement vector T with uniform dimensionj,j+1
Figure BDA0001302625070000029
Tj,j+1=||Tj,j+1||tj,j+1
Wherein:
T1,2is the relative displacement vector of the 2 nd view relative to the 1 st view;
Tj,j+1a relative displacement vector of the j +1 th view relative to the j view;
Tj-1,jis the relative displacement vector of the jth view relative to the jth-1 view;
m(j-1,j,j+1)representing the number of common matching image point pairs in three views consisting of the j-1 th view, the j +1 th view and the j +1 th view;
Figure BDA0001302625070000031
representing three-dimensional coordinates of an ith matching image point pair corresponding to the common matching feature point set { j-1, j } on the jth view and the jth view in a jth view coordinate system;
Figure BDA0001302625070000032
representing three-dimensional coordinates of an ith matching image point pair corresponding to a common matching feature point set { j, j +1} on jth and jth +1 views in a jth view coordinate system;
tj,j+1a unit relative displacement vector of the j +1 th view relative to the j view;
step 1.3: according to absolute pose (R) of jth viewj,Tj) And calculating to obtain the absolute pose (R) of the j +1 th viewj+1,Tj+1):
Rj+1=Rj,j+1Rj
Tj+1=Tj,j+1+Rj,j+1Tj
Wherein:
Rjrepresenting the absolute pose of the jth view;
Rj+1represents the absolute pose of the j +1 th view;
Rj,j+1the relative posture of the j +1 th view relative to the j view is shown;
Tjan absolute displacement vector representing the jth view;
Tj+1absolute displacement vector representing the j +1 th view;
Tj,j+1a relative displacement vector of the j +1 th view relative to the j view;
when referring to the first view:
(R1,t1)≡(I3,03×1)
wherein:
R1representing an absolute pose of the first view;
T1an absolute displacement vector representing the first view;
I3representing a 3-dimensional identity matrix;
03×1a zero matrix of 3 rows and 1 column is shown.
Preferably, in step 2, the objective function of the motion parameter is specifically as follows:
motion parameter θ ═ (R)j,Tj)j=1,2,...nThe minimization objective function δ (θ) of (d) is given as follows:
Figure BDA0001302625070000041
e3=[0 0 1]T
Figure BDA0001302625070000042
Figure BDA0001302625070000043
wherein:
theta represents the set of absolute pose parameters for all views;
δ (·) represents the minimization objective function;
m(j,k)representing the number of matched image point pairs in the double view formed by the jth view and the kth view;
Figure BDA0001302625070000044
normalized image point coordinates of an ith matched image point corresponding to the common matched feature point set { j, k } on the jth view and the kth view;
Figure BDA0001302625070000045
normalized image point coordinates of the ith matched image point corresponding to the common matched feature point set { j, k } on the jth view and the kth view on the jth view;
Rj,kthe relative posture of the kth view relative to the jth view is taken;
Tj,kis the relative displacement vector of the kth view relative to the jth view.
Preferably, the motion parameter θ given in step 2 is (R ═ R)j,Tj)j=1,2,...nThe premise of minimizing the objective function δ (θ) is: the same three-dimensional scene point is equidistant from the same view.
Preferably, the step 3 comprises the steps of:
the motion parameter theta (R) obtained by optimizationj,Tj)j=1,2,...nFor the double-view composed of the jth view and the kth view, the coordinates of the three-dimensional scene point are calculated in a weighted manner as follows:
Figure BDA0001302625070000046
Figure BDA0001302625070000047
Tj,k=Tk-Rj,kTj
Figure BDA0001302625070000051
wherein:
Xithree-dimensional coordinates representing the ith three-dimensional scene point, XiCorresponding to the s-th image feature point in the double view formed by the jth view and the kth view;
Figure BDA0001302625070000052
representing the ith three-dimensional scene point XiWhether an identification function is visible in the dual view formed by the jth and kth views, i.e. when X isiWhen visible in this double view the image is,
Figure BDA0001302625070000053
otherwise, then
Figure BDA0001302625070000054
RjRepresenting the absolute pose of the jth view;
Tj,ka relative displacement vector of the kth view relative to the jth view;
Figure BDA0001302625070000055
representing the coordinates of the normalized image point of the s-th matched image point corresponding to the common matched feature point set { j, k } on the j-th view;
Figure BDA0001302625070000056
representing the common set of matching feature points { j, k }The coordinates of the normalized image point of the s-th matched image point pair on the k-th view;
Rkrepresenting the absolute pose of the kth view;
Tjan absolute displacement vector representing the jth view;
Tkan absolute displacement vector representing the kth view;
Rj,krepresenting the relative pose of the kth view with respect to the jth view;
Tj,krepresenting the relative displacement vector of the kth view with respect to the jth view.
Preferably, the low-dimensional bundle adjustment calculation method considers the situation that the camera is calibrated, and assumes that matching image point pairs between the views have been determined.
According to the present invention, a low-dimensional bundle adjustment calculation system includes a computer readable storage medium storing a computer program, and the computer program, when executed by a processor, implements the steps of the low-dimensional bundle adjustment calculation method described above.
Compared with the prior art, the invention has the following beneficial effects:
the invention is a low-dimensional cluster adjustment method with simple and convenient initialization, good Lu bang property, higher calculation speed and higher calculation precision. The invention can be used as a core calculation engine for unmanned vehicle/unmanned aerial vehicle visual navigation, visual three-dimensional reconstruction, augmented reality and other applications.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a flowchart illustrating steps of a method for adjusting a low-dimensional bundle according to the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that it would be obvious to those skilled in the art that various changes and modifications can be made without departing from the spirit of the invention. All falling within the scope of the present invention.
The invention represents the depth of field as a function of the motion parameters, thereby eliminating the three-dimensional scene point coordinates from the parameter optimization process of cluster adjustment. For the case of m three-dimensional scene points and n views, the parameter space is 6 x n dimensions. Compared with the current mainstream method, the bundling adjustment method provided by the invention greatly reduces the dimensionality of the parameter space.
The present invention considers the situation where the camera is calibrated and assumes that matching pairs of image points between the views have been determined.
The following definitions are illustrative of the general forms of the formulae:
assuming n as the number of views for bundle adjustment, numbered sequentially view 1, view 2, … view n;
(Ri,Ti) Representing the absolute pose of the ith view;
Rirepresenting the absolute pose of the ith view;
Ti=||Ti||tian absolute displacement vector representing the ith view;
tithe unit absolute displacement vector representing the ith view, i.e. | ti||=1;
Theta represents the set of absolute pose parameters for all views;
Figure BDA0001302625070000061
representing the relative pose of the kth view with respect to the jth view;
Tj,k≡Tk-RjkTjrepresenting a relative displacement vector of the kth view relative to the jth view;
Tj,k=||Tj,k||tj,k,tj,kis a unit relative displacement vector of the kth view relative to the jth view, namely | | tj,k||=1;
(Rj,k,tj,k) Represents the k-th webRelative pose of view with respect to jth view;
{ j } represents all feature point sets on the jth view;
{ j, k } represents a common set of matching feature points on the jth and kth views, { j, k. } and so on, representing common sets of matching feature points on more than three views;
(j, k) represents a dual view composed of the jth view and the kth view;
m(j,k)representing the number of matched image point pairs in the double view formed by the jth view and the kth view;
Figure BDA0001302625070000071
the coordinates of the ith matching image point in the double views respectively consisting of the jth view and the kth view are normalized image point coordinates on the jth view and the kth view, namely the coordinates of the image point after calibration are obtained in the first two components, and the coordinate of the image point after calibration is obtained in the third component, namely 1.
The invention provides a low-dimensional bundling adjustment method, which comprises the following steps:
step 1: determining an initial value of the motion parameter;
step 2: performing minimum calculation on the objective function of the motion parameters to obtain optimized motion parameters;
and step 3: and calculating the coordinates of the three-dimensional scene points according to the optimized motion parameters.
The respective steps will be described in detail below.
The step 1 comprises the following steps:
step 1.1: for a double-view image formed by the jth view and the jth +1 th view, j is 1, 2.. and n-1, image feature points corresponding to a common matching feature point set { j, j +1} on the double-view image are solved by adopting a Direct Linear Transformation (DLT) algorithm to solve the relative pose (R) of the jth +1 th view relative to the jth viewj,j+1,tj,j+1);
Wherein:
n is the number of views participating in bundle adjustment;
Rj,j+1the relative posture of the j +1 th view relative to the j view is shown;
tj,j+1is a unit relative displacement vector of the j +1 th view relative to the jth view, i.e. | tj,j+1||=1;
Calculating the three-dimensional coordinates of the ith matching image point pair corresponding to the common matching feature point set { j, j +1} in the jth view coordinate system
Figure BDA0001302625070000072
And the three-dimensional coordinates of the ith matching image point pair corresponding to the common matching feature point set { j, j +1} in the j +1 view coordinate system
Figure BDA0001302625070000073
Figure BDA0001302625070000074
Figure BDA0001302625070000075
Wherein:
i=1,2,...,m(j,j+1)
m(j,j+1)representing the number of matched image point pairs in the dual view formed by the jth view and the jth +1 th view;
Figure BDA0001302625070000081
normalized image point coordinates of an ith matching image point pair corresponding to the public matching feature point set { j, j +1} on a jth view;
Figure BDA0001302625070000082
normalized image point coordinates of an ith matching image point pair corresponding to the public matching feature point set { j, j +1} on a j +1 th view;
Figure BDA0001302625070000083
representing the three-dimensional coordinates of the ith matching image point pair corresponding to the public matching feature point set { j, j +1} in the jth view coordinate system;
Figure BDA0001302625070000084
representing the three-dimensional coordinates of the ith matching image point pair corresponding to the public matching feature point set { j, j +1} in the j +1 view coordinate system;
step 1.2: without loss of generality, | T is fixed1,21, |; for a three-view composed of a j-1 th view, a j-1 th view and a j +1 th view, j is 2,3j,j+1||/||Tj-1,j| | obtaining a relative displacement vector T with uniform dimensionj,j+1
Figure BDA0001302625070000085
Tj,j+1=||Tj,j+1||tj,j+1
Wherein:
T1,2is the relative displacement vector of the 2 nd view relative to the 1 st view;
Tj,j+1a relative displacement vector of the j +1 th view relative to the j view;
Tj-1,jis the relative displacement vector of the jth view relative to the jth-1 view;
m(j-1,j,j+1)representing the number of common matching image point pairs in three views consisting of the j-1 th view, the j +1 th view and the j +1 th view;
Figure BDA0001302625070000086
representing the three-dimensional coordinates of the ith matching image point pair corresponding to the common matching feature point set { j-1, j } on the jth view and the jth view in the jth view coordinate system;
Figure BDA0001302625070000087
Representing three-dimensional coordinates of an ith matching image point pair corresponding to a common matching feature point set { j, j +1} on jth and jth +1 views in a jth view coordinate system;
tj,j+1a unit relative displacement vector of the j +1 th view relative to the j view;
step 1.3: according to absolute pose (R) of jth viewj,Tj) And calculating to obtain the absolute pose (R) of the j +1 th viewj+1,Tj+1):
Rj+1=Rj,j+1Rj
Tj+1=Tj,j+1+Rj,j+1Tj
Wherein:
Rjrepresenting the absolute pose of the jth view;
Rj+1represents the absolute pose of the j +1 th view;
Rj,j+1the relative posture of the j +1 th view relative to the j view is shown;
Tjan absolute displacement vector representing the jth view;
Tj+1absolute displacement vector representing the j +1 th view;
Tj,j+1a relative displacement vector of the j +1 th view relative to the j view;
when referring to the first view:
(R1,t1)≡(I3,03×1)
wherein:
R1representing an absolute pose of the first view;
T1an absolute displacement vector representing the first view;
I3representing a 3-dimensional identity matrix;
03×1a zero matrix representing 3 rows and 1 columns;
it should be noted that:
in step 1.1, j has the value j 1, 2.., n-1;
in step 1.2, j has the value j 2, 3.
In step 1.3, j has the value j 1, 2.
In step 2, the objective function of the motion parameter is specifically as follows:
on the premise that the distances from the same three-dimensional scene point to the same view are equal, the motion parameter θ is equal to (R)j,Tj)j=1,2,...nThe minimization objective function δ (θ) of (d) is given as follows:
Figure BDA0001302625070000091
e3=[0 0 1]T
Figure BDA0001302625070000101
Figure BDA0001302625070000102
wherein:
theta represents the set of absolute pose parameters for all views;
δ (·) represents the minimization objective function;
m(j,k)representing the number of matched image point pairs in the double view formed by the jth view and the kth view;
Figure BDA0001302625070000103
normalized image point coordinates of an ith matched image point corresponding to the common matched feature point set { j, k } on the jth view and the kth view;
Figure BDA0001302625070000104
normalized image point coordinates of the ith matched image point corresponding to the common matched feature point set { j, k } on the jth view and the kth view on the jth view;
Rj,kthe relative posture of the kth view relative to the jth view is taken;
Tj,ka relative displacement vector of the kth view relative to the jth view;
since the initial value of the motion parameter obtained in step 1 is already optimized in step 2, and the optimized value of the motion parameter is obtained, step 3 is performed according to the optimized value of the motion parameter. Specifically, the step 3 includes the following steps:
the motion parameter theta (R) obtained by optimizationj,Tj)j=1,2,...nFor the double-view composed of the jth view and the kth view, the coordinates of the three-dimensional scene point are calculated in a weighted manner as follows:
Figure BDA0001302625070000105
Figure BDA0001302625070000106
Tj,k=Tk-Rj,kTj
Figure BDA0001302625070000107
wherein:
Xithree-dimensional coordinates representing the ith three-dimensional scene point, XiCorresponding to the s-th image feature point in the double view formed by the jth view and the kth view;
Figure BDA0001302625070000111
representing the ith three-dimensional scene point XiWhether or not a mark is visible in a dual view formed by a jth view and a kth viewIdentification function, i.e. when XiWhen visible in this double view the image is,
Figure BDA0001302625070000112
otherwise, then
Figure BDA0001302625070000113
RjRepresenting the absolute pose of the jth view;
Tj,ka relative displacement vector of the kth view relative to the jth view;
Figure BDA0001302625070000114
representing the coordinates of the normalized image point of the s-th matched image point corresponding to the common matched feature point set { j, k } on the j-th view;
Figure BDA0001302625070000115
representing the coordinates of the normalized image points of the s-th matched image point pair corresponding to the common matched feature point set { j, k } on the k-th view;
Rkrepresenting the absolute pose of the kth view;
Tjan absolute displacement vector representing the jth view;
Tkan absolute displacement vector representing the kth view;
Rj,krepresenting the relative pose of the kth view with respect to the jth view;
Tj,krepresenting the relative displacement vector of the kth view with respect to the jth view.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (6)

1.一种低维度的集束调整计算方法,其特征在于,包括如下步骤:1. A low-dimensional bundle adjustment calculation method, characterized in that, comprising the steps: 步骤1:确定运动参数的初值;Step 1: Determine the initial value of the motion parameters; 步骤2:对运动参数的目标函数进行最小化计算,得到优化后的运动参数;Step 2: Minimize the objective function of the motion parameters to obtain the optimized motion parameters; 步骤3:根据优化后的运动参数,计算三维场景点坐标;Step 3: Calculate the coordinates of the three-dimensional scene point according to the optimized motion parameters; 所述步骤1包括如下步骤:The step 1 includes the following steps: 步骤1.1:对于第j幅和第j+1幅视图构成的双视图,j=1,2,...,n-1,对该双视图上的公共匹配特征点集{j,j+1}所对应的图像特征点,采用直接线性变换算法,求解第j+1幅视图相对于第j幅视图的相对位姿(Rj,j+1,tj,j+1);Step 1.1: For the dual view composed of the jth and j+1th views, j=1, 2, ..., n-1, the common matching feature point set on the dual views {j, j+1 } For the corresponding image feature points, a direct linear transformation algorithm is used to solve the relative pose (R j,j+1 ,t j,j+1 ) of the j+1th view relative to the jth view; 其中:in: n为参与集束调整的视图数目;n is the number of views participating in bundle adjustment; Rj,j+1为第j+1幅视图相对于第j幅视图的相对姿态;R j,j+1 is the relative posture of the j+1th view relative to the jth view; tj,j+1为第j+1幅视图相对于第j幅视图的单位相对位移向量,即||tj,j+1||=1;t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view, ie ||t j,j+1 ||=1; 计算公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标
Figure FDA0003314786780000011
以及公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图坐标系下的三维坐标
Figure FDA0003314786780000012
Calculate the three-dimensional coordinates of the i-th matching image point pair corresponding to the public matching feature point set {j,j+1} in the j-th view coordinate system
Figure FDA0003314786780000011
and the three-dimensional coordinates of the i-th matching image point pair corresponding to the public matching feature point set {j,j+1} in the j+1-th view coordinate system
Figure FDA0003314786780000012
Figure FDA0003314786780000013
Figure FDA0003314786780000013
Figure FDA0003314786780000014
Figure FDA0003314786780000014
其中:in: i=1,2,...,m(j,j+1)i=1,2,...,m (j,j+1) ; m(j,j+1)表示第j幅和第j+1幅视图组成的双视图中的匹配图像点对数目;m (j, j+1) represents the number of matched image point pairs in the dual view composed of the jth and j+1th views;
Figure FDA0003314786780000015
为公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图上的归一化图像点坐标;
Figure FDA0003314786780000015
is the normalized image point coordinates of the ith matching image point pair corresponding to the common matching feature point set {j,j+1} on the jth view;
Figure FDA0003314786780000016
为公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图上的归一化图像点坐标;
Figure FDA0003314786780000016
is the normalized image point coordinates of the i-th matching image point pair corresponding to the public matching feature point set {j, j+1} on the j+1-th view;
Figure FDA0003314786780000021
表示公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
Figure FDA0003314786780000021
Indicates the three-dimensional coordinates of the i-th matching image point pair corresponding to the public matching feature point set {j,j+1} in the j-th view coordinate system;
Figure FDA0003314786780000022
表示公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图坐标系下的三维坐标;
Figure FDA0003314786780000022
Indicates the three-dimensional coordinates of the i-th matching image point pair corresponding to the public matching feature point set {j,j+1} in the j+1-th view coordinate system;
步骤1.2:固定||T1,2||=1;对于第j-1幅、第j幅以及第j+1幅视图构成的三视图,j=2,3,...,n-1,根据该三视图上的公共匹配特征点集{j-1,j,j+1},计算相对位移的尺度||Tj,j+1||/||Tj-1,j||,得到尺度统一的相对位移向量Tj,j+1Step 1.2: Fix ||T 1,2 ||=1; for the three views composed of the j-1th, jth and j+1th views, j=2,3,...,n-1 , according to the common matching feature point set {j-1,j,j+1} on the three views, calculate the scale of relative displacement ||T j,j+1 ||/||T j-1,j || , get the relative displacement vector T j,j+1 with uniform scale:
Figure FDA0003314786780000023
Figure FDA0003314786780000023
Tj,j+1=||Tj,j+1||tj,j+1T j,j+1 =||T j,j+1 ||t j,j+1 ; 其中:in: T1,2为第2幅视图相对于第1幅视图的相对位移向量;T 1,2 is the relative displacement vector of the second view with respect to the first view; Tj,j+1为第j+1幅视图相对于第j幅视图的相对位移向量;T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view; Tj-1,j为第j幅视图相对于第j-1幅视图的相对位移向量;T j-1,j is the relative displacement vector of the jth view relative to the j-1th view; m(j-1,j,j+1)表示第j-1幅、第j幅以及第j+1视图幅构成的三视图中的公共匹配图像点对数目;m (j-1,j,j+1) represents the number of common matching image point pairs in the three views formed by the j-1th, jth and j+1th views;
Figure FDA0003314786780000024
表示第j-1幅和第j幅视图上的公共匹配特征点集{j-1,j}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
Figure FDA0003314786780000024
Indicates the three-dimensional coordinates of the ith matching image point pair corresponding to the common matching feature point set {j-1,j} on the j-1th and jth views in the jth view coordinate system;
Figure FDA0003314786780000025
表示第j幅和第j+1幅视图上的公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
Figure FDA0003314786780000025
represents the three-dimensional coordinates of the i-th matching image point pair corresponding to the public matching feature point set {j,j+1} on the j-th and j+1-th views in the j-th view coordinate system;
tj,j+1为第j+1幅视图相对于第j幅视图的单位相对位移向量;t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view; 步骤1.3:根据第j幅视图的绝对位姿(Rj,Tj),计算得到第j+1幅视图的绝对位姿(Rj+1,Tj+1):Step 1.3: According to the absolute pose (R j , T j ) of the j-th view, calculate the absolute pose (R j+1 , T j+1 ) of the j+1-th view: Rj+1=Rj,j+1Rj R j+1 =R j,j+1 R j Tj+1=Tj,j+1+Rj,j+1Tj T j+1 =T j,j+1 +R j,j+1 T j 其中:in: Rj表示第j幅视图的绝对姿态;R j represents the absolute pose of the jth view; Rj+1表示第j+1幅视图的绝对姿态;R j+1 represents the absolute pose of the j+1th view; Rj,j+1为第j+1幅视图相对于第j幅视图的相对姿态;R j,j+1 is the relative posture of the j+1th view relative to the jth view; Tj表示第j幅视图的绝对位移向量;T j represents the absolute displacement vector of the jth view; Tj+1表示第j+1幅视图的绝对位移向量;T j+1 represents the absolute displacement vector of the j+1th view; Tj,j+1为第j+1幅视图相对于第j幅视图的相对位移向量;T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view; 当以第一幅视图为参考时:When referring to the first view: (R1,t1)≡(I3,03×1)(R 1 ,t 1 )≡(I 3 ,0 3×1 ) 其中:in: R1表示第一幅视图的绝对姿态;R 1 represents the absolute pose of the first view; T1表示第一幅视图的绝对位移向量;T 1 represents the absolute displacement vector of the first view; I3表示3维的单位矩阵;I 3 represents a 3-dimensional identity matrix; 03×1表示3行1列的零矩阵。0 3×1 represents a zero matrix with 3 rows and 1 column.
2.根据权利要求1所述的低维度的集束调整计算方法,其特征在于,在所述步骤2中,所述运动参数的目标函数具体如下:2. The low-dimensional bundle adjustment calculation method according to claim 1, wherein, in the step 2, the objective function of the motion parameter is specifically as follows: 运动参数θ=(Rj,Tj)j=1,2,...n的最小化目标函数δ(θ)如下给出:The minimization objective function δ(θ) of the motion parameters θ=(R j ,T j ) j=1,2,...n is given as follows:
Figure FDA0003314786780000031
Figure FDA0003314786780000031
e3=[0 0 1]T e 3 =[0 0 1] T
Figure FDA0003314786780000032
Figure FDA0003314786780000032
Figure FDA0003314786780000033
Figure FDA0003314786780000033
其中:in: θ表示所有视图的绝对位姿参数集合;θ represents the absolute pose parameter set of all views; δ(·)表示最小化目标函数;δ( ) means to minimize the objective function; m(j,k)表示第j幅和第k幅视图组成的双视图中的匹配图像点对数目;m (j, k) represents the number of matched image point pairs in the double view composed of the jth and kth views;
Figure FDA0003314786780000034
为第j幅和第k幅视图上的公共匹配特征点集{j,k}所对应的第i个匹配图像点对在第k幅视图上的归一化图像点坐标;
Figure FDA0003314786780000034
is the normalized image point coordinates on the kth view of the ith matched image point pair corresponding to the common matching feature point set {j,k} on the jth and kth views;
Figure FDA0003314786780000035
为第j幅和第k幅视图上的公共匹配特征点集{j,k}所对应的第i个匹配图像点对在第j幅视图上的归一化图像点坐标;
Figure FDA0003314786780000035
is the normalized image point coordinates on the jth view of the ith matched image point pair corresponding to the common matching feature point set {j,k} on the jth and kth views;
Rj,k为第k幅视图相对于第j幅视图的相对姿态;R j,k is the relative posture of the k-th view relative to the j-th view; Tj,k为第k幅视图相对于第j幅视图的相对位移向量。T j,k is the relative displacement vector of the k-th view relative to the j-th view.
3.根据权利要求2所述的低维度的集束调整计算方法,其特征在于,所述步骤2中给出的运动参数θ=(Rj,Tj)j=1,2,...n的最小化目标函数δ(θ)的前提是:相同三维场景点到相同视图的距离相等。3. The low-dimensional bundle adjustment calculation method according to claim 2, characterized in that, the motion parameter θ=(R j ,T j ) j=1,2,...n given in the step 2 The premise of minimizing the objective function δ(θ) is that the distances from the same 3D scene point to the same view are equal. 4.根据权利要求1所述的低维度的集束调整计算方法,其特征在于,所述步骤3包括如下步骤:4. The low-dimensional bundle adjustment calculation method according to claim 1, wherein the step 3 comprises the following steps: 根据优化得到的运动参数θ=(Rj,Tj)j=1,2,...n,对于第j幅和第k幅视图构成的双视图,加权计算三维场景点的坐标如下:According to the motion parameters θ=(R j ,T j ) j=1,2,...n obtained by optimization, for the double view composed of the jth and kth views, the coordinates of the three-dimensional scene points are weighted and calculated as follows:
Figure FDA0003314786780000041
Figure FDA0003314786780000041
Figure FDA0003314786780000042
Figure FDA0003314786780000042
Tj,k=Tk-Rj,kTj T j,k =T k -R j,k T j
Figure FDA0003314786780000043
Figure FDA0003314786780000043
其中:in: Xi表示第i个三维场景点的三维坐标,该三维场景点Xi对应第j幅和第k幅视图构成的双视图中的第s个图像特征点;X i represents the three-dimensional coordinates of the i-th three-dimensional scene point, and the three-dimensional scene point X i corresponds to the s-th image feature point in the dual view formed by the j-th and k-th views;
Figure FDA0003314786780000044
表示第i个三维场景点Xi在第j幅和第k幅视图构成的双视图中是否可见的标识函数,即当Xi在该双视图中可见时,
Figure FDA0003314786780000045
否则,则
Figure FDA0003314786780000046
Figure FDA0003314786780000044
The identification function indicating whether the i-th 3D scene point X i is visible in the dual view formed by the j-th and k-th views, that is, when X i is visible in the dual view,
Figure FDA0003314786780000045
Otherwise, then
Figure FDA0003314786780000046
Rj表示第j幅视图的绝对姿态;R j represents the absolute pose of the jth view; Tj,k为第k幅视图相对于第j幅视图的相对位移向量;T j,k is the relative displacement vector of the k-th view relative to the j-th view;
Figure FDA0003314786780000047
表示公共匹配特征点集{j,k}所对应的第s个匹配图像点对在第j幅视图上的归一化图像点坐标;
Figure FDA0003314786780000047
Represents the normalized image point coordinates of the sth matching image point pair corresponding to the common matching feature point set {j,k} on the jth view;
Figure FDA0003314786780000048
表示公共匹配特征点集{j,k}所对应的第s个匹配图像点对在第k幅视图上的归一化图像点坐标;
Figure FDA0003314786780000048
Represents the normalized image point coordinates of the sth matching image point pair corresponding to the common matching feature point set {j, k} on the kth view;
Rk表示第k幅视图的绝对姿态;R k represents the absolute pose of the k-th view; Tj表示第j幅视图的绝对位移向量;T j represents the absolute displacement vector of the jth view; Tk表示第k幅视图的绝对位移向量;T k represents the absolute displacement vector of the k-th view; Rj,k表示第k幅视图相对于第j幅视图的相对姿态;R j,k represents the relative posture of the k-th view relative to the j-th view; Tj,k表示第k幅视图相对于第j幅视图的相对位移向量。T j,k represents the relative displacement vector of the k-th view with respect to the j-th view.
5.根据权利要求1所述的低维度的集束调整计算方法,其特征在于,所述低维度的集束调整计算方法,考虑相机已标定的情形,并假设已经确定了各视图间的匹配图像点对。5 . The low-dimensional cluster adjustment calculation method according to claim 1 , wherein the low-dimensional cluster adjustment calculation method considers the situation that the camera has been calibrated, and assumes that the matching image points between the views have been determined. 6 . right. 6.一种低维度的集束调整计算系统,包括存储有计算机程序的计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时实现权利要求1至5中任一项所述的低维度的集束调整计算方法的步骤。6. A low-dimensional cluster adjustment computing system, comprising a computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the computer program described in any one of claims 1 to 5 is implemented. The steps of the low-dimensional bundle adjustment computation method.
CN201710370360.4A 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system Active CN107330934B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710370360.4A CN107330934B (en) 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system
PCT/CN2017/087500 WO2018214179A1 (en) 2017-05-23 2017-06-07 Low-dimensional bundle adjustment calculation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710370360.4A CN107330934B (en) 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system

Publications (2)

Publication Number Publication Date
CN107330934A CN107330934A (en) 2017-11-07
CN107330934B true CN107330934B (en) 2021-12-07

Family

ID=60192859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710370360.4A Active CN107330934B (en) 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system

Country Status (2)

Country Link
CN (1) CN107330934B (en)
WO (1) WO2018214179A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584299B (en) * 2018-11-13 2021-01-05 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, terminal and storage medium
CN109799698B (en) * 2019-01-30 2020-07-14 上海交通大学 Optimal PI parameter optimization method and system for time-lag visual servo system
CN111161355B (en) * 2019-12-11 2023-05-09 上海交通大学 Pure pose calculation method and system for multi-view camera pose and scene

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157367A (en) * 2015-03-23 2016-11-23 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6996254B2 (en) * 2001-06-18 2006-02-07 Microsoft Corporation Incremental motion estimation through local bundle adjustment
US8837811B2 (en) * 2010-06-17 2014-09-16 Microsoft Corporation Multi-stage linear structure from motion
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN104881869A (en) * 2015-05-15 2015-09-02 浙江大学 Real time panorama tracing and splicing method for mobile platform
CN106097436B (en) * 2016-06-12 2019-06-25 广西大学 A kind of three-dimensional rebuilding method of large scene object
CN106408653B (en) * 2016-09-06 2021-02-02 合肥工业大学 Real-time robust cluster adjustment method for large-scale three-dimensional reconstruction

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157367A (en) * 2015-03-23 2016-11-23 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于单目视觉的同时定位与地图构建方法综述;刘浩敏 等;《计算机辅助设计与图形学学报》;20160630;第 28 卷(第 6 期);第855-868页 *
基于图像序列的目标三维重建技术研究;齐南;《中国优秀硕士学位论文全文数据库 信息科技辑》;20140315(第03期);I138-597 *

Also Published As

Publication number Publication date
CN107330934A (en) 2017-11-07
WO2018214179A1 (en) 2018-11-29

Similar Documents

Publication Publication Date Title
CN110853075B (en) A visual tracking and localization method based on dense point cloud and synthetic view
CN107564061B (en) A binocular visual odometry calculation method based on image gradient joint optimization
CN108038902B (en) High-precision three-dimensional reconstruction method and system for depth camera
CN110009674B (en) A real-time calculation method of monocular image depth of field based on unsupervised deep learning
CN102750704B (en) Step-by-step video camera self-calibration method
CN111862213A (en) Positioning method and apparatus, electronic device, computer-readable storage medium
Ventura et al. A minimal solution to the generalized pose-and-scale problem
CN111735439B (en) Map construction method, map construction device and computer-readable storage medium
CN111179427A (en) Autonomous mobile device, control method thereof, and computer-readable storage medium
CN106940704A (en) A kind of localization method and device based on grating map
CN104732518A (en) PTAM improvement method based on ground characteristics of intelligent robot
CN107423772A (en) A kind of new binocular image feature matching method based on RANSAC
CN103411589B (en) A kind of 3-D view matching navigation method based on four-dimensional real number matrix
CN109974618B (en) Global Calibration Method of Multi-sensor Vision Measurement System
CN107657645B (en) Method for calibrating parabolic catadioptric camera by using properties of conjugate diameters of straight line and circle
CN108280858A (en) A kind of linear global camera motion method for parameter estimation in multiple view reconstruction
CN104036542A (en) Spatial light clustering-based image surface feature point matching method
CN111415375B (en) SLAM method based on multi-fisheye camera and double-pinhole projection model
CN105809706A (en) Global calibration method of distributed multi-camera system
CN107330934B (en) Low-dimensional cluster adjustment calculation method and system
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN106157322A (en) A kind of camera installation site scaling method based on plane mirror
CN113223163A (en) Point cloud map construction method and device, equipment and storage medium
CN111583342A (en) Target rapid positioning method and device based on binocular vision
CN113592015A (en) Method and device for positioning and training feature matching network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant