CN109827502B - High-precision calibration method for line-structured light vision sensor with calibration point image compensation - Google Patents
High-precision calibration method for line-structured light vision sensor with calibration point image compensation
- Publication number: CN109827502B
- Application number: CN201811619300.2A
- Authority: CN (China)
- Legal status: Active (the status listed is an assumption and is not a legal conclusion)
- Classifications: Length Measuring Devices by Optical Means (AREA); Image Processing (AREA)
Description
Technical Field

The present invention relates to the technical field of sensor calibration, and in particular to a high-precision calibration method for a line-structured light vision sensor with calibration point image compensation.
Background Art

As an important means of three-dimensional data acquisition, the line-structured light vision sensor offers a large measuring range, non-contact operation, high speed, and high precision, and is widely used in online dynamic measurement, for example online dynamic measurement of train wheelset dimensions, online measurement of pantograph wear, and online inspection of train bodies. These on-site measurement environments are complex and varied; sensor calibration and measurement are easily affected by sensor layout, ambient light, and other factors, so ideal calibration conditions are hard to satisfy and calibration accuracy is low, which has become a bottleneck restricting measurement accuracy. At present, structured-light calibration methods based on a planar target are widely used for on-site calibration because of their high precision, low cost, flexibility, and convenience. However, their calibration accuracy is still easily degraded by the on-site environment: the image feature points and the light-stripe calibration points can rarely be imaged optimally at the same time, so positioning deviations arise easily. Image processing can only reduce such deviations, not eliminate them, which prevents these methods from achieving calibration in complex on-site environments. Therefore, a calibration method that is insensitive to image noise and works under complex lighting conditions has become a new direction in the development of structured-light vision sensors.
Calibration of a line-structured light vision sensor comprises two parts: calibration of the camera intrinsic parameters and calibration of the light-plane parameters. Camera intrinsic calibration has been studied extensively for a long time, so the discussion here focuses on the calibration of the light-plane parameters, for which many methods exist. Dewar R., in "Self-generated targets for spatial calibration of structured light optical sectioning sensors with respect to an external coordinate system", proposed a drawn-wire calibration method; because the brightness of the bright spots is unevenly distributed and prone to specular highlights, it is difficult to make a bright spot aimed at in space by the measuring equipment correspond strictly to the bright spot in the image, so this method yields few calibration points and low calibration accuracy. The article "Calibration a structured light stripe system: a novel approach" introduced the cross-ratio invariance theory: given at least three collinear points of known coordinates on a three-dimensional target, cross-ratio invariance gives the coordinates of the intersection of the light stripe with the line through those three points. This method obtains high-precision calibration points on the light plane and is suitable for on-site calibration, but it requires a high-precision three-dimensional target composed of at least two mutually perpendicular planes; moreover, because the planes occlude each other's illumination, high-quality calibration images are hard to obtain and the number of calibration points is limited. In "novel calibration method for multi-sensor visual measurement system based on structured light", Liu et al. proposed representing the light-stripe line by Plücker's equation; compared with methods that use fewer calibration points, this method effectively improves calibration accuracy.
In "a novel 1D target-based calibration method with unknown orientation for structured light vision sensor", Wei et al. proposed a calibration method based on a one-dimensional target: the three-dimensional coordinates of the intersection of the light plane with the one-dimensional target are solved from the known distances between the target's feature points, and the light-plane equation is obtained by fitting multiple such intersections. In recent years, spatial geometry has been used as an auxiliary constraint for calibrating structured-light vision sensors fitted with optical filters in complex on-site environments. In "calibration method for line-structured light vision sensor based a single ball target", Liu et al. proposed a calibration method based on a ball target: the outer contour of the ball target is extracted to obtain the ball's pose in the camera coordinate system, and the conic contour formed by the intersection of the light plane with the ball is used to solve the light-plane equation. The extracted ball contour is independent of the target's placement angle, but extracting the outer contour is easily disturbed by the background or the surroundings. "An on-site calibration of line-structured light vision in complex light environment" proposed using a parallel double-cylinder target to calibrate the structured-light parameters on site when the camera carries an optical filter and the environment is complex: using the correspondence between the image plane and the parallel elliptical planes produced by the intersection of the light plane with the two cylinders, and taking as constraint that the cylinder radius equals the minor semi-axis of the spatial ellipse, the light-plane equation is further optimized.
Methods for improving calibration accuracy have also been proposed. For example, in "complete calibration of a structured light stripe vision sensor through planar target of unknown orientations", Zhou et al. proposed a light-plane parameter calibration method based on a planar target: calibration points on the light plane are obtained by cross-ratio invariance, their three-dimensional coordinates are accumulated by repeatedly moving the planar target, and the light-plane equation is obtained by fitting. Owing to its low cost, flexibility, and high precision, this method is widely used for on-site high-precision measurement. However, it still cannot escape the calibration error caused by positioning deviations induced by image noise, and thus cannot achieve higher-precision calibration.
Current calibration methods all take the extracted target feature points or light-stripe calibration points as the true values of the actual image coordinates, obtain a linear solution, and then refine it by minimizing the feature-point reprojection error. They also try to calibrate within the same volume in which measurements will be made. During on-site calibration, however, complex lighting, laser quality, target surface roughness, target placement angle, defocus blur, image noise, and feature-point and stripe extraction deviations inevitably reduce the calibration accuracy. If only the imaging quality of the target feature points is attended to, the light stripe easily dims or thickens; if only the stripe imaging is attended to, the target features tend to be defocused or underexposed. The two requirements conflict and cannot both be satisfied optimally at the same time. A universal, high-precision calibration method for structured-light vision sensors that works under complex outdoor conditions and is immune to image noise is therefore an urgent open problem.
Summary of the Invention

The present invention overcomes the deficiencies of the prior art by providing a high-precision calibration method for a line-structured light vision sensor with calibration point image compensation, which achieves high-precision calibration in complex on-site lighting environments, in particular in the presence of image blur, defocus, and noise.
To achieve the above object, the technical solution of the present invention is realized as follows:

A high-precision calibration method for a line-structured light vision sensor with calibration point image compensation, the method comprising:
a. With the laser off, calibrate the camera of the line-structured light vision sensor.
b. Using a planar metal target inlaid with LED luminous feature points, capture an image of the planar target with the light stripe, and extract the coordinates of the target feature points and of the light-stripe calibration points.
c. Compute the positioning uncertainty of the target feature points and of the light-stripe calibration points. With the feature-point positioning uncertainty as a constraint, solve for the positioning deviation of every feature point by nonlinear optimization and compensate to obtain accurate feature-point coordinates.
d. Move the target more than twice, obtain the three-dimensional coordinates of all light-stripe calibration points in the camera coordinate system, remove outliers by RANSAC, and fit to solve the light-plane equation.
In step a, with the laser off, the camera of the line-structured light vision sensor is calibrated; the improved Zhang Zhengyou calibration method proposed by Liu Zhen et al. is used to calibrate the camera intrinsic parameters and the second-order radial distortion coefficients of the lens.
In step b, an image of the light stripe intersecting the planar target is captured, and the image coordinates of the target feature points and of the light-stripe calibration points are extracted as follows:
(b1) Adjust the metal target so that it intersects the light plane, ensuring that the stripe does not pass through any of the target's luminous feature points.
(b2) Extract the image coordinates of the target feature points with the multi-scale light-spot center method and take them as initial values; extract the stripe center points, fit a straight line through each column of target feature points perpendicular to the stripe direction, and take its intersection with the stripe as the initial value of the corresponding light-stripe calibration point.
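As a minimal sketch of the geometry in step (b2), not the patent's actual implementation: a line is fitted through the centers of one column of feature points, another through the extracted stripe centers, and their intersection gives an initial stripe calibration point. The helper names and sample coordinates below are hypothetical.

```python
import numpy as np

def fit_line(points):
    """Fit a 2D line a*u + b*v + c = 0 (with a^2 + b^2 = 1) by total least squares."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The line's normal is the last right-singular vector of the centered points.
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c

def intersect_lines(l1, l2):
    """Intersection of a1*u + b1*v + c1 = 0 and a2*u + b2*v + c2 = 0 (Cramer's rule)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("lines are (nearly) parallel")
    u = (b1 * c2 - b2 * c1) / det
    v = (a2 * c1 - a1 * c2) / det
    return u, v

# Example: a nearly vertical column of feature-point centers and a nearly
# horizontal stripe; the intersection is the initial stripe calibration point.
feature_line = fit_line([(100.0, 50.0), (100.1, 150.0), (99.9, 250.0)])
stripe_line = fit_line([(50.0, 160.0), (150.0, 160.2), (250.0, 159.8)])
u, v = intersect_lines(feature_line, stripe_line)
```

The total-least-squares fit treats both image coordinates symmetrically, which suits near-vertical feature-point columns where an ordinary v-on-u regression would be ill-conditioned.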
In step c, the positioning uncertainties of the target feature points and of the light-stripe calibration points are solved as follows:
(c1) Process the captured image with several Gaussian convolution kernels, solve multiple positioning coordinates for each feature point, and obtain the positioning uncertainty of each feature point statistically.
(c2) Establish a mathematical model of the positioning uncertainty of the local image around each light-stripe calibration point, estimate the image noise by mean filtering, and thereby obtain the positioning uncertainty of each light-stripe calibration point.
In step c, with the feature-point positioning uncertainty as a constraint, the positioning deviations of the target feature points and of the light-stripe calibration points are solved by nonlinear optimization, and accurate feature-point coordinates are obtained after compensation.
In step d, the final sensor calibration proceeds as follows:
(d1) Based on the compensated light-stripe calibration point coordinates, use the planar-target calibration method to obtain the three-dimensional coordinates of the light-stripe calibration points in the camera coordinate system.
(d2) After obtaining the three-dimensional coordinates of all light-stripe calibration points in camera coordinates, remove outliers with RANSAC and fit to solve an initial value of the light-plane equation.
(d3) Obtain the optimal solution of the light-plane equation with the Levenberg-Marquardt nonlinear optimization method, completing the calibration of the line-structured light vision sensor.
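Step (d2) can be sketched as follows. This is a generic RANSAC-plus-least-squares plane fit under assumed thresholds and iteration counts, not the patent's implementation; the Levenberg-Marquardt refinement of (d3) is omitted.

```python
import numpy as np

def fit_plane_lsq(pts):
    """Total-least-squares plane a*x + b*y + c*z + d = 0 with ||(a, b, c)|| = 1."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                       # direction of smallest variance
    d = -normal @ centroid
    return np.append(normal, d)

def ransac_plane(pts, n_iter=200, tol=0.5, rng=np.random.default_rng(0)):
    """Find the largest consensus set by RANSAC, then refit on the inliers."""
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                   # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((pts - sample[0]) @ n)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane_lsq(pts[best_inliers])

# Example: noisy points on the plane z = 2x + 3y + 1, plus a few gross outliers.
rng = np.random.default_rng(1)
xy = rng.uniform(-10, 10, size=(80, 2))
z = 2 * xy[:, 0] + 3 * xy[:, 1] + 1 + rng.normal(0, 0.05, 80)
pts = np.column_stack([xy, z])
pts[:5] += 50.0                           # simulated outlier calibration points
a, b, c, d = ransac_plane(pts)
```

The recovered coefficients satisfy a : b : c : d ≈ 2 : 3 : -1 : 1 up to sign, matching the synthetic plane despite the outliers.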
Compared with the prior art, the advantages of the present invention are:

The present invention proposes a structured-light vision sensor calibration method based on an uncertainty model. First, positioning-uncertainty models of the target feature points and the light-stripe calibration points are established, and the positioning uncertainty of each feature point is solved. Second, with the feature-point positioning uncertainty as a constraint, and with the constraint that the light-stripe calibration points back-project onto the spatial target with minimum distance to the target plane, an objective equation is established and the positioning deviation of each feature point is obtained by nonlinear optimization. Finally, after compensation, the planar-target calibration method is applied to solve a high-precision light-plane equation. The invention effectively compensates the positioning deviations introduced when extracting target feature points and light-stripe calibration points, and is particularly suited to overcoming the loss of calibration accuracy caused by poor imaging quality, defocus, and image noise in complex on-site environments, improving the accuracy of the calibration.
Description of the Drawings

Fig. 1 is a flowchart of the high-precision calibration method of the structured-light vision sensor based on the uncertainty model according to the present invention;
Fig. 2 is a schematic diagram of the line-structured light vision sensor calibration according to the present invention;
Fig. 3 is a schematic diagram of the positioning uncertainty of the target feature points and the light-stripe calibration points according to the present invention.
Detailed Description

The basic idea of the present invention is as follows: extract initial coordinates of the target feature points and light-stripe calibration points, and determine the positioning uncertainty of all feature points. With the feature-point positioning uncertainty as a constraint, obtain the positioning deviation of each feature point by nonlinear optimization. After compensation, apply the planar-target calibration method again to obtain the three-dimensional coordinates of the light-stripe calibration points in the camera coordinate system. Remove outliers with RANSAC and solve the light-plane equation precisely by optimization, achieving high-precision calibration of the structured-light vision sensor.
The present invention is described in further detail below, taking a line-structured light vision sensor composed of one camera and one line laser as an example.
As shown in Fig. 1, the high-precision calibration method of the structured-light vision sensor based on the uncertainty model of the present invention mainly comprises the following steps:

Step 11: With the line laser off, calibrate the camera of the line-structured light vision sensor.
Calibrating the camera of the vision sensor means solving its intrinsic parameters; the specific method is described in detail in the article on the improved Zhang Zhengyou calibration method proposed by Liu Zhen et al. [Liu Z, Wu Q, Chen X, et al. High-accuracy calibration of low-cost camera using image disturbance factor [J]. Optics Express, 2016, 24(21): 24321-24336.].
Step 12: Turn on the laser and place the planar target directly in front of the camera so that the light plane projected by the line laser intersects the planar target; the camera captures an image of the planar target with the light stripe.
As shown in Fig. 2, let O_c x_c y_c z_c be the camera coordinate system and O_t x_t y_t z_t the target coordinate system; let Y_t be the target plane, Y the laser plane, L_i the intersection line of Y and Y_t, and l_i the image of L_i. Q_1j, Q_2j, Q_3j denote three luminous points in one column of the target; the intersection of the line through these target points with L_i is used for the subsequent solution of the light-plane equation. p_1j, p_2j, p_3j are the image points corresponding to Q_1j, Q_2j, Q_3j, and the image point corresponding to that intersection is defined as the light-stripe calibration point. The light-plane equation can be expressed as ax + by + cz + d = 0.
Step 13: Extract the feature-point image coordinates, i.e., the coordinates of the target feature points and of the light-stripe calibration points.

Specifically, this comprises the following steps:
Step 131: Extract the image coordinates of the target calibration points in the captured image with a multi-scale extraction method (for example, the one described in "Liu Zhen, Shang Yanna. High-precision positioning of the center of multi-scale light spot images [J]. Optics and Precision Engineering, 2013, 21(6): 1586-1591.") to obtain the best image coordinates of all target feature points in the image.
The center of the light stripe is extracted with the method of Steger [Steger C. Unbiased Extraction of Curvilinear Structures from 2D and 3D Images [J]. 1998.].
Step 14: Compute the positioning uncertainty of the target feature points and of the light-stripe calibration points.
Step 141: As shown in Fig. 3, the present invention uses the multi-scale extraction method to obtain both the initial feature-point positions and their uncertainties. The image is processed with several different Gaussian convolutions and the center points are extracted repeatedly; the positioning uncertainty of each target feature point is then obtained statistically, while the coordinates at the best scale are taken as the initial center coordinates. Processing the j-th feature point at the i-th position with m different Gaussian convolution kernels yields m coordinate estimates p_ij forming a point set P_m (the red crosses on the feature points in Fig. 2). From this point set, the mean coordinates and the standard deviations σ_u, σ_v in the u and v directions are computed, and σ_u, σ_v are taken as the positioning uncertainty of the target feature point.
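The statistics at the end of step 141 can be sketched as follows (a hypothetical helper; the multi-scale center extraction itself is taken as given and only the mean and per-axis standard deviations of the m center estimates are computed):

```python
import math

def positioning_uncertainty(centers):
    """Mean center and per-axis standard deviations (sigma_u, sigma_v)
    of m center estimates of one feature point at m Gaussian scales."""
    m = len(centers)
    mean_u = sum(u for u, _ in centers) / m
    mean_v = sum(v for _, v in centers) / m
    sigma_u = math.sqrt(sum((u - mean_u) ** 2 for u, _ in centers) / m)
    sigma_v = math.sqrt(sum((v - mean_v) ** 2 for _, v in centers) / m)
    return (mean_u, mean_v), (sigma_u, sigma_v)

# Example: five center estimates of one feature point at five scales.
centers = [(100.02, 50.01), (100.00, 49.98), (99.97, 50.03),
           (100.05, 50.00), (99.96, 49.98)]
(mu, mv), (su, sv) = positioning_uncertainty(centers)
```

sigma_u and sigma_v then serve as the per-point weights (constraints) in the later nonlinear optimization.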
Step 142: During on-site calibration, affected by laser quality and power, target surface material, and ambient illumination, the extraction of stripe center points is easily disturbed by image noise, producing positioning deviations. The present invention gives a method for solving the uncertainty of a light-stripe calibration point in an arbitrary direction, as follows.
Let g_u, g_v, g_uu, g_uv, g_vv be the partial derivatives of the image after convolution with a Gaussian kernel. The stripe normal vector n(u, v) is obtained from the Hessian matrix. Let (u_0, v_0) be the coordinates of an arbitrary image point, let the stripe cross-section (normal) direction be (n_u, n_v) with ||(n_u, n_v)|| = 1, and denote the orthogonal direction accordingly. The stripe cross-section profile along (n_u, n_v) can then be written as the second-order Taylor expansion

I(u_0 + t n_u, v_0 + t n_v) ≈ g + t (n_u g_u + n_v g_v) + (t²/2)(n_u² g_uu + 2 n_u n_v g_uv + n_v² g_vv).

Setting its derivative with respect to t to zero yields

t = -(n_u g_u + n_v g_v) / (n_u² g_uu + 2 n_u n_v g_uv + n_v² g_vv).

Therefore the gray-level maximum, i.e. the stripe center point, is (p_u, p_v) = ((t n_u + u_0), (t n_v + v_0)).
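The center formula above can be sketched directly. In this hypothetical example the Gaussian-smoothed derivatives are supplied analytically for a horizontal Gaussian ridge rather than computed from a real image, so only the sub-pixel step itself is exercised:

```python
import math

def stripe_center(u0, v0, n, g1, H):
    """Sub-pixel stripe center from first derivatives g1 = (g_u, g_v) and
    Hessian H = [[g_uu, g_uv], [g_uv, g_vv]] at pixel (u0, v0),
    along the unit normal n = (n_u, n_v)."""
    nu, nv = n
    num = nu * g1[0] + nv * g1[1]
    den = nu * nu * H[0][0] + 2 * nu * nv * H[0][1] + nv * nv * H[1][1]
    t = -num / den
    return u0 + t * nu, v0 + t * nv

# Example: Gaussian ridge centered at v = 10.3 (width sigma = 2),
# evaluated at pixel (u0, v0) = (5, 10); derivatives taken analytically.
sigma, vc = 2.0, 10.3
u0, v0 = 5.0, 10.0
d = v0 - vc
g = math.exp(-d * d / (2 * sigma ** 2))
g_v = -d / sigma ** 2 * g                          # dI/dv of the ridge profile
g_vv = (d * d / sigma ** 4 - 1 / sigma ** 2) * g   # d^2I/dv^2
pu, pv = stripe_center(u0, v0, (0.0, 1.0), (0.0, g_v), [[0.0, 0.0], [0.0, g_vv]])
```

The quadratic step recovers the ridge center to within about a hundredth of a pixel here; on real images the derivatives come from Gaussian-derivative convolutions.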
Taking the stripe center point (0, 0) of the noise-free image as the origin, an o-rc coordinate system is established with the c axis along (n_u, n_v); let the gray-level profile in the (n_u, n_v) direction be h(c, r). Solving for the uncertainty of (p_u, p_v) then reduces to solving for the uncertainty of c_0 at h(c, r)|_{r=0}. Let h(c, r) = I(c, r) + N(c, r), where I(c, r) is the ideal image and N(c, r) is zero-mean image noise of given variance. From [C. Steger], the first derivative of the smoothed profile vanishes at (c_0, r_0); here the gray distribution, the ideal image distribution, and the image noise are each taken after convolution with the Gaussian kernel, and the first- and second-order partial derivatives are taken along the c axis at (c_0, r_0). A Taylor expansion is then performed at (0, 0).
Since the ideal profile can be written as a Gaussian with maximum gray value M and kernel width σ_w, the expectation of c_0 and the corresponding variance are obtained; from Eqs. (8) and (9) the center-point positioning variance follows, and substituting Eq. (11) into Eq. (10) gives the positioning variance of the center-point coordinate.
Here σ_w can be obtained by fitting the gray-level profile in the n_c direction with the fittype() function in MATLAB, using a Gaussian fitting-function prototype.
Decomposing this variance into the o-uv coordinate system yields the uncertainty components of the light-stripe calibration point.
Step 15: Based on the feature-point positioning uncertainties determined in step 14, and with these uncertainties as constraints, an objective is established that minimizes the back-projection error between the target feature-point images and the spatial target while requiring the light-stripe calibration points to back-project onto the spatial target plane with minimal point-to-plane distance; the positioning deviations of the feature points are obtained by nonlinear optimization.
Step 151: Decompose the imaging process of the target feature points and establish the transformation equations between the perspective-projection points, the distortion points, and the final noisy points.
Let p_u = [u_u, v_u, 1]^T, p_d = [u_d, v_d, 1]^T, and p_n = [u_n, v_n, 1]^T be, respectively, the undistorted point, the distorted point, and the actual imaged point in the image coordinate system. From the imaging light path, the imaging of a spatial target point Q = [x, y, 1]^T decomposes into three processes: a perspective-projection model, a lens-distortion model, and an image-noise superposition model. The perspective-projection model can be expressed as

ρ p_u = H_{3×3} Q,  H_{3×3} = K [r_1  r_2  t],

where H_{3×3} is the homography between the target plane and the image plane, ρ is a constant scale factor, K is the camera intrinsic matrix, u_0 and v_0 are the principal-point coordinates, γ is the skew factor of the image u, v axes, and r_1, r_2, t are the first two columns of the rotation matrix and the translation vector, respectively. The lens-distortion model can be expressed as

u_d = u_u + (u_u - u_0)(k_1 r² + k_2 r⁴),
v_d = v_u + (v_u - v_0)(k_1 r² + k_2 r⁴),  r² = x_n² + y_n²,

where k_1, k_2 are the first two radial distortion coefficients of the lens and [x_n, y_n] are the normalized image coordinates. In practice, the first two orders of radial distortion describe the lens distortion with sufficiently high accuracy. Let the image deviation caused by image noise and other factors be (Δu, Δv); the distorted point then maps to the actual imaged point as

p_n = p_d + [Δu, Δv, 0]^T.
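A minimal sketch of the two-term radial distortion step, assuming zero skew and hypothetical focal lengths fx, fy for the normalization (the patent's model as described, but with illustrative parameter values):

```python
def distort(uu, vu, u0, v0, fx, fy, k1, k2):
    """Map an undistorted pixel (uu, vu) to its distorted position using the
    two-term radial model: radius in normalized coordinates, shift in pixels."""
    xn = (uu - u0) / fx
    yn = (vu - v0) / fy
    r2 = xn * xn + yn * yn
    factor = k1 * r2 + k2 * r2 * r2
    ud = uu + (uu - u0) * factor
    vd = vu + (vu - v0) * factor
    return ud, vd

# Example: mild barrel distortion (k1 < 0) pulls the point toward the principal point.
ud, vd = distort(800.0, 600.0, 640.0, 480.0, 1000.0, 1000.0, -0.1, 0.0)
```

In the optimization of step 151 this map is inverted implicitly: the deviations (Δu, Δv) are solved so that the compensated points obey the projection and distortion models simultaneously.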
For the i-th target placement, let the homogeneous coordinates of the j-th target point in the target and image coordinate systems be Q_j = [x_j, y_j, 1]^T and p_ij, respectively. p_u(ij) is computed from p_ij through Eqs. (14) and (15), and the matrix H_i is then solved from p_u(ij) and Q_j = [x_j, y_j, 1]^T through Eq. (13), with the initial values of Δu_ij, Δv_ij in Eq. (15) set to 0.
Step 152: Establish the objective that the back-projection error between the target feature-point images and the spatial target is minimal and that the light-stripe calibration points back-project onto the spatial target plane with minimal point-to-plane distance, and obtain the positioning deviations of the feature points by nonlinear optimization.
From the mapping matrix H_i from the target plane at the i-th position to the image plane, Q_j gives through Eq. (13) the homogeneous coordinates p_n(ij) of the projection of the j-th target feature point in the image coordinate system. The first objective function is established with the constraints that the distance between p_ij and p_n(ij) is minimal and that the distance between the statistical image center points is minimal,
where D(p_ij, p_n(ij)) denotes the distance between p_ij and p_n(ij), M the number of target placements, and N the number of target feature points.
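The core of this first objective can be sketched as an RMS reprojection residual for one placement (a hypothetical helper; the statistical-center term and the uncertainty weighting are omitted):

```python
import numpy as np

def reprojection_rms(H, target_pts, image_pts):
    """Project planar target points through homography H and return the
    RMS distance to the observed image points (one placement)."""
    Q = np.column_stack([target_pts, np.ones(len(target_pts))])  # [x, y, 1]
    p = (H @ Q.T).T
    p = p[:, :2] / p[:, 2:3]                                     # dehomogenize
    d = np.linalg.norm(p - image_pts, axis=1)
    return float(np.sqrt((d ** 2).mean()))

# Example: a similarity homography (scale 2, shift (10, 20)) and exact, then
# horizontally shifted, observations.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])
target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
observed = target * 2 + np.array([10.0, 20.0])
rms = reprojection_rms(H, target, observed)
rms_shift = reprojection_rms(H, target, observed + np.array([0.5, 0.0]))
```

Summing such residuals over all M placements and N points gives the quantity the optimization minimizes.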
The homogeneous coordinate q_n(ij) of p_ij back-projected into the target coordinate system is computed through equation (13). Taking as the objective the minimal distance between q_j and q_n(ij) together with the minimal distance between the statistical centers of the target points and the projected points, the second objective function is established as:

e_2 = Σ_{i=1..M} Σ_{j=1..N} D(q_j, q_n(ij)) (17)
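A sketch of these first two objective terms — forward reprojection error against the image points and back-projection error against the target points — might look as follows for a single target pose (function names and the squared-distance form are my own assumptions, not the patent's exact expressions):

```python
import numpy as np

def project(H, pts):
    """Map 2-D points through a 3x3 homography and dehomogenize."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    out = (H @ ph.T).T
    return out[:, :2] / out[:, 2:3]

def reprojection_errors(H, Q, p_obs):
    """e1: target -> image reprojection error; e2: image -> target back-projection error."""
    e1 = np.sum(np.linalg.norm(project(H, Q) - p_obs, axis=1) ** 2)
    e2 = np.sum(np.linalg.norm(project(np.linalg.inv(H), p_obs) - Q, axis=1) ** 2)
    return e1, e2
```

Summing these terms over all M poses and feeding them to a nonlinear least-squares solver (e.g. Levenberg-Marquardt) reproduces the structure of the optimization described above.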
The cross-ratio invariance constraint, an important constraint in projective geometry, exploits the rigidity of the target to better optimize the coordinates of the image feature points, and is widely used in camera intrinsic calibration and related steps. The present invention establishes a cross-ratio constraint between the image coordinates of the target feature points and the target itself, giving the third objective function:

e_3 = Σ_{i=1..M} [ Σ_{k=1..K} (CR_h(image) − CR_h(target))² + Σ_{l=1..L} (CR_v(image) − CR_v(target))² ] (18)
where CR_h and CR_v denote the cross ratios in the horizontal and vertical directions, respectively. To keep the constraint strong while maintaining computational efficiency, four points are selected at a time in the horizontal direction to compute a cross ratio, giving K combinations in total; likewise, four points at a time are selected in the vertical direction, giving L combinations. The specific values of K and L are determined by the number of points; it suffices that the selected target feature points uniformly cover the whole image plane.
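The cross ratio of four collinear points — the quantity constrained by e_3 — can be computed from scalar positions along the line, and any projective map of the line leaves it unchanged, which is exactly what the constraint exploits. An illustrative sketch:

```python
def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC) / (AD/BD) of four collinear points
    given as scalar positions a, b, c, d along the line."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))
```

For example, the points 0, 1, 2, 3 have cross ratio 4/3, and pushing all four through any fractional-linear map x ↦ (αx + β)/(γx + δ) yields the same value.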
Let the forward projection of a light-stripe calibration point intersect the target at the spatial 3-D point x_ik, and require the distance from x_ik to the target plane fitted from the projections of the points p_ij in the target coordinate system to be minimal. Taking the minimal point-to-plane distance, the fourth objective function is established as:

e_4 = Σ_{i=1..M} Σ_k d(x_ik, F_i) (19)
where F_i denotes the plane equation fitted from the target points at the i-th target placement, and d(x_ik, F_i) denotes the distance from the point to the plane.
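The point-to-plane distance d(x_ik, F_i) used in e_4 is the standard normalized evaluation of the plane equation; a short sketch (illustrative names):

```python
import math

def point_plane_distance(point, plane):
    """Distance from 3-D point (x, y, z) to the plane a*x + b*y + c*z + d = 0."""
    x, y, z = point
    a, b, c, d = plane
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)
```

The normalization by sqrt(a² + b² + c²) makes the value independent of the scale of the plane coefficients.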
Meanwhile, let the line fitted to the k-th column of target feature points and the line fitted to the light stripe intersect at the point c_ik. Taking the minimal distance between the corresponding light-stripe calibration point and c_ik, the fifth objective function is established as:

e_5 = Σ_{i=1..M} Σ_k D(p_ik, c_ik) (20)
Combining the five objective functions gives:
E(a) = e_1 + e_2 + e_3 + e_4 + e_5 (21)
For the deviations Δu_ij, Δv_ij of the feature points (and likewise those of the light-stripe calibration points), an optimization range constraint is added, as shown in equation (22):

|Δu_ij| ≤ n·σ_u(ij), |Δv_ij| ≤ n·σ_v(ij) (22)
where σ_u(ij) and σ_v(ij) are the localization uncertainties of the image feature point of the j-th target point at the i-th target placement, and the corresponding quantity for the k-th light-stripe calibration point at the i-th target position is its localization uncertainty. n is a non-zero scale factor, set to 9 in the present invention on the basis of extensive trials.
Step 16: Obtain the positioning deviation of each feature point from the nonlinear optimization of Step 15; after compensation, the accurate feature point coordinates are obtained, and the undistorted coordinates follow by removing the lens distortion.
Step 17: The homography determined by the target feature points maps the light-stripe calibration points into 3-D space, yielding a list of 3-D points on the light plane. RANSAC is used to reject gross outliers, the least-squares method then provides an initial estimate of the light plane, and finally nonlinear optimization yields the maximum-likelihood solution of the light plane.
Let p~_ij be the undistorted target feature point coordinates, Q_ij the target feature point coordinates, and H~_i the homography between the target plane and the image plane; let p~_ik be the undistorted light-stripe calibration point coordinates, q_ik the corresponding point in the target coordinate system, and x_ik the 3-D point, in camera coordinates, on the intersection line of the light plane and the target. The image points and target points then satisfy s·p~_ij = H~_i·Q_ij, where s is a non-zero scale factor. From the invertibility of the homography mapping, it follows that:

q_ik = s⁻¹·H~_i⁻¹·p~_ik
Suppose H~_i can be decomposed into the rotation-matrix columns r_1, r_2 and the translation vector t; then the light-stripe point in camera coordinates is obtained as x_ik = [r_1 r_2 t]·q_ik.
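Given that decomposition, mapping a target-plane point into camera-frame 3-D coordinates is a single matrix product. A sketch assuming the decomposition into [r_1 r_2 t] has already been performed (the function name is illustrative):

```python
import numpy as np

def target_to_camera(H_rt, q):
    """Map a target-plane point q = (x, y) to camera-frame 3-D coordinates.

    H_rt is the 3x3 matrix [r1 r2 t] whose columns are the first two
    rotation columns and the translation vector of the target pose.
    """
    return H_rt @ np.array([q[0], q[1], 1.0])
```

For example, with an identity rotation and a translation of 5 along the optical axis, the target point (2, 3) lands at (2, 3, 5) in the camera frame.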
Let the light plane equation be expressed as ax + by + cz + d = 0. Substituting the points x_ik into the RANSAC-constrained least-squares method yields an initial solution of the light plane equation. Based on the constraint that the distance from the points on the light plane to the plane is minimal, the following objective function can be established:

f(a, b, c, d) = Σ_{i=1..S} Σ_{k=1..M} d(x_ik, F)²
where a, b, c, d are the four coefficients of the light plane equation and x_ik = [x_ik, y_ik, z_ik, 1] denotes the k-th calibration point, obtained at the i-th target position from the intersection of the light plane with the target. S is the number of target placements and M is the number of light-stripe calibration points obtained at each position. The optimal a, b, c, d are obtained by maximum-likelihood estimation via a nonlinear optimization method.
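The RANSAC-plus-least-squares plane estimate of Step 17 can be sketched as follows. This is a hedged illustration, not the patent's implementation: the least-squares step is an SVD-based total-least-squares fit (the normal is the singular vector of the centered points with the smallest singular value), and the RANSAC loop, threshold, and iteration count are my own assumptions:

```python
import numpy as np

def fit_plane_lsq(pts):
    """Total-least-squares plane through 3-D points (N, 3).

    Returns [a, b, c, d] with a^2 + b^2 + c^2 = 1, minimizing the sum of
    squared point-to-plane distances.
    """
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]                    # unit normal of the best-fit plane
    return np.append(n, -n @ centroid)

def ransac_plane(pts, n_iter=200, tol=0.01, rng=np.random.default_rng(0)):
    """Reject gross outliers with a simple RANSAC loop, then refit on inliers."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        plane = fit_plane_lsq(sample)
        dist = np.abs(pts @ plane[:3] + plane[3])   # unit normal => true distance
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane_lsq(pts[best_inliers])
```

The result would then serve as the initial value for the nonlinear maximum-likelihood refinement described above.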
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811619300.2A CN109827502B (en) | 2018-12-28 | 2018-12-28 | High-precision calibration method for line-structured light vision sensor for calibration point image compensation |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109827502A CN109827502A (en) | 2019-05-31 |
| CN109827502B true CN109827502B (en) | 2020-03-17 |
Family
ID=66861331
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |