
CN112465898B - Object 3D pose tag acquisition method based on checkerboard calibration plate - Google Patents


Info

Publication number: CN112465898B (granted; published from application CN112465898A)
Application number: CN202011309922.2A
Authority: CN (China)
Legal status: Active (granted)
Original language: Chinese (zh)
Prior art keywords: coordinate system, pose, checkerboard, camera, transformation matrix
Inventors: 庄春刚, 熊振华, 朱向阳, 雷海波, 王合胜, 王浩宇
Original and current assignee: Shanghai Jiao Tong University
Application filed by Shanghai Jiao Tong University, priority to CN202011309922.2A

Classifications

    • G06T7/70 — Image analysis; determining position or orientation of objects or cameras (G: Physics; G06: Computing or calculating; G06T: Image data processing or generation)
    • G06T7/80 — Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration


Abstract

The invention discloses an object 3D pose label acquisition method based on a checkerboard calibration board, relating to the technical field of object pose acquisition. The method comprises the following steps: establishing a pose template of the object and the checkerboard calibration board from the object's CAD model; calibrating the camera intrinsic parameters, extrinsic parameters and radial distortion parameters; acquiring the pose of the object in the camera coordinate system; and verifying the correctness of the acquired pose. By constraining the pose relationship between the object and the checkerboard and applying checkerboard calibration, the invention provides a low-cost, high-precision method for acquiring the real pose of an object, and supplies ground-truth verification labels for object pose estimation methods.

Description

Object 3D Pose Label Acquisition Method Based on a Checkerboard Calibration Board

Technical Field

The present invention relates to the technical field of object pose acquisition, and in particular to a method for acquiring object pose labels based on a checkerboard calibration board.

Background

With the development of deep learning, more and more networks address the estimation of object pose in space. A spatial pose estimation network requires a large amount of training data with pose labels, and it also requires pose labels of objects in real scenes as the basis for measuring the network's regression accuracy. Acquiring the real pose of an object is therefore of great importance both for dataset construction and for evaluating pose estimation accuracy.

Existing object pose information is mainly obtained in two ways: through a physics simulation engine, which yields a simulated dataset and its labels, or indirectly from the real scene. In "Large-scale 6D Object Pose Estimation Dataset for Industrial Bin-Picking", Kilian Kleeberger et al. use the iterative closest point algorithm to obtain object labels for real scenes. In "Symmetry Aware Evaluation of 3D Object Detection and Pose Estimation in Scenes of Many Parts in Bulk", Romain Bregier et al. use markers attached to the object to obtain its pose information. For verifying pose estimation accuracy, Chien-Ming Lin et al., in "Visual Object Recognition and Pose Estimation Based on a Deep Semantic Segmentation Network", use a two-degree-of-freedom turntable to provide labels for the relative pose of objects in real scenes.

Indirect pose acquisition relies on complete, reliable 3D scene information and a highly robust template matching algorithm; in some complex scenes, interfering information must additionally be removed by hand. Data generated by a physics engine suffers an unavoidable loss of precision when generalized to real scenes. The estimation accuracy of an object's absolute pose in the camera coordinate system directly determines the grasping success rate, so an evaluation based on relative pose cannot reflect how a pose estimation method performs in grasping tasks.

Those skilled in the art are therefore seeking an object pose label acquisition method, based on a checkerboard calibration board, that neither relies on 3D point cloud information nor requires a highly robust matching algorithm.

Summary of the Invention

In view of the above defects in the prior art, the technical problem to be solved by the present invention is to provide a low-cost, high-precision method for acquiring the pose of objects in real scenes, thereby providing ground-truth verification labels for object pose estimation methods.

To achieve the above object, the present invention provides an object 3D pose label acquisition method based on a checkerboard calibration board, comprising the following steps:

Step 1. Establish a CAD model of the object.

Step 2. Project the CAD model at 1:1 scale to obtain the object outline template.

Step 3. Align the object outline template with the origin of the checkerboard calibration board.

Step 4. Compute the transformation matrix from the object coordinate system to the checkerboard coordinate system, obtaining the first transformation matrix.

Step 5. Place the object and the checkerboard on the template, and calibrate the camera extrinsic parameters, intrinsic parameters and radial distortion parameters.

Step 6. Move the camera so that the object and the checkerboard are both within its field of view, and capture a single image.

Step 7. Correct the image with the radial distortion parameters, and compute the transformation matrix from the checkerboard coordinate system to the camera coordinate system, obtaining the second transformation matrix.

Step 8. From the first and second transformation matrices, compute the transformation matrix from the object coordinate system to the camera coordinate system, obtaining the third transformation matrix.

Step 9. Sample the CAD model to obtain the coordinates of the sample points in the object coordinate system (the sampled object coordinates), then use the third transformation matrix to compute their coordinates in the camera coordinate system (the sampled camera coordinates).

Step 10. Use an object pose estimation method to obtain the estimated pose of the object in the camera coordinate system, compute the sample points' coordinates in the camera coordinate system from it (the sampled estimated coordinates), and compare them with the sampled camera coordinates to obtain a pose estimation accuracy index.
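Steps 7–8 reduce to composing two rigid transforms. The following NumPy sketch is our own illustration with toy numbers, not the patent's implementation; the function and variable names are ours:

```python
import numpy as np

def compose_obj2camera(T_board2camera, T_board2obj):
    """Third transformation matrix: T_obj2camera = T_board2camera @ inv(T_board2obj)."""
    return T_board2camera @ np.linalg.inv(T_board2obj)

# Toy example: board frame translated 100 mm along camera X,
# object frame translated (0, -90, 0) mm relative to the board frame.
T_board2camera = np.eye(4)
T_board2camera[0, 3] = 100.0
T_board2obj = np.eye(4)
T_board2obj[1, 3] = -90.0

T_obj2camera = compose_obj2camera(T_board2camera, T_board2obj)
```

With identity rotations, the composed translation is simply the sum of the board-to-camera offset and the inverted object offset.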

Further, step 3 comprises: selecting template feature points in the object outline template, aligning the template feature points with the origin of the checkerboard calibration board, and computing the offsets of the template feature points.

Further, the first transformation matrix is:

(The matrix itself appears only as an image in the original document; it is the 4×4 homogeneous transform T_board2obj, whose entries are determined by the parameters below.)

where x_f, y_f, z_f are the coordinates of the template feature point in the object coordinate system, Δx, Δy are the offsets from the template feature point to the origin of the checkerboard calibration board, and Δh is the thickness of the high-precision checkerboard calibration board (0 when no high-precision board is used).

Further, the camera extrinsic parameters are obtained by fitting the checkerboard corner points to their actual coordinates, and the camera intrinsic parameters are computed by Zhang Zhengyou's calibration method.

Further, the camera intrinsic parameters and the radial distortion parameters are, respectively:

$$K=\begin{pmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{pmatrix},\qquad k=(k_1,\;k_2)$$

where f is the focal length of the industrial camera, dx and dy are the pixel pitches in the horizontal and vertical directions, u_0, v_0 are the principal point coordinates of the image, and k_1, k_2 are the radial distortion parameters.
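Under this pinhole model, the intrinsic matrix can be assembled directly from the stated parameters. A short sketch with made-up example values (an 8 mm lens and 4.8 µm square pixels are our assumptions, not the patent's calibration results):

```python
import numpy as np

def intrinsic_matrix(f, dx, dy, u0, v0):
    """Pinhole intrinsic matrix from focal length, pixel pitches and principal point."""
    return np.array([
        [f / dx, 0.0,    u0],
        [0.0,    f / dy, v0],
        [0.0,    0.0,    1.0],
    ])

# Illustrative values only: f = 8 mm, 4.8 um pixels, VGA image center.
K = intrinsic_matrix(f=8.0, dx=0.0048, dy=0.0048, u0=320.0, v0=240.0)

# Project a camera-frame point (X, Y, Z) = (10, 5, 1000) mm to pixel coordinates.
p = K @ np.array([10.0, 5.0, 1000.0])
u, v = p[0] / p[2], p[1] / p[2]
```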

Further, the second transformation matrix is:

$$T_{board2camera}=\begin{pmatrix} R & t \\ 0^{T} & 1 \end{pmatrix}$$

where R is the rotation from the checkerboard coordinate system to the camera coordinate system, and t is the translation vector from the checkerboard coordinate system to the camera coordinate system.

Further, the third transformation matrix is:

$$T_{obj2camera}=T_{board2camera}\cdot T_{board2obj}^{-1}$$

Further, the sampled object coordinates are:

$$P_{obj}=\{P_{obj}^{i}\mid i=1,\dots,N\}$$

where $P_{obj}^{i}$ is the coordinate of the i-th sample point in the object coordinate system.

Further, the sampled camera coordinates are:

$$P_{cam}^{i}=T_{obj2camera}\cdot P_{obj}^{i}$$

where $P_{cam}^{i}$ is the coordinate of the i-th sample point in the camera coordinate system, obtained with the third transformation matrix.

Further, the sampled estimated coordinates are:

$$\hat{P}_{cam}^{i}=\hat{T}_{obj2camera}\cdot P_{obj}^{i}$$

where $\hat{P}_{cam}^{i}$ is the coordinate of the i-th sample point in the camera coordinate system, computed with the transformation matrix $\hat{T}_{obj2camera}$ obtained by the pose estimation method.

Further, the pose estimation accuracy index is computed as:

$$e=\frac{1}{N}\sum_{i=1}^{N}\left\|\hat{P}_{cam}^{i}-P_{cam}^{i}\right\|$$

where $\hat{P}_{cam}^{i}$ and $P_{cam}^{i}$ are the sampled estimated coordinate and the sampled camera coordinate of the i-th sample point, respectively.

Compared with the prior art, the present invention has at least the following beneficial technical effects:

1. The pose of an object in the camera coordinate system can be acquired at low cost.

2. The pose of the object is computed directly from the checkerboard-to-camera coordinate transformation, avoiding robot motion errors and other error sources present in existing methods.

3. The invention provides real-scene labels for object pose estimation, laying a foundation for building real datasets for pose estimation and for evaluating pose estimation methods.

The idea, specific structure and technical effects of the present invention are further described below in conjunction with the accompanying drawings, so that its purpose, features and effects can be fully understood.

Brief Description of the Drawings

Fig. 1 is a flow chart of a preferred embodiment of the present invention;

Fig. 2 is a schematic diagram of the CAD model in a preferred embodiment;

Fig. 3 is a schematic diagram of the object outline template in a preferred embodiment;

Fig. 4 is a schematic diagram of the alignment between the object outline template and the checkerboard calibration board in a preferred embodiment;

Fig. 5 is a schematic diagram of the relationship between the object coordinate system and the checkerboard coordinate system in a preferred embodiment;

Fig. 6 is a schematic diagram of the eye-in-hand calibration model in a preferred embodiment;

Fig. 7 is a schematic diagram of CAD model sampling in a preferred embodiment;

Fig. 8 is a distribution diagram of the sample point errors in a preferred embodiment.

Reference numerals: 1 – CAD model of the connecting-rod workpiece; 2 – checkerboard calibration board; 3 – industrial camera; 4 – industrial robot.

Detailed Description

Several preferred embodiments of the present invention are described below with reference to the accompanying drawings, to make its technical content clearer and easier to understand. The present invention can be embodied in many different forms, and its scope of protection is not limited to the embodiments mentioned herein.

In the drawings, components with the same structure are denoted by the same reference numerals, and components with similar structures or functions by similar numerals. The size and thickness of each component shown in the drawings are arbitrary; the present invention does not limit them. For clarity, the thickness of some parts is exaggerated in places.

The object in this embodiment is a connecting-rod workpiece, common in industrial scenes. Fig. 1 shows the flow chart of this embodiment, which comprises the following steps:

Step 1. Establish the CAD model 1 of the connecting-rod workpiece. The model is obtained from the product design stage, or alternatively via a reconstruction system or by measurement; CAD model 1 is shown in Fig. 2.

Step 2. Project CAD model 1 at 1:1 scale to obtain the outline template of the connecting-rod workpiece, shown in Fig. 3.

Step 3. Align the outline template obtained in step 2 with the origin of checkerboard calibration board 2. In this embodiment, the center of the small shaft hole is selected as the template feature point, and the distance between the feature point and the checkerboard origin is fixed at Δx = 0 mm, Δy = −90 mm. The alignment of the outline template with checkerboard calibration board 2 is shown in Fig. 4.

Step 4. From the coordinates of the template feature point in the object coordinate system, and its offset from the checkerboard coordinate origin, compute the transformation matrix from the object coordinate system to the checkerboard coordinate system.

The relationship between the object coordinate system of the connecting-rod workpiece and the checkerboard coordinate system is shown in Fig. 5. The object coordinate system {O₁} is placed at the dimensional center of the workpiece, i.e. the coordinate origin lies at the workpiece's dimensional center. Its X axis points from this center toward the center of the small shaft hole, its Y axis points from this center upward from the front face of the workpiece, and the coordinate system follows the right-hand rule. Following the definition in Zhang Zhengyou's calibration method, the origin of the checkerboard coordinate system {O₂} lies at the lower-right corner point shown in Fig. 5; its X axis points from that corner toward the direction with more squares, its Y axis toward the direction with fewer squares, and the coordinate system follows the right-hand rule.

From CAD model 1 of the connecting-rod workpiece, the projection of the small shaft hole center has object-frame coordinates x_f = 50.5 mm, y_f = −12.5 mm, z_f = 0 mm. The transformation matrix from the object coordinate system to the checkerboard coordinate system is then:

(numeric matrix rendered as an image in the original document)

where Δx is positive when it points in the same direction as the X axis of the object coordinate system, and Δy is positive when it points opposite to the Z axis of the object coordinate system.

Align the outer contour of the connecting-rod workpiece with its outline template; if a high-precision checkerboard calibration board 2 is used, also align it with the checkerboard template. With a board thickness of Δh = 3 mm, the transformation matrix from the object coordinate system to the checkerboard coordinate system becomes:

(numeric matrix rendered as an image in the original document)

where Δh is positive when it points in the same direction as the Y axis of the object coordinate system.

Step 5. Place CAD model 1 of the connecting-rod workpiece and the calibration board under the assembled eye-in-hand calibration system, shown in Fig. 6. Following Zhang Zhengyou's camera calibration procedure, keep checkerboard calibration board 2 fixed and vary the configuration of industrial robot 4 to collect 10–20 checkerboard images.

According to Zhang Zhengyou's camera calibration method, the intrinsic parameters of industrial camera 3 are computed:

(numeric intrinsic matrix rendered as an image in the original document)

where f is the focal length of industrial camera 3, dx and dy are the pixel pitches in the horizontal and vertical directions, and u_0, v_0 are the principal point coordinates of the image.

The radial distortion parameters of industrial camera 3:

(numeric values rendered as an image in the original document)

Step 6. After the intrinsic and distortion parameters of industrial camera 3 have been calibrated, move industrial robot 4 so that the connecting rod and the calibration board are both in the camera's field of view, and capture the image from industrial camera 3. To obtain images and labels of the connecting rod at different distances and attitudes in the camera coordinate system, keep the workpiece and checkerboard calibration board 2 fixed, and change the configuration of industrial robot 4 while keeping both within the camera's field of view, so that the attitude of the workpiece in the camera coordinate system varies.

Step 7. Correct the image captured in step 6 with the radial distortion parameters k obtained in step 5. The relationship between the corrected image pixel coordinates and the original image pixel coordinates is:

$$\hat{u}=u+(u-u_0)\left(k_1 r^{2}+k_2 r^{4}\right)$$

$$\hat{v}=v+(v-v_0)\left(k_1 r^{2}+k_2 r^{4}\right)$$

where $\hat{u}$, $\hat{v}$ are the image pixel coordinates produced by industrial camera 3 after radial distortion, u, v are the pixel coordinates of the ideal (undistorted) image, u_0, v_0 are the principal point coordinates, k_1, k_2 are the radial distortion parameters, and r is the distance from the pixel to the principal point.

Using Zhang Zhengyou's calibration method on the undistorted image, obtain the transformation matrix from the checkerboard coordinate system to the camera coordinate system:

(numeric matrix rendered as an image in the original document)

Step 8. From the results of steps 4 and 7, obtain the pose of the workpiece in the image, i.e. the homogeneous transformation matrix from the object coordinate system of the connecting rod to the camera coordinate system:

$$T_{obj2camera}=T_{board2camera}\cdot T_{board2obj}^{-1}$$

which evaluates to:

(numeric matrix rendered as an image in the original document)

Step 9. Randomly sample CAD model 1 of the connecting-rod workpiece to obtain the coordinates P_obj of the sample points in the object coordinate system (the sample points are shown in Fig. 7), and compute their coordinates P_cam in the camera coordinate system.
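Step 9 amounts to mapping homogeneous sample points through the third transformation matrix. A sketch with randomly generated stand-in points (a real run would sample the CAD mesh; the toy pose below is our assumption):

```python
import numpy as np

def to_camera_frame(P_obj, T_obj2camera):
    """Map Nx3 object-frame sample points into the camera frame via a 4x4 transform."""
    P_h = np.hstack([P_obj, np.ones((P_obj.shape[0], 1))])  # homogeneous Nx4
    return (T_obj2camera @ P_h.T).T[:, :3]

rng = np.random.default_rng(0)
P_obj = rng.uniform(-50.0, 50.0, size=(100, 3))  # stand-in for CAD samples (mm)

T = np.eye(4)
T[:3, 3] = [0.0, 0.0, 600.0]                     # toy pose: 600 mm in front of the camera
P_cam = to_camera_frame(P_obj, T)
```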

Step 10. To evaluate the accuracy of an existing object pose estimation network, use the object pose estimated by that network to compute the camera-frame coordinates $\hat{P}_{cam}$ of the sample points from step 9. The average distance between $\hat{P}_{cam}$ and $P_{cam}$ serves as the accuracy index of the network's pose estimation. In this embodiment the average distance is 4.337 mm; the per-point errors are shown in Fig. 8. The average distance describes the error between the pose produced by the network and the pose label given here: the larger the average distance, the larger the network's pose error and the lower its accuracy.

The object pose label acquisition method of this embodiment, based on checkerboard calibration board 2, provides a low-cost, high-precision way to obtain labels for objects in real scenes. The average distance computed from these labels can supply a loss definition for network training, and at the same time gives an accuracy index for the network's results.

The preferred specific embodiments of the present invention have been described in detail above. It should be understood that those skilled in the art can make many modifications and changes according to the concept of the present invention without creative effort. Therefore, all technical solutions that persons skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning or limited experiments in accordance with the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (10)

1.一种基于棋盘格标定板的物体3D位姿标签获取方法,其特征在于,包括如下步骤:1. an object 3D pose label acquisition method based on a checkerboard calibration plate, is characterized in that, comprises the steps: 步骤1、建立物体的CAD模型;Step 1, establishing a CAD model of the object; 步骤2、将CAD模型等比例投影得到物体轮廓模板;Step 2, projecting the CAD model in equal proportions to obtain the object outline template; 步骤3、将所述物体轮廓模板和棋盘格标定板模板原点对齐;Step 3, aligning the origin of the object outline template and the checkerboard calibration plate template; 步骤4、将物体和所述物体轮廓模板对齐,将棋盘格标定板与所述棋盘格标定板模板对齐,计算物体坐标系到棋盘格坐标系的变换矩阵,得到第一变换矩阵;Step 4, aligning the object with the object outline template, aligning the checkerboard calibration plate with the checkerboard calibration plate template, calculating the transformation matrix from the object coordinate system to the checkerboard coordinate system, to obtain the first transformation matrix; 步骤5、标定相机外参、相机内参和径向畸变参数;Step 5, Calibrate camera extrinsic parameters, camera intrinsic parameters and radial distortion parameters; 步骤6、移动相机使物体与棋盘格同时在相机视野内,获取单幅图片;Step 6. Move the camera so that the object and the checkerboard are within the camera's field of view at the same time, and obtain a single picture; 步骤7、使用所述径向畸变参数修正所述单幅图片,计算棋盘格坐标系到相机坐标系的变换矩阵,得到第二变换矩阵;Step 7, using the radial distortion parameter to correct the single picture, calculating the transformation matrix from the checkerboard coordinate system to the camera coordinate system to obtain a second transformation matrix; 步骤8、根据所述第一变换矩阵和所述第二变换矩阵,计算物体坐标系到相机坐标系的变换矩阵,得到第三变换矩阵,即在当前相机视角下,基于棋盘格标定板的物体3D位姿标签;Step 8. According to the first transformation matrix and the second transformation matrix, calculate the transformation matrix from the object coordinate system to the camera coordinate system to obtain the third transformation matrix, that is, under the current camera viewing angle, the object based on the checkerboard calibration board 3D pose label; 步骤9、采样所述CAD模型,得到采样点在物体坐标系下的坐标值,为采样物体坐标值,使用所述第三变换矩阵,计算采样点在相机坐标系下的坐标值,得到采样相机坐标值;Step 9. 
Sampling the CAD model to obtain the coordinate values of the sampling points in the object coordinate system, which is the coordinate values of the sampling objects, using the third transformation matrix to calculate the coordinate values of the sampling points in the camera coordinate system to obtain the sampling camera coordinate value; 步骤10、利用物体位姿估计方法得到物体在相机坐标系下的位姿估计值,计算所述采样点在相机坐标系下的坐标值,得到采样估计坐标值,与所述采样相机坐标值对比,得到位姿估计精度指标。Step 10, using the object pose estimation method to obtain the estimated pose value of the object in the camera coordinate system, calculate the coordinate value of the sampling point in the camera coordinate system, obtain the sampled estimated coordinate value, and compare it with the sampled camera coordinate value , to get the pose estimation accuracy index. 2.如权利要求1所述的基于棋盘格标定板的物体3D位姿标签获取方法,其特征在于,所述步骤3包括:在所述物体轮廓模板中选取模板特征点,将所述模板特征点与棋盘格标定板原点对齐,计算所述模板特征点的偏移量。2. The object 3D pose label acquisition method based on checkerboard calibration board as claimed in claim 1, is characterized in that, described step 3 comprises: select template feature point in described object outline template, described template feature Points are aligned with the origin of the checkerboard calibration plate, and the offset of the feature points of the template is calculated. 3.如权利要求2所述的基于棋盘格标定板的物体3D位姿标签获取方法,其特征在于,所述第一变换矩阵为:3. the object 3D pose label acquisition method based on the checkerboard calibration board as claimed in claim 2, is characterized in that, described first transformation matrix is:
Figure FDA0003854547490000011
Figure FDA0003854547490000011
式中xf、yf、zf为所述模板特征点在物体坐标系下的坐标值,Δx、Δy是所述模板特征点到棋盘格标定板原点的偏移量,Δh是高精度棋盘格标定板的厚度,在不使用高精度标定板时其值为0。In the formula, x f , y f , and z f are the coordinate values of the template feature points in the object coordinate system, Δx, Δy are the offsets from the template feature points to the origin of the checkerboard calibration plate, and Δh is the high-precision checkerboard The thickness of the grid calibration plate, its value is 0 when no high-precision calibration plate is used.
4.如权利要求3所述的基于棋盘格标定板的物体3D位姿标签获取方法,其特征在于,所述相机内参和所述径向畸变参数分别为:4. the object 3D pose label acquisition method based on the checkerboard calibration board as claimed in claim 3, is characterized in that, described camera intrinsic parameter and described radial distortion parameter are respectively:
Figure FDA0003854547490000021
Figure FDA0003854547490000021
where f is the focal length of the industrial camera, dx is the horizontal pixel scale, dy is the vertical pixel scale, u_0 and v_0 are the principal point coordinates of the image, and k_1, k_2 are the radial distortion parameters.
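The intrinsics and radial distortion parameters of claim 4 define a pinhole projection. A sketch of that projection, assuming the standard two-coefficient radial model applied to normalized image coordinates (the patent states only the parameter names, not the projection equations):

```python
import numpy as np

def project_point(P_cam, f, dx, dy, u0, v0, k1, k2):
    """Project a 3D camera-frame point to pixel coordinates.

    Symbols follow the claim: f focal length, dx/dy pixel scales,
    (u0, v0) principal point, (k1, k2) radial distortion coefficients.
    """
    X, Y, Z = P_cam
    x, y = X / Z, Y / Z                  # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    u = (f / dx) * x * d + u0            # pixel coordinates
    v = (f / dy) * y * d + v0
    return u, v
```

With k_1 = k_2 = 0, a point on the optical axis projects exactly to the principal point (u_0, v_0).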
5. The object 3D pose label acquisition method based on a checkerboard calibration plate according to claim 4, wherein the second transformation matrix is:
T_board2camera =
[ R  t ]
[ 0  1 ]
where R is the rotation from the checkerboard coordinate system to the camera coordinate system, and t is the translation vector from the checkerboard coordinate system to the camera coordinate system.
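Claim 5's matrix is the standard homogeneous assembly of (R, t), and the next claim chains it with the inverse of the first matrix. A sketch with made-up illustrative values for R and t (not calibration output):

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform [R t; 0 1]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative extrinsics: 90-degree rotation about z, small translation.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([0.1, 0.0, 0.5])
T_board2camera = homogeneous(R, t)

# A board-to-object transform with identity rotation (illustrative values).
T_board2obj = homogeneous(np.eye(3), np.array([0.02, 0.01, 0.0]))

# Chaining as in claim 6: T_obj2camera = T_board2camera . inv(T_board2obj).
T_obj2camera = T_board2camera @ np.linalg.inv(T_board2obj)
```

The chained matrix maps object-frame coordinates directly into the camera frame, which is exactly the 3D pose label the method produces.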
6. The object 3D pose label acquisition method based on a checkerboard calibration plate according to claim 5, wherein the third transformation matrix is: T_obj2camera = T_board2camera · T_board2obj^(-1).
7. The object 3D pose label acquisition method based on a checkerboard calibration plate according to claim 6, wherein the sampled object coordinate values are:
P_obj = { P_obj^i | i = 1, 2, …, n }
where P_obj^i is the coordinate value of the i-th sampling point in the object coordinate system.
8. The object 3D pose label acquisition method based on a checkerboard calibration plate according to claim 7, wherein the sampled camera coordinate values are:
P_cam^i = T_obj2camera · P_obj^i, i = 1, 2, …, n
where P_cam^i is the coordinate value of the i-th sampling point in the camera coordinate system obtained using the third transformation matrix.
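Claim 8's mapping of each sampled point into the camera frame is a single homogeneous matrix-vector product. A sketch with an illustrative pure-translation transform (the values are made up, not from the patent):

```python
import numpy as np

def to_camera(T_obj2camera, P_obj):
    """Map an object-frame sample point into the camera frame:
    P_cam^i = T_obj2camera . P_obj^i, in homogeneous coordinates."""
    p = np.append(np.asarray(P_obj, dtype=float), 1.0)  # homogeneous point
    return (T_obj2camera @ p)[:3]

# Illustrative transform: pure translation of 0.5 m along the optical axis.
T = np.eye(4)
T[2, 3] = 0.5
P_cam = to_camera(T, [0.01, 0.02, 0.0])
```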
9. The object 3D pose label acquisition method based on a checkerboard calibration plate according to claim 8, wherein the sampled estimated coordinate values are:
P̂_cam^i = T̂_obj2camera · P_obj^i, i = 1, 2, …, n
where P̂_cam^i is the coordinate value of the i-th sampling point in the camera coordinate system calculated using the transformation matrix obtained by the pose estimation method.
10. The object 3D pose label acquisition method based on a checkerboard calibration plate according to claim 9, wherein the pose estimation accuracy index is calculated as:
e = (1/n) Σ_{i=1}^{n} ‖ P̂_cam^i − P_cam^i ‖
where P̂_cam^i and P_cam^i are, respectively, the sampled estimated coordinate value and the sampled camera coordinate value of the i-th sampling point.
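Claim 10's index compares estimated and ground-truth camera-frame sample points. A sketch assuming a mean-Euclidean (ADD-style) form, since the original formula survives only as an image:

```python
import numpy as np

def pose_accuracy(P_est, P_cam):
    """Mean 3D distance between estimated and ground-truth sample points.

    P_est, P_cam: (n, 3) arrays of sampled estimated and sampled camera
    coordinate values. The averaging form is an assumption about the
    patent's image-only formula.
    """
    P_est = np.asarray(P_est, dtype=float)
    P_cam = np.asarray(P_cam, dtype=float)
    return float(np.mean(np.linalg.norm(P_est - P_cam, axis=1)))
```

A lower value means the estimated transform reproduces the labeled pose more closely; a perfect estimate gives 0.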
CN202011309922.2A 2020-11-20 2020-11-20 Object 3D pose tag acquisition method based on checkerboard calibration plate Active CN112465898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011309922.2A CN112465898B (en) 2020-11-20 2020-11-20 Object 3D pose tag acquisition method based on checkerboard calibration plate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011309922.2A CN112465898B (en) 2020-11-20 2020-11-20 Object 3D pose tag acquisition method based on checkerboard calibration plate

Publications (2)

Publication Number Publication Date
CN112465898A CN112465898A (en) 2021-03-09
CN112465898B true CN112465898B (en) 2023-01-03

Family

ID=74798137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011309922.2A Active CN112465898B (en) 2020-11-20 2020-11-20 Object 3D pose tag acquisition method based on checkerboard calibration plate

Country Status (1)

Country Link
CN (1) CN112465898B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555889A (en) * 2019-08-27 2019-12-10 西安交通大学 CALTag and point cloud information-based depth camera hand-eye calibration method
CN110653820A (en) * 2019-09-29 2020-01-07 东北大学 Robot grabbing pose estimation method combined with geometric constraint
CN110689579A (en) * 2019-10-18 2020-01-14 华中科技大学 Rapid monocular vision pose measurement method and measurement system based on cooperative target

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2458927B (en) * 2008-04-02 2012-11-14 Eykona Technologies Ltd 3D Imaging system
CN108555908B (en) * 2018-04-12 2020-07-28 同济大学 A method for gesture recognition and picking of stacked workpieces based on RGBD cameras
CN109344882B (en) * 2018-09-12 2021-05-25 浙江科技学院 Convolutional neural network-based robot control target pose identification method
CN110375648A (en) * 2019-08-05 2019-10-25 华南农业大学 The spatial point three-dimensional coordinate measurement method that the single camera of gridiron pattern target auxiliary is realized
CN110706285A (en) * 2019-10-08 2020-01-17 中国人民解放军陆军工程大学 Object pose prediction method based on CAD model
CN111220126A (en) * 2019-11-19 2020-06-02 中国科学院光电技术研究所 Space object pose measurement method based on point features and monocular camera
CN111768447B (en) * 2020-07-01 2024-03-01 合肥哈工慧拣智能科技有限公司 Monocular camera object pose estimation method and system based on template matching

Also Published As

Publication number Publication date
CN112465898A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN110689579B (en) Rapid monocular vision pose measurement method and measurement system based on cooperative target
CN111127422B (en) Image labeling method, device, system and host
CN113920205A (en) Calibration method of non-coaxial camera
WO2021138990A1 (en) Adaptive detection method for checkerboard sub-pixel corner points
CN104333675A (en) Panoramic electronic image stabilization method based on spherical projection
CN109961485A (en) A method for target localization based on monocular vision
JP2009093611A (en) System and method for 3D object recognition
WO2024011764A1 (en) Calibration parameter determination method and apparatus, hybrid calibration board, device, and medium
CN113379845B (en) Camera calibration method and device, electronic device and storage medium
CN112815843B (en) On-line monitoring method for printing deviation of workpiece surface in 3D printing process
CN103632338B (en) A kind of image registration Evaluation Method based on match curve feature
CN109974618B (en) Global Calibration Method of Multi-sensor Vision Measurement System
CN108416385A (en) It is a kind of to be positioned based on the synchronization for improving Image Matching Strategy and build drawing method
CN110223355A (en) A kind of feature mark poiX matching process based on dual epipolar-line constraint
Su et al. A novel camera calibration method based on multilevel-edge-fitting ellipse-shaped analytical model
CN109003312A (en) A kind of camera calibration method based on nonlinear optimization
CN115861445A (en) Hand-eye calibration method based on calibration plate three-dimensional point cloud
CN115810055A (en) A Method of Cursor Calibration with Ring Structure Based on Plane Checkerboard
CN103886595A (en) Catadioptric camera self-calibration method based on generalized unified model
CN117409086A (en) An online calibration and optimization method for camera parameters
CN105787464A (en) A viewpoint calibration method of a large number of pictures in a three-dimensional scene
CN119027422B (en) Battery accessory quality detection method and system based on machine vision
CN108447092A (en) The method and device of vision positioning marker
CN110458951B (en) Modeling data acquisition method and related device for power grid pole tower
CN112465898B (en) Object 3D pose tag acquisition method based on checkerboard calibration plate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant