
CN115439540B - Pose acquisition method, device and equipment - Google Patents

Pose acquisition method, device and equipment

Info

Publication number
CN115439540B
CN115439540B (application CN202211039220.6A)
Authority
CN
China
Prior art keywords
image information
information
pose
obtaining
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211039220.6A
Other languages
Chinese (zh)
Other versions
CN115439540A (en)
Inventor
屈桢深
于周顺泽
徐雨宁
于志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen
Priority to CN202211039220.6A
Publication of CN115439540A
Application granted
Publication of CN115439540B
Legal status: Active (Current)
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145 Illumination specially adapted for pattern recognition, e.g. using gratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract


The present invention provides a pose acquisition method, device, and equipment, relating to the technical field of visual pose calculation. The pose acquisition method includes: acquiring image information generated by the visual marker; obtaining first projection image information of the first feature pattern and second projection image information of the grating region according to the position information of the first feature pattern in the image information; obtaining coarse-precision pose information of the visual marker according to the position information of the first feature pattern and the first projection image information; obtaining the center pixel position of the second feature pattern generated by the grating region according to the second projection image information; acquiring the line-of-sight angle according to the change in the center pixel position of the second feature pattern; and obtaining high-precision pose information of the visual marker according to the line-of-sight angle and the coarse-precision pose information of the visual marker. Compared with the prior art, the present invention improves the precision of pose calculation.

Description

Pose acquisition method, device and equipment
Technical Field
The invention relates to the technical field of visual pose calculation, and in particular to a pose acquisition method, device, and equipment.
Background
Pose estimation, an important component of computer vision, is the process of solving for the translation and rotation of a target coordinate system relative to a reference coordinate system. Common pose estimation technologies include GPS, inertial navigation systems, and vision. Visual pose estimation has the advantages of low cost, low system complexity, quick and easy deployment, and a high degree of automation, and is widely applied in fields such as robot control and navigation, simultaneous localization and mapping, weapon guidance, and initial alignment.
The most commonly used visual pose estimation technique is pose estimation based on feature points. With the camera intrinsic parameters determined, the coordinates of n feature points in the target coordinate system and the pixel coordinates of the corresponding points in the image pixel coordinate system are known; the coordinates of the feature points in the camera coordinate system are then computed from the perspective projection relationship, and the relative pose is solved. This approach mainly uses visual markers such as black-and-white checkerboards and circular patterns; it offers simple feature extraction, few constraints, and convenient operation, and suits most measurement scenarios. However, the feature extraction precision directly determines the precision of the final pose result. When the position changes during estimation, the feature points move substantially in the image and detection precision is high; but when the attitude changes, especially when the camera views the marker head-on, the pixel displacement of the feature points caused by angular rotation is small, the measurement error of the feature information is large, and the pose estimation result is therefore unsatisfactory.
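By way of illustration only (this is not the method claimed below), the following minimal Python/OpenCV sketch shows such conventional feature-point (PnP) pose estimation; the point coordinates, the intrinsic matrix, and the choice of the IPPE solver are assumptions made for the example.

```python
# A minimal sketch of conventional feature-point pose estimation (PnP),
# assuming known camera intrinsics and four coplanar marker points.
import numpy as np
import cv2

# Known 3D coordinates of the feature points in the target (marker) frame, in mm.
object_points = np.array([[-40, -40, 0],
                          [ 40, -40, 0],
                          [ 40,  40, 0],
                          [-40,  40, 0]], dtype=np.float64)

# Pixel coordinates of the same points detected in the image (illustrative values).
image_points = np.array([[310.2, 242.7],
                         [402.9, 240.1],
                         [405.3, 333.8],
                         [308.7, 335.5]], dtype=np.float64)

# Intrinsic matrix and distortion coefficients from a prior calibration (assumed values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Solve the perspective-n-point problem for the marker pose relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_IPPE)  # IPPE suits 4 coplanar points
R, _ = cv2.Rodrigues(rvec)                # rotation matrix (attitude)
print("R =\n", R, "\nT =", tvec.ravel())  # translation vector (position)
```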
Disclosure of Invention
The problem addressed by the invention is that existing visual-marker pose estimation methods provide insufficient precision when only small pose changes occur.
To solve the above technical problem, the present invention provides a pose acquisition method, the method being based on a pose acquisition device, the pose acquisition device including an out-of-plane grating-based visual marker, the visual marker including a grating region and a plurality of first feature patterns disposed around the grating region, and the grating region being used to generate a second feature pattern, the pose acquisition method comprising:
acquiring image information generated by the visual identifier;
Obtaining first projection image information of the first characteristic pattern and second projection image information of the grating region according to the position information of the first characteristic pattern in the image information;
Obtaining coarse precision pose information of the visual identifier according to the position information of the first characteristic pattern and the first projection image information;
Obtaining the central pixel position of a second characteristic pattern generated by the grating area according to the second projection image information;
acquiring a sight angle according to the change of the position of the central pixel of the second characteristic pattern;
And obtaining high-precision pose information of the visual identifier according to the sight angle and the coarse-precision pose information of the visual identifier.
Optionally, the acquiring the image information generated by the visual identifier includes:
Calibrating an image collector;
and acquiring image information generated by the visual identifier by using the calibrated image acquisition device.
Optionally, the obtaining the first projection image information of the first feature pattern and the second projection image information of the grating region according to the position information of the first feature pattern in the image information includes:
preprocessing the acquired image information to obtain image information with specified colors;
Extracting the image information with the specified color to obtain position information of a plurality of first characteristic patterns;
and obtaining first projection image information of the first characteristic patterns and second projection image information of the grating areas by utilizing homography transformation according to the position information of the plurality of first characteristic patterns.
Optionally, the acquired image information is color image information,
The preprocessing the obtained image information to obtain the image information with the specified color comprises the following steps:
Converting the acquired color image information into gray image information;
comparing the gray value of each image point pixel in the gray image information with a preset threshold value;
the color of each image point is set according to the result of the comparison, and image information having a specified color including foreground image information and background image information distinguished by color is obtained.
Optionally, the extracting of the image information with the specified color to obtain position information of a plurality of first feature patterns includes:
Extracting contour information of a plurality of first characteristic patterns in the foreground image information;
and acquiring the central position information of each first characteristic pattern by utilizing a centroid extraction method according to the contour information of the plurality of first characteristic patterns, and taking the central position information as the position information of the corresponding first characteristic pattern.
Optionally, the obtaining the line of sight angle according to the change of the central pixel position of the second characteristic pattern includes:
acquiring an initial position and a current position of a central pixel of the second characteristic pattern;
Acquiring the observation angle of the visual marker and the movement distance of the second characteristic pattern;
obtaining a preset value according to the observation angle and the moving distance of the second characteristic pattern;
and obtaining the sight angle according to the initial position, the current position and the preset value of the central pixel of the second characteristic pattern.
Optionally, the obtaining the high-precision pose information of the visual identifier according to the sight angle and the coarse-precision pose information of the visual identifier includes:
obtaining a rotation axis vector and a rotation angle of the image collector according to the sight angle;
And obtaining high-precision pose information of the visual marker according to the rotation axis vector, the rotation angle and the coarse-precision pose information of the visual marker.
Compared with the prior art, the pose acquisition method has the advantages that:
In the pose acquisition method, on the one hand, the second feature pattern generated by the grating region of the visual marker, such as a moire fringe, produces significant fringe movement under tiny pose changes. This overcomes the low pose sensitivity of existing pose estimation markers and methods under near-frontal viewing and effectively improves pose calculation precision. On the other hand, the high-precision pose information of the visual marker is obtained from the line-of-sight angle together with the coarse-precision pose information of the visual marker, so the computational load is small: the method involves only one conventional pose estimation (the coarse-precision estimate) and a pose correction based on the viewpoint position, which guarantees real-time performance. In addition, by using the conventional pose estimation result as the coarse-precision result, the method suppresses interference from complex environments and effectively improves the anti-interference capability of the pose acquisition equipment.
In order to solve the above technical problem, the present invention further provides a pose acquisition device, disposed on pose acquisition equipment, the pose acquisition equipment including an out-of-plane grating-based visual identifier, the visual identifier including a grating region and a plurality of first feature patterns disposed around the grating region, and the grating region being used for generating a second feature pattern, the pose acquisition device comprising:
The acquisition unit is used for acquiring the image information generated by the visual identifier;
the information processing unit is used for obtaining first projection image information of the first characteristic pattern and second projection image information of the grating area according to the position information of the first characteristic pattern in the image information;
the information processing unit is further used for obtaining coarse precision pose information of the visual identifier according to the position information of the first characteristic pattern and the first projection image information,
The information processing unit is further configured to obtain a center pixel position of a second feature pattern generated by the raster region based on the second projection image information,
The information processing unit is further used for acquiring a sight angle according to the change of the position of the central pixel of the second characteristic pattern;
and the calculating unit is used for obtaining the high-precision pose information of the visual identifier according to the sight angle and the coarse-precision pose information of the visual identifier.
Compared with the prior art, the pose acquisition device has the same advantages as the pose acquisition method described above, which are not repeated here.
In order to solve the technical problem, the invention also provides pose acquisition equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps of the pose acquisition method when executing the computer program.
Optionally, the pose acquisition device further includes:
an out-of-plane grating-based visual marker comprising a grating region and a plurality of first feature patterns disposed around the grating region, and the grating region is used to generate a second feature pattern;
the image acquisition device is used for acquiring image information generated by the visual identifier;
The visual identifier and the image collector are connected with the pose acquisition base, and the visual identifier is suitable for translation or rotation on the pose acquisition base;
The pose acquirer is used for receiving the image information acquired by the image acquirer and processing the image information to obtain the high-precision pose information of the visual identifier.
Compared with the prior art, the pose acquisition equipment has the same advantages as the pose acquisition method described above, which are not repeated here.
Drawings
FIG. 1 is a flow chart of a pose acquisition method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a pose acquisition device according to an embodiment of the present invention;
fig. 3 is a schematic structural view of a pose acquisition apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the overall structure of a visual marker according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a grating region according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of determining an accurate camera viewpoint based on a line of sight angle in an embodiment of the present invention;
Fig. 7 is a schematic diagram of posture correction from a rough viewpoint to a precise viewpoint in an embodiment of the present invention.
Reference numerals illustrate:
1, image collector; 2, pose calculating table; 3, visual marker; 4, pose acquirer; 5, first feature pattern; 6, grating region; 7, grid grating; 8, light-transmitting medium plate.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly and thoroughly described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, terms such as "some embodiments" mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
As shown in conjunction with fig. 1, an embodiment of the present invention provides a pose acquisition method, which is based on a pose acquisition device, the pose acquisition device including an out-of-plane grating-based visual marker 3, the visual marker 3 including a grating region 6 and a plurality of first feature patterns 5 disposed around the grating region 6, and the grating region 6 being used to generate a second feature pattern, the pose acquisition method including:
Step S1, obtaining image information generated by the visual identifier 3;
step S2, obtaining first projection image information of the first characteristic pattern 5 and second projection image information of the grating region 6 according to the position information of the first characteristic pattern 5 in the image information;
Step S3, coarse precision pose information of the visual identifier 3 is obtained according to the position information of the first characteristic pattern 5 and the first projection image information;
Step S4, obtaining the central pixel position of the second characteristic pattern generated by the grating region 6 according to the second projection image information;
s5, acquiring a sight angle according to the change of the position of the central pixel of the second characteristic pattern;
And S6, obtaining high-precision pose information of the visual identifier 3 according to the sight angle and the coarse-precision pose information of the visual identifier 3.
It should be noted that, in this embodiment, the visual marker 3 can perform one-dimensional translation and two-dimensional rotation. The first feature patterns 5 of the visual marker 3 are four circular patterns of the same diameter, the second feature pattern is a moire fringe, and the grating region 6 is a first rectangular region; the four circle centers all lie on the diagonals of the first rectangular region, and connecting the four centers in sequence forms a second rectangular region. The pose acquisition method of this embodiment therefore uses the moire fringe generated by the grating region 6 of the visual marker 3, which produces significant fringe movement under tiny pose changes during motion, to overcome the low pose sensitivity of existing pose estimation markers and methods under frontal viewing and to effectively improve pose calculation precision.
In some embodiments, in step S1, the acquiring the image information generated by the visual identifier 3 includes:
Step S11, calibrating the image collector 1;
And step S12, acquiring the image information generated by the visual identifier 3 by using the calibrated image acquisition device 1.
Preferably, in step S11, the image collector 1 is an industrial camera, and the industrial camera is calibrated by the following steps:
Step S111, analyzing the camera imaging process and establishing a pinhole imaging model of the camera to obtain the intrinsic and extrinsic parameter matrices of the industrial camera;
In this step, for a spatial point P, a rigid-body transformation from the world coordinate system to the camera coordinate system is applied, followed by the perspective projection onto the image plane and the transformation to the pixel coordinate system, which finally maps the spatial point P to its image point p in the pixel coordinate system; the pinhole imaging model of the camera follows from this chain of transformations:
s [u, v, 1]^T = M1 M2 [X_W, Y_W, Z_W, 1]^T,
where [u, v, 1]^T and [X_W, Y_W, Z_W, 1]^T are respectively the homogeneous coordinates of the image point p in the pixel coordinate system and of the spatial point P in the world coordinate system, and s is a scale factor.
Let
M1 = [ a_x  0    u_0
       0    a_y  v_0
       0    0    1   ],   M2 = [ R  T ],
where M1 and M2 are respectively the intrinsic and extrinsic parameter matrices of the industrial camera, dx and dy are the physical lengths of one pixel in the X and Y directions, f is the focal length, a_x = f/dx and a_y = f/dy are the scale factors of the camera along the X and Y axes, (u_0, v_0) are the pixel coordinates of the origin of the image-plane coordinate system, R is a rotation matrix representing the attitude, and T is a translation vector representing the position.
The intrinsic parameter matrix M1 is solved with Zhang Zhengyou's checkerboard calibration method:
First, a calibration plate consisting of a two-dimensional grid of squares is used and the camera captures pictures of the plate in different poses; the number of corner points, the spacing between corner points, and the pictures of the plate in its different poses are then fed into the Matlab camera calibration toolbox, from which the camera intrinsic parameter matrix M1 is computed.
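As a hedged illustration of this calibration step, the sketch below performs the same checkerboard (Zhang) calibration with OpenCV rather than the Matlab toolbox mentioned above; the board dimensions, square size, and image folder are assumptions.

```python
# A sketch of checkerboard (Zhang) calibration using OpenCV; board geometry
# and the image folder are illustrative assumptions.
import glob
import numpy as np
import cv2

pattern = (9, 6)          # inner corners per row / column (assumed)
square = 25.0             # checker square size in mm (assumed)

# 3D corner coordinates in the board frame (Z = 0 plane).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("calib/*.png"):          # pictures of the board in different poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# calibrateCamera returns the intrinsic matrix M1 and per-view extrinsics.
rms, M1, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)
print("reprojection RMS:", rms, "\nM1 =\n", M1)
```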
In step S12, acquiring the image information generated by the visual marker 3 with the calibrated image collector 1 includes acquiring the image information generated by the visual marker 3 during its one-dimensional translation and two-dimensional rotation, so that tiny pose changes of the visual marker 3 during motion produce significant fringe movement and the calculation accuracy is improved.
In some embodiments, in step S2, the obtaining the first projection image information of the first feature pattern 5 and the second projection image information of the grating region 6 according to the position information of the first feature pattern 5 in the image information includes:
Step S21, preprocessing the obtained image information to obtain image information with a specified color;
step S22, extracting the image information with the specified color to obtain the position information of a plurality of first characteristic patterns 5;
Step S23, obtaining first projection image information of the first feature pattern 5 and second projection image information of the grating region 6 by homography according to the position information of the plurality of first feature patterns 5.
Wherein the acquired image information is color image information,
In step S21, the preprocessing the acquired image information to obtain image information with a specified color includes:
step S211, converting the acquired color image information into gray image information;
Step S212, comparing the gray value of each image point pixel in the gray image information with a preset threshold value;
Step S213 of setting the color of each image point according to the result of the comparison, obtaining image information having a specified color including foreground image information and background image information distinguished by color.
The gray value of each pixel in the gray image information is obtained as a weighted sum of the brightness of the R, G, and B color channels at the corresponding pixel of the color image information.
In some preferred embodiments, the foreground image information and the background image information are distinguished as white and black; the color contrast is then obvious, which facilitates separation.
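A minimal sketch of the preprocessing in steps S211 to S213 is given below, assuming an OpenCV pipeline; the input file name and the threshold value of 128 are illustrative assumptions.

```python
# A minimal sketch of step S21: colour image -> gray image -> binary image
# whose foreground and background are distinguished by colour (white/black).
# The file name and the preset threshold are assumptions.
import cv2

frame = cv2.imread("marker_view.png")              # colour image from the image collector
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # weighted sum of the R, G, B channels
# Pixels above the preset threshold become white (foreground), the rest black (background).
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
cv2.imwrite("binary.png", binary)
```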
In some specific embodiments, in step S22, extracting the image information with the specified color to obtain the position information of the plurality of first feature patterns 5 includes:
step S221 of extracting contour information of a plurality of the first feature patterns 5 in the foreground image information;
step S222, according to the contour information of the plurality of first feature patterns 5, obtaining central position information of each first feature pattern 5 by using a centroid extraction method, and taking the central position information as the position information of the corresponding first feature pattern 5.
Specifically, a region-growing method is adopted: starting from a group of seed points, neighborhood pixels similar to the seeds are added to the current region to form a grown region. After the contour information of a first feature pattern 5 has been extracted, the first feature pattern 5 is separated from the image according to the connected region formed by the growth.
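The sketch below illustrates steps S221 and S222 under the assumption that OpenCV contour extraction can stand in for the region-growing step described above; the input file name and the minimum blob area are assumptions.

```python
# A sketch of steps S221-S222: extract the contours of the circular patterns
# from the binary image and take each centroid as the pattern position.
import cv2

binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

centers = []
for cnt in contours:
    m = cv2.moments(cnt)
    if m["m00"] > 50:                          # ignore small noise blobs (assumed area)
        centers.append((m["m10"] / m["m00"],   # centroid x of the first feature pattern
                        m["m01"] / m["m00"]))  # centroid y of the first feature pattern
print(centers)
```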
Specifically, in step S23, the homography transformation relates the homogeneous pixel coordinates p of a point in the captured image to the coordinates p' of the corresponding point in the projected image by p' ∝ H p, where H is a 3x3 homography matrix:
After the pixel positions of the four first feature patterns 5, namely the circular patterns, have been determined, the transformed four-point coordinates are set to match the actual size proportions, so that the homography matrix can be solved. This yields the first projection image information of the four circular patterns, and the grating region 6 enclosed by the four circles is warped into a square image, giving the second projection image information of the grating region 6, which makes it convenient to subsequently obtain the actual position of the center pixel of the second feature pattern relative to the marker.
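The following sketch illustrates step S23, assuming the four circle centers are already known and ordered; the pixel coordinates, the output square size, and the file names are illustrative assumptions.

```python
# A sketch of step S23: map the four detected circle centres to their known
# metric layout with a homography, then rectify the grating region to a square image.
import numpy as np
import cv2

# Detected circle centres in the image (pixel coordinates, assumed order TL, TR, BR, BL).
src = np.array([[305.1, 238.4], [412.6, 240.2], [410.9, 347.0], [303.8, 344.7]],
               dtype=np.float32)
# Target coordinates matching the real size proportion of the marker (assumed side length).
side = 400
dst = np.array([[0, 0], [side, 0], [side, side], [0, side]], dtype=np.float32)

H = cv2.getPerspectiveTransform(src, dst)                 # homography from 4 point pairs
frame = cv2.imread("marker_view.png")
rectified = cv2.warpPerspective(frame, H, (side, side))   # second projection image
cv2.imwrite("grating_rectified.png", rectified)
```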
In some preferred embodiments, in step S4, obtaining the center pixel position of the second feature pattern generated by the grating region 6 according to the second projection image information includes:
Step S41, preprocessing the acquired second projection image information to obtain projection image information with a specified color;
Step S42, extracting the image of the projection image information with the specified color to obtain the outline information of the second characteristic pattern;
Step S43, obtaining the center pixel position of the second feature pattern by using the zero-order moment and the first moment of the image.
In step S42, since the second projection image information also contains quarter-circle patterns at its four corners, a mask is applied during extraction so that the resulting contour information contains only the second feature pattern, i.e. the moire fringe. This approach is simple.
Wherein, in step S43,
the zero-order moment of the image (the sum of the pixel values over the image) is calculated by the following formula:
M00 = Σ_x Σ_y I(x, y),
the first-order moments of the image (the pixel values weighted by the x and y coordinates of the points) are calculated by the following formulas:
M10 = Σ_x Σ_y x I(x, y),   M01 = Σ_x Σ_y y I(x, y),
and the centroid of the binary image, namely the center pixel position of the second characteristic pattern, is obtained from these moments:
x_c = M10 / M00,   y_c = M01 / M00,
where x_c and y_c are the x- and y-coordinates of the center pixel of the second characteristic pattern and I(x, y) is the value of the pixel at (x, y).
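A sketch of steps S41 to S43 is given below, assuming the rectified grating image from the previous step; the threshold value and the corner-mask radius are assumptions.

```python
# A sketch of steps S41-S43: threshold the rectified grating image, mask out
# the quarter-circles at the corners, and locate the moire-fringe centre from
# the zero-order and first-order image moments.
import numpy as np
import cv2

rectified = cv2.imread("grating_rectified.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(rectified, 100, 255, cv2.THRESH_BINARY_INV)  # dark fringe -> white

# Mask off the corner quarter-circles so that only the fringe remains (radius assumed).
mask = np.full_like(binary, 255)
r = 60
for cx, cy in [(0, 0), (binary.shape[1], 0), (0, binary.shape[0]),
               (binary.shape[1], binary.shape[0])]:
    cv2.circle(mask, (cx, cy), r, 0, -1)
fringe = cv2.bitwise_and(binary, mask)

m = cv2.moments(fringe, binaryImage=True)              # M00, M10, M01, ...
xc, yc = m["m10"] / m["m00"], m["m01"] / m["m00"]      # centre pixel of the fringe
print(xc, yc)
```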
In a preferred embodiment, step S5, the obtaining the line of sight angle according to the change of the central pixel position of the second feature pattern includes:
Step S51, obtaining the initial position and the current position of the central pixel of the second characteristic pattern;
step S52, acquiring the observation angle of the visual marker 3 and the movement distance of the second characteristic pattern;
step S53, obtaining a preset value according to the movement distance of the observation angle and the second characteristic pattern;
and step S54, obtaining the sight angle according to the initial position, the current position and the preset value of the central pixel of the second characteristic pattern.
In this embodiment, the viewpoint is set as the industrial camera position, the line of sight is a straight line connecting the industrial camera viewpoint and the center of the cross moire fringe, the line of sight angle is an angle between the line of sight and the perpendicular to the plane of the visual marker 3, and the initial position of the cross moire fringe is the relative position of the center of the cross moire fringe when the line of sight angle from the industrial camera to the cross moire fringe is 0.
When the visual marker 3 moves, the cross moire fringe shifts accordingly; at this time, the line-of-sight angle is linearly related to the relative movement distance of the fringe, so the displacement of the fringe center from its initial position, scaled by the slope k, gives the line-of-sight angle θ. Here x and y are the current coordinates of the center pixel of the second feature pattern, x_0 and y_0 are its initial coordinates, and the slope k is obtained by calibration with a high-precision turntable, that is, from the observation angle and the corresponding movement distance of the second feature pattern.
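The sketch below illustrates steps S51 to S54 under the stated linear model; treating the two axes as independent angle components and the numeric value of the slope k are assumptions, since the text only specifies that the relationship is linear with a slope k calibrated on a high-precision turntable.

```python
# A hedged sketch of steps S51-S54: the fringe-centre displacement is assumed to
# map linearly to the line-of-sight angle with slope k; treating the two axes as
# independent components is an assumption.
import math

def sight_angle(x, y, x0, y0, k):
    """Return the line-of-sight angle components and magnitude (radians)."""
    theta_x = k * (x - x0)          # component from the horizontal fringe shift
    theta_y = k * (y - y0)          # component from the vertical fringe shift
    return theta_x, theta_y, math.hypot(theta_x, theta_y)

# Example with illustrative numbers: fringe centre moved from (200, 200) to (212, 197),
# slope k assumed from turntable calibration.
print(sight_angle(212.0, 197.0, 200.0, 200.0, k=0.004))
```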
In a preferred embodiment, in step S6, the obtaining the high-precision pose information of the visual identifier 3 according to the line of sight angle and the coarse-precision pose information of the visual identifier 3 includes:
Step S61, obtaining a rotation axis vector and a rotation angle of the image collector 1 according to the line of sight angle;
Step S62, obtaining high-precision pose information of the visual marker 3 according to the rotation axis vector, the rotation angle and the rough-precision pose information of the visual marker 3.
In this embodiment, the coarse-precision viewpoint of the industrial camera is rotated to a high-precision position with the center of the visual identifier as the center, as shown in fig. 7.
Specifically, the rotation axis vector is determined from the following quantities: P and P_C are respectively the coarse-precision and high-precision position coordinates of the industrial camera, O is the center coordinate of the marker, OP and OP_C are respectively the vectors from the center of the visual marker 3 to the coarse-precision and high-precision camera positions, a is the rotation axis vector, and the modulus of the axis vector equals the rotation angle ρ.
In this embodiment, the coarse precision position coordinates of the industrial camera may be obtained from the coarse precision pose information of the visual marker 3.
In addition, in the present embodiment, since the distance d from the industrial camera to the center of the visual marker 3 is known, the triangle formed by the industrial camera, the center of the visual marker 3, and the center of the current fringe can be constructed, as shown in fig. 6, and the high-precision position coordinates P_C of the industrial camera can be obtained.
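The following sketch illustrates one way to realize step S61 from the quantities defined above, assuming the rotation axis a is taken along the cross product of OP and OP_C with its modulus set to the angle ρ between them; the coordinates are illustrative.

```python
# A sketch of step S61 under the interpretation above: the rotation axis a is
# perpendicular to both OP and OPc (their cross product) and its modulus is set
# to the angle rho between them. All coordinates are illustrative assumptions.
import numpy as np

O  = np.array([0.0, 0.0, 0.0])          # marker centre
P  = np.array([0.10, 0.02, 1.00])       # coarse-precision camera position
Pc = np.array([0.12, 0.01, 0.99])       # high-precision camera position from the sight angle

OP, OPc = P - O, Pc - O
cos_rho = np.dot(OP, OPc) / (np.linalg.norm(OP) * np.linalg.norm(OPc))
rho = np.arccos(np.clip(cos_rho, -1.0, 1.0))     # rotation angle
axis = np.cross(OP, OPc)
axis = axis / np.linalg.norm(axis)               # unit rotation axis
a = rho * axis                                   # axis vector whose modulus equals rho
print("rho =", rho, "a =", a)
```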
Specifically, in step S62, high-precision pose information of the visual marker 3 is obtained from the rotation axis vector, the rotation angle, and coarse-precision pose information of the visual marker 3, including:
Step S621, obtaining a homogeneous transformation matrix H in the rotation process according to the rotation axis vector and the rotation angle;
step S622, obtaining high-precision pose information of the visual identifier 3 according to the homogeneous transformation matrix H and the coarse-precision pose information of the visual identifier 3.
In step S621, the homogeneous transformation matrix H is built from the rotation axis vector and the rotation angle using the axis-angle rotation formula,
where ρ is the rotation angle and the shorthand
c = cos ρ, s = sin ρ, C = 1 - cos ρ
is used.
In step S622, the high-precision pose information H_C is obtained according to the following calculation formula:
H_C = H_W H,
where H_W is the coarse-precision pose information of the visual marker 3; the extrinsic parameter matrix of the industrial camera can then be obtained from the high-precision pose information H_C.
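A sketch of steps S621 and S622 is given below, assuming the standard axis-angle (Rodrigues) rotation formula for H with the shorthand c, s, C defined above and a pure rotational correction; the placeholder H_W and the axis vector are illustrative.

```python
# A sketch of steps S621-S622: build the rotation from the axis-angle vector
# (c = cos rho, s = sin rho, C = 1 - cos rho) and compose it with the
# coarse-precision pose H_W to get H_C = H_W @ H. Values are placeholders.
import numpy as np

def axis_angle_to_homogeneous(a):
    """4x4 homogeneous transform for a rotation by |a| about the unit axis a/|a|."""
    rho = np.linalg.norm(a)
    ax, ay, az = a / rho
    c, s, C = np.cos(rho), np.sin(rho), 1.0 - np.cos(rho)
    R = np.array([[ax*ax*C + c,    ax*ay*C - az*s, ax*az*C + ay*s],
                  [ax*ay*C + az*s, ay*ay*C + c,    ay*az*C - ax*s],
                  [ax*az*C - ay*s, ay*az*C + ax*s, az*az*C + c   ]])
    H = np.eye(4)
    H[:3, :3] = R            # pure rotational correction (assumption)
    return H

H_W = np.eye(4)                               # coarse-precision pose (placeholder)
a = np.array([0.0, 0.02, 0.01])               # axis vector from the previous step
H_C = H_W @ axis_angle_to_homogeneous(a)      # high-precision pose H_C = H_W * H
print(H_C)
```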
Therefore, the pose acquisition method of this embodiment uses the second feature pattern, such as the moire fringe generated by the grating region 6 of the visual marker 3, which produces significant fringe movement under tiny pose changes; this overcomes the low pose sensitivity of existing pose estimation markers and methods under frontal viewing and effectively improves pose calculation precision. In addition, the high-precision pose information of the visual marker 3 is obtained from the line-of-sight angle and the coarse-precision pose information of the visual marker 3, so the computational load is small: the method involves only one conventional pose estimation (the coarse-precision estimate) and a pose correction based on the viewpoint position, which guarantees real-time performance.
As shown in conjunction with fig. 2, another embodiment of the present invention further provides a pose acquisition apparatus provided on a pose acquisition device, the pose acquisition device including an out-of-plane grating-based visual marker 3, the visual marker 3 including a grating region 6 and a plurality of first feature patterns 5 disposed around the grating region 6, and the grating region 6 for generating a second feature pattern, the pose acquisition apparatus comprising:
An acquisition unit 210, wherein the acquisition unit 210 is used for acquiring the image information generated by the visual identifier 3;
An information processing unit 220, where the information processing unit 220 is configured to obtain first projection image information of the first feature pattern 5 and second projection image information of the grating region 6 according to position information of the first feature pattern 5 in the image information;
The information processing unit 220 is further configured to obtain coarse precision pose information of the visual marker 3 based on the position information of the first feature pattern 5 and the first projection image information,
The information processing unit 220 is further configured to obtain a center pixel position of a second feature pattern generated by the raster region 6 based on the second projection image information,
The information processing unit 220 is further configured to obtain a line of sight angle according to a change of a center pixel position of the second feature pattern;
A calculating unit 230, wherein the calculating unit 230 is configured to obtain high-precision pose information of the visual identifier 3 according to the line-of-sight angle and coarse-precision pose information of the visual identifier 3.
Specifically, the acquisition unit 210 is configured to acquire the image information generated by the visual marker 3, including:
Calibrating the image collector 1;
and acquiring the image information generated by the visual identifier 3 by using the calibrated image acquisition device 1.
Specifically, the information processing unit 220 is configured to obtain, according to the position information of the first feature pattern 5 in the image information, first projection image information of the first feature pattern 5 and second projection image information of the grating region 6, including:
preprocessing the acquired image information to obtain image information with specified colors;
Extracting the image information with the specified color to obtain position information of a plurality of first characteristic patterns 5;
based on the position information of the plurality of first feature patterns 5, the first projection image information of the first feature patterns 5 and the second projection image information of the grating region 6 are obtained by homography.
Specifically, the information processing unit 220 is further configured to obtain the line-of-sight angle according to the change of the central pixel position of the second feature pattern, including:
acquiring an initial position and a current position of a central pixel of the second characteristic pattern;
acquiring the observation angle of the visual marker 3 and the movement distance of the second characteristic pattern;
obtaining a preset value according to the observation angle and the moving distance of the second characteristic pattern;
and obtaining the sight angle according to the initial position, the current position and the preset value of the central pixel of the second characteristic pattern.
Specifically, the calculating unit 230 is configured to obtain high-precision pose information of the visual identifier 3 according to the line-of-sight angle and coarse-precision pose information of the visual identifier 3, and includes:
The calculating unit 230 is configured to obtain a rotation axis vector and a rotation angle of the image collector 1 according to the line of sight angle;
the calculating unit 230 is configured to obtain high-precision pose information of the visual marker 3 according to the rotation axis vector, the rotation angle, and coarse-precision pose information of the visual marker 3.
Compared with the prior art, the pose acquisition device has the same advantages as the pose acquisition method described above, which are not repeated here.
Another embodiment of the present invention also provides a pose acquisition apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the pose acquisition method when executing the computer program.
As shown in fig. 3 to 5, in this embodiment, the pose acquisition apparatus further includes:
An out-of-plane grating-based visual marker 3, the visual marker 3 comprising a grating region 6 and a plurality of first feature patterns 5 arranged around the grating region 6, and the grating region 6 being for generating a second feature pattern;
the image acquisition device 1 is used for acquiring image information generated by the visual identifier 3;
the visual identifier 3 and the image collector 1 are connected with the pose acquisition base, and the visual identifier 3 is suitable for translation or rotation on the pose acquisition base;
The pose acquirer 4 is configured to receive the image information acquired by the image acquirer 1, and process the image information to obtain high-precision pose information of the visual identifier 3.
In a preferred embodiment, the image collector 1 is an industrial camera, which is low in cost.
In some embodiments, the visual identifier 3 can perform one-dimensional translation and two-dimensional rotation, and includes a substrate, on which a grating area 6 and a plurality of first feature patterns 5 surrounding the grating area 6 are disposed, where the plurality of first feature patterns 5 are four circular patterns with the same diameter, the grating area 6 is a first rectangular area, four circle centers of the four circular patterns are all on diagonal lines of the first rectangular area, and the circle centers of the four circular patterns are sequentially connected to form a second rectangular area. The structure is simple.
Preferably, the substrate is made of organic glass, so that the cost is low and the material is easy to obtain.
In some embodiments, the grating region 6 comprises two layers of grid gratings 7 and a light-transmitting medium plate 8 arranged on the two grating layers for quality inspection. This structure generates a moire fringe far larger than the mesh size, in the form of a large dark cross, which undergoes large relative movement in response to small changes of the pose.
Compared with the prior art, the pose acquisition equipment has the same advantages as the pose acquisition method described above, which are not repeated here.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of additional identical elements in a process, method, article, or apparatus that comprises the element.
Although the present disclosure is described above, the scope of protection of the present disclosure is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the disclosure, and these changes and modifications will fall within the scope of the invention.

Claims (10)

1. A pose acquisition method, characterized in that the method is based on a pose acquisition device comprising an out-of-plane grating based visual marker (3), the visual marker (3) comprising a grating region (6) and a plurality of first feature patterns (5) arranged around the grating region (6), and the grating region (6) being used to generate a second feature pattern, the method comprising:
Acquiring image information generated by the visual identifier (3);
According to the position information of the first characteristic pattern (5) in the image information, obtaining first projection image information of the first characteristic pattern (5) and second projection image information of the grating region (6);
obtaining coarse precision pose information of the visual identifier (3) according to the position information of the first characteristic pattern (5) and the first projection image information;
obtaining a center pixel position of a second characteristic pattern generated by the grating region (6) according to the second projection image information;
acquiring a sight angle according to the change of the position of the central pixel of the second characteristic pattern;
And obtaining high-precision pose information of the visual identifier (3) according to the sight angle and the coarse-precision pose information of the visual identifier (3).
2. The pose acquisition method according to claim 1, characterized in that said acquiring image information generated by said visual marker (3) comprises:
calibrating the image collector (1);
and acquiring image information generated by the visual identifier (3) by using the calibrated image acquisition device (1).
3. The pose acquisition method according to claim 1, wherein the obtaining of the first projected image information of the first feature pattern (5) and the second projected image information of the grating region (6) from the position information of the first feature pattern (5) in the image information includes:
preprocessing the acquired image information to obtain image information with specified colors;
Extracting the image information with the specified color to obtain position information of a plurality of first characteristic patterns (5);
And obtaining first projection image information of the first characteristic patterns (5) and second projection image information of the grating areas (6) by utilizing homography transformation according to the position information of a plurality of the first characteristic patterns (5).
4. The method according to claim 3, wherein the acquired image information is color image information,
The preprocessing the obtained image information to obtain the image information with the specified color comprises the following steps:
Converting the acquired color image information into gray image information;
comparing the gray value of each image point pixel in the gray image information with a preset threshold value;
the color of each image point is set according to the result of the comparison, and image information having a specified color including foreground image information and background image information distinguished by color is obtained.
5. The pose acquisition method according to claim 4, wherein said extracting the image information having the specified color to obtain the position information of the plurality of the first feature patterns (5) comprises:
extracting contour information of a plurality of the first feature patterns (5) in the foreground image information;
According to the contour information of the plurality of first characteristic patterns (5), center position information of each first characteristic pattern (5) is obtained by utilizing a centroid extraction method, and the center position information is used as position information of the corresponding first characteristic pattern (5).
6. The pose acquisition method according to claim 1, wherein the acquiring the line of sight angle from the change in the center pixel position of the second feature pattern includes:
acquiring an initial position and a current position of a central pixel of the second characteristic pattern;
Acquiring the observation angle of the visual marker (3) and the movement distance of the second characteristic pattern;
obtaining a preset value according to the observation angle and the moving distance of the second characteristic pattern;
and obtaining the sight angle according to the initial position, the current position and the preset value of the central pixel of the second characteristic pattern.
7. The pose acquisition method according to claim 2, wherein the obtaining high-precision pose information of the visual marker (3) from the line-of-sight angle and coarse-precision pose information of the visual marker (3) includes:
obtaining a rotation axis vector and a rotation angle of the image collector (1) according to the sight angle;
and obtaining high-precision pose information of the visual marker (3) according to the rotation axis vector, the rotation angle and the coarse-precision pose information of the visual marker (3).
8. Pose acquisition device, characterized in that it is arranged on pose acquisition equipment, the pose acquisition equipment comprising an out-of-plane grating based visual marker (3), said visual marker (3) comprising a grating area (6) and a plurality of first feature patterns (5) arranged around said grating area (6), and said grating area (6) being used for generating a second feature pattern, said pose acquisition device comprising:
An acquisition unit for acquiring image information generated by the visual marker (3);
An information processing unit for obtaining first projected image information of the first feature pattern (5) and second projected image information of the grating region (6) according to position information of the first feature pattern (5) in the image information;
The information processing unit is further used for obtaining coarse precision pose information of the visual identifier (3) according to the position information of the first characteristic pattern (5) and the first projection image information,
The information processing unit is further adapted to obtain a center pixel position of a second feature pattern generated by the raster region (6) from the second projection image information,
The information processing unit is further used for acquiring a sight angle according to the change of the position of the central pixel of the second characteristic pattern;
The computing unit is used for obtaining high-precision pose information of the visual identifier (3) according to the sight angle and the rough-precision pose information of the visual identifier (3).
9. A pose acquisition device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the pose acquisition method according to any of claims 1 to 7 when the computer program is executed by the processor.
10. The pose acquisition device of claim 9, further comprising:
an out-of-plane grating-based visual marker (3), the visual marker (3) comprising a grating region (6) and a plurality of first feature patterns (5) arranged around the grating region (6), and the grating region (6) being for generating a second feature pattern;
The image acquisition device (1) is used for acquiring image information generated by the visual identifier (3);
the visual identifier (3) and the image collector (1) are connected with the pose acquisition base, and the visual identifier (3) is suitable for translation or rotation on the pose acquisition base;
The pose acquirer (4) is used for receiving the image information acquired by the image acquirer (1) and processing the image information to obtain high-precision pose information of the visual identifier (3).
CN202211039220.6A 2022-08-29 2022-08-29 Pose acquisition method, device and equipment Active CN115439540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211039220.6A CN115439540B (en) 2022-08-29 2022-08-29 Pose acquisition method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211039220.6A CN115439540B (en) 2022-08-29 2022-08-29 Pose acquisition method, device and equipment

Publications (2)

Publication Number Publication Date
CN115439540A CN115439540A (en) 2022-12-06
CN115439540B (en) 2025-08-15

Family

ID=84243945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211039220.6A Active CN115439540B (en) 2022-08-29 2022-08-29 Pose acquisition method, device and equipment

Country Status (1)

Country Link
CN (1) CN115439540B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) High-precision 3D pose estimation method and system for parts based on multi-2D vision
CN112700501A (en) * 2020-12-12 2021-04-23 西北工业大学 Underwater monocular sub-pixel relative pose estimation method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109029257B (en) * 2018-07-12 2020-11-06 中国科学院自动化研究所 Large-scale workpiece pose measurement system and method based on stereoscopic vision and structured light vision
CN112074868A (en) * 2018-12-29 2020-12-11 河南埃尔森智能科技有限公司 Industrial robot positioning method and device based on structured light, controller and medium
CN112097689B (en) * 2020-09-11 2022-02-22 苏州中科全象智能科技有限公司 Calibration method of 3D structured light system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) High-precision 3D pose estimation method and system for parts based on multi-2D vision
CN112700501A (en) * 2020-12-12 2021-04-23 西北工业大学 Underwater monocular sub-pixel relative pose estimation method

Also Published As

Publication number Publication date
CN115439540A (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN110021046B (en) External parameter calibration method and system for camera and laser radar combined sensor
Zhang et al. A robust and rapid camera calibration method by one captured image
US9495749B2 (en) Method and system for detecting pose of marker
CN104735444B (en) The system and method for executing vision system plane hand-eye calibration according to linear feature
CN109272555B (en) A method of obtaining and calibrating external parameters of RGB-D camera
Tanaka et al. A high-accuracy visual marker based on a microlens array
CN109443209A (en) A kind of line-structured light system calibrating method based on homography matrix
CN110763204B (en) Planar coding target and pose measurement method thereof
CN105066962B (en) A kind of high-precision photogrammetric apparatus of the big angle of visual field of multiresolution
CN106548489A (en) The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
JP2007256091A (en) Method and apparatus for calibrating range finder
CN111768453A (en) Navigation and positioning device and method in spacecraft cluster ground simulation system
CN109448043A (en) Standing tree height extracting method under plane restriction
CN112229323A (en) Six degrees of freedom measurement method of checkerboard cooperation target based on monocular vision of mobile phone and its application
CN106971408A (en) A kind of camera marking method based on space-time conversion thought
CN112767494A (en) Precise measurement positioning method based on calibration algorithm
CN115201883A (en) Moving target video positioning and speed measuring system and method
CN115810055A (en) A Method of Cursor Calibration with Ring Structure Based on Plane Checkerboard
JP2006098065A (en) Calibration device and method, and three-dimensional modelling device and system capable of using the same
Boochs et al. Increasing the accuracy of untaught robot positions by means of a multi-camera system
CN112050752B (en) Projector calibration method based on secondary projection
Genovese Single-image camera calibration with model-free distortion correction
CN113989368B (en) A method and system for high-precision positioning of object surface
Zexiao et al. A novel approach for the field calibration of line structured-light sensors
CN115439540B (en) Pose acquisition method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant