Disclosure of Invention
The problem to be solved by the invention is that existing visual-marker pose estimation methods provide insufficient estimation precision when only small pose changes occur.
To solve the above technical problem, the present invention provides a pose acquisition method based on pose acquisition equipment, the equipment including an out-of-plane grating-based visual marker, the visual marker including a grating region and a plurality of first feature patterns disposed around the grating region, the grating region being used to generate a second feature pattern, the pose acquisition method comprising:
acquiring image information generated by the visual marker;
obtaining first projection image information of the first feature pattern and second projection image information of the grating region according to the position information of the first feature pattern in the image information;
obtaining coarse-precision pose information of the visual marker according to the position information of the first feature pattern and the first projection image information;
obtaining the center pixel position of the second feature pattern generated by the grating region according to the second projection image information;
obtaining a line-of-sight angle according to the change in the center pixel position of the second feature pattern; and
obtaining high-precision pose information of the visual marker according to the line-of-sight angle and the coarse-precision pose information of the visual marker.
Optionally, the acquiring the image information generated by the visual marker includes:
calibrating an image collector; and
acquiring the image information generated by the visual marker by using the calibrated image collector.
Optionally, the obtaining the first projection image information of the first feature pattern and the second projection image information of the grating region according to the position information of the first feature pattern in the image information includes:
preprocessing the acquired image information to obtain image information with a specified color;
extracting the image information with the specified color to obtain position information of a plurality of first feature patterns; and
obtaining the first projection image information of the first feature patterns and the second projection image information of the grating region by homography transformation according to the position information of the plurality of first feature patterns.
Optionally, the acquired image information is color image information, and the preprocessing the acquired image information to obtain the image information with the specified color includes:
converting the acquired color image information into grayscale image information;
comparing the gray value of each pixel in the grayscale image information with a preset threshold; and
setting the color of each pixel according to the result of the comparison, to obtain image information with the specified color, the image information including foreground image information and background image information distinguished by color.
Optionally, the extracting the image information with the specified color to obtain the position information of the plurality of first feature patterns includes:
extracting contour information of the plurality of first feature patterns in the foreground image information; and
obtaining the center position information of each first feature pattern by a centroid extraction method according to the contour information of the plurality of first feature patterns, and taking the center position information as the position information of the corresponding first feature pattern.
Optionally, the obtaining the line-of-sight angle according to the change in the center pixel position of the second feature pattern includes:
acquiring an initial position and a current position of the center pixel of the second feature pattern;
acquiring an observation angle of the visual marker and a movement distance of the second feature pattern;
obtaining a preset value according to the observation angle and the movement distance of the second feature pattern; and
obtaining the line-of-sight angle according to the initial position and the current position of the center pixel of the second feature pattern and the preset value.
Optionally, the obtaining the high-precision pose information of the visual marker according to the line-of-sight angle and the coarse-precision pose information of the visual marker includes:
obtaining a rotation axis vector and a rotation angle of the image collector according to the line-of-sight angle; and
obtaining the high-precision pose information of the visual marker according to the rotation axis vector, the rotation angle, and the coarse-precision pose information of the visual marker.
Compared with the prior art, the pose acquisition method has the following advantages:
On the one hand, the second feature pattern generated by the grating region of the visual marker, such as moire fringes, produces a significant fringe position shift in response to a tiny pose change, which overcomes the low pose sensitivity of existing pose estimation markers and methods under near-frontal (orthoscopic) viewing and effectively improves pose calculation precision. On the other hand, the high-precision pose information of the visual marker is obtained from the line-of-sight angle and the coarse-precision pose information of the visual marker, so the computational cost of pose calculation is small: the method involves only one conventional pose estimation (the coarse-precision estimation) and one pose correction based on the viewpoint position, which meets real-time requirements. In addition, by taking the conventional pose estimation result only as the coarse-precision result, the method suppresses interference from complex environments and effectively improves the anti-interference performance of the pose acquisition equipment.
In order to solve the above technical problem, the present invention further provides a pose acquisition apparatus disposed on pose acquisition equipment, the equipment including an out-of-plane grating-based visual marker, the visual marker including a grating region and a plurality of first feature patterns disposed around the grating region, the grating region being used to generate a second feature pattern, the pose acquisition apparatus comprising:
an acquisition unit, configured to acquire image information generated by the visual marker;
an information processing unit, configured to obtain first projection image information of the first feature pattern and second projection image information of the grating region according to the position information of the first feature pattern in the image information;
the information processing unit being further configured to obtain coarse-precision pose information of the visual marker according to the position information of the first feature pattern and the first projection image information,
the information processing unit being further configured to obtain the center pixel position of the second feature pattern generated by the grating region according to the second projection image information,
the information processing unit being further configured to obtain the line-of-sight angle according to the change in the center pixel position of the second feature pattern; and
a calculation unit, configured to obtain high-precision pose information of the visual marker according to the line-of-sight angle and the coarse-precision pose information of the visual marker.
Compared with the prior art, the pose acquisition apparatus has the same advantages as the pose acquisition method described above, which are not repeated here.
In order to solve the above technical problem, the invention further provides pose acquisition equipment, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the above pose acquisition method when executing the computer program.
Optionally, the pose acquisition equipment further includes:
an out-of-plane grating-based visual marker, the visual marker including a grating region and a plurality of first feature patterns disposed around the grating region, the grating region being used to generate a second feature pattern;
an image collector, configured to acquire image information generated by the visual marker;
a pose acquisition base, to which the visual marker and the image collector are connected, the visual marker being adapted to translate or rotate on the pose acquisition base; and
a pose acquirer, configured to receive the image information acquired by the image collector and process the image information to obtain the high-precision pose information of the visual marker.
Compared with the prior art, the pose acquisition equipment has the same advantages as the pose acquisition method described above, which are not repeated here.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly and thoroughly described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, reference to "some embodiments" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above term do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
As shown in fig. 1, an embodiment of the present invention provides a pose acquisition method based on pose acquisition equipment, the equipment including an out-of-plane grating-based visual marker 3, the visual marker 3 including a grating region 6 and a plurality of first feature patterns 5 disposed around the grating region 6, the grating region 6 being used to generate a second feature pattern, the pose acquisition method including:
Step S1, acquiring image information generated by the visual marker 3;
Step S2, obtaining first projection image information of the first feature patterns 5 and second projection image information of the grating region 6 according to the position information of the first feature patterns 5 in the image information;
Step S3, obtaining coarse-precision pose information of the visual marker 3 according to the position information of the first feature patterns 5 and the first projection image information;
Step S4, obtaining the center pixel position of the second feature pattern generated by the grating region 6 according to the second projection image information;
Step S5, obtaining a line-of-sight angle according to the change in the center pixel position of the second feature pattern; and
Step S6, obtaining high-precision pose information of the visual marker 3 according to the line-of-sight angle and the coarse-precision pose information of the visual marker 3.
It should be noted that, in this embodiment, the visual marker 3 can perform one-dimensional translation and two-dimensional rotation. The first feature patterns 5 of the visual marker 3 are four circular patterns with the same diameter, the second feature pattern is a moire fringe pattern, and the grating region 6 is a first rectangular region; the four centers of the circular patterns lie on the diagonals of the first rectangular region, and connecting the four circle centers in sequence forms a second rectangular region. Therefore, the pose acquisition method of this embodiment uses the moire fringes generated by the grating region 6 of the visual marker 3 to produce a significant fringe position shift for a tiny pose change of the visual marker 3 during motion, overcoming the low pose sensitivity of existing pose estimation markers and methods under near-frontal (orthoscopic) viewing and effectively improving pose calculation precision.
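For orientation, steps S1 to S6 can be read as one processing chain, outlined in the Python sketch below. The helper names follow the code sketches given later in this section; solve_coarse_pose and refine_pose are placeholders for the conventional pose estimation and the viewpoint-based correction, and all names are illustrative assumptions rather than published code.

```python
# Illustrative outline of steps S1-S6; helper names are assumptions, not part of the embodiment.
def estimate_pose(camera, marker):
    image = camera.capture()                                   # S1: image of the visual marker 3
    binary = preprocess(image)                                 # S2: specified-color (binary) image
    circle_centers = first_feature_centers(binary)             #     positions of the four circular patterns
    warped = rectify_grating(image, circle_centers)            #     second projection image of the grating region 6
    coarse_pose = solve_coarse_pose(circle_centers, marker)    # S3: conventional (coarse-precision) pose estimate
    fringe_center = moire_center(warped)                       # S4: center pixel of the moire fringe
    theta = sight_angle(fringe_center, marker.initial_center, marker.slope_k)  # S5: line-of-sight angle
    return refine_pose(coarse_pose, theta, marker)             # S6: high-precision pose
```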
In some embodiments, in step S1, the acquiring the image information generated by the visual marker 3 includes:
Step S11, calibrating the image collector 1; and
Step S12, acquiring the image information generated by the visual marker 3 by using the calibrated image collector 1.
Preferably, in step S11, the image collector 1 is an industrial camera, and the industrial camera is calibrated by the following steps:
Step S111, analyzing the camera imaging process and establishing a pinhole imaging model of the camera to obtain the intrinsic parameter matrix and the extrinsic parameter matrix of the industrial camera;
In this step, for a spatial point P, a rigid-body transformation from the world coordinate system to the camera coordinate system is applied, followed by a perspective projection onto the image plane and a scaling-and-translation transformation into the pixel coordinate system, so that the spatial point P is finally mapped to an image point p in the pixel coordinate system. From this chain of transformations the pinhole imaging model of the camera is obtained:
ZC · [u, v, 1]T = M1 · M2 · [XW, YW, ZW, 1]T,
wherein [u, v, 1] and [XW, YW, ZW, 1] are the homogeneous coordinates of the image point p in the pixel coordinate system and of the spatial point P in the world coordinate system, respectively, and ZC is the depth of the point in the camera coordinate system.
Let
M1 = [ax 0 u0; 0 ay v0; 0 0 1], with ax = f/dx and ay = f/dy, and M2 = [R T],
wherein M1 and M2 are respectively the intrinsic and extrinsic parameter matrices of the industrial camera; dx and dy are the physical lengths of one pixel in the x and y directions; f is the focal length; ax and ay are the scale factors of the camera along the x and y axes; (u0, v0) are the coordinates of the origin of the image plane coordinate system in the pixel coordinate system; R is the rotation matrix, representing the orientation; and T is the translation vector, representing the position.
The intrinsic parameter matrix M1 is solved by Zhang Zhengyou's checkerboard calibration method:
First, a calibration board consisting of a two-dimensional square grid is used, and pictures of the calibration board in different poses are captured with the camera; then the number of corner points, the spacing between corner points, and the pictures of the calibration board in different poses are input into the camera calibration toolbox of Matlab, and the camera intrinsic parameter matrix M1 is obtained by calculation.
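An equivalent calibration can be sketched with OpenCV instead of the Matlab toolbox named above; the board geometry (9 x 6 inner corners), square spacing, and file paths below are illustrative assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed board geometry: 9x6 inner corners, 25 mm square spacing (illustrative values).
cols, rows, square = 9, 6, 25.0
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):            # images of the board in different poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Zhang's method: intrinsic matrix M1 plus per-view rotation/translation (extrinsics).
rms, M1, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("intrinsic parameter matrix M1:\n", M1)
```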
In step S12, acquiring the image information generated by the visual marker 3 by using the calibrated image collector 1 includes acquiring the image information generated by the visual marker 3 during its one-dimensional translation and two-dimensional rotation, so that tiny pose changes of the visual marker 3 during motion produce significant fringe position shifts, which improves calculation precision.
In some embodiments, in step S2, the obtaining the first projection image information of the first feature patterns 5 and the second projection image information of the grating region 6 according to the position information of the first feature patterns 5 in the image information includes:
Step S21, preprocessing the acquired image information to obtain image information with a specified color;
Step S22, extracting the image information with the specified color to obtain the position information of the plurality of first feature patterns 5; and
Step S23, obtaining the first projection image information of the first feature patterns 5 and the second projection image information of the grating region 6 by homography transformation according to the position information of the plurality of first feature patterns 5.
The acquired image information is color image information. In step S21, the preprocessing the acquired image information to obtain the image information with the specified color includes:
Step S211, converting the acquired color image information into grayscale image information;
Step S212, comparing the gray value of each pixel in the grayscale image information with a preset threshold; and
Step S213, setting the color of each pixel according to the result of the comparison, to obtain image information with the specified color, the image information including foreground image information and background image information distinguished by color.
The gray value of each pixel in the grayscale image information is obtained by weighting the luminances of the R, G, and B color channels at the corresponding pixel of the color image information.
In some preferred embodiments, the foreground image information and the background image information are rendered in white and black, respectively; the color difference is obvious, which makes them easy to distinguish.
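As one way to realize steps S211 to S213, the Python/OpenCV sketch below converts to grayscale and binarizes against a preset threshold; the threshold value 127 and the white-foreground/black-background convention are assumptions consistent with, but not mandated by, the text.

```python
import cv2

def preprocess(color_image, threshold=127):
    """Convert to grayscale and binarize into foreground (white) / background (black)."""
    # Weighted sum of the B, G, R channel luminances (OpenCV's standard weights).
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
    # Pixels above the preset threshold become white (foreground), the rest black.
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return binary
```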
In some specific embodiments, in step S22, the extracting the image information with the specified color to obtain the position information of the plurality of first feature patterns 5 includes:
Step S221, extracting contour information of the plurality of first feature patterns 5 in the foreground image information; and
Step S222, obtaining the center position information of each first feature pattern 5 by a centroid extraction method according to the contour information of the plurality of first feature patterns 5, and taking the center position information as the position information of the corresponding first feature pattern 5.
Specifically, a region growing method is adopted: starting from a set of seed points, neighborhood pixels similar to the seed points are added to the current seed set to form a growing region; after the contour information of a first feature pattern 5 is successfully extracted, the first feature pattern 5 is separated from the image according to the connected region formed by the growth.
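For illustration, the sketch below obtains the same result with OpenCV contour extraction and per-contour centroids, a common alternative to the region-growing formulation above; the minimum-area filter value is an assumption.

```python
import cv2

def first_feature_centers(binary, min_area=50.0):
    """Return the centroid (cx, cy) of each candidate circular pattern in the binary image."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > min_area:              # drop tiny spurious blobs
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```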
Specifically, in step S23, the homography transformation relates the detected pixel coordinates of a point to its rectified (projected) coordinates by
[u′, v′, 1]T ∝ Hm · [u, v, 1]T (equality up to a scale factor),
where Hm is a 3 × 3 homography matrix.
After the pixel positions of the four first feature patterns 5, i.e., the circular patterns, are determined, the coordinates of the four transformed points are set to match the actual size ratio of the marker, so that the homography matrix can be solved. The first projection image information of the four circular patterns is thereby obtained, and the grating region 6 enclosed by the four circular patterns is warped into a square image, giving the second projection image information of the grating region 6; this makes it convenient to subsequently obtain the position of the center pixel of the second feature pattern relative to the marker.
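The rectification can be sketched as follows in Python/OpenCV; the 400-pixel output size and the clockwise ordering of the four circle centers starting from the top-left are illustrative assumptions.

```python
import cv2
import numpy as np

def rectify_grating(image, circle_centers, out_size=400):
    """Warp the region bounded by the four circle centers to a square of out_size pixels."""
    src = np.float32(circle_centers)             # ordered: top-left, top-right, bottom-right, bottom-left
    dst = np.float32([[0, 0], [out_size, 0], [out_size, out_size], [0, out_size]])
    Hm = cv2.getPerspectiveTransform(src, dst)   # homography from image pixels to the square
    return cv2.warpPerspective(image, Hm, (out_size, out_size))
```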
In some preferred embodiments, in step S4, the obtaining the center pixel position of the second feature pattern generated by the grating region 6 according to the second projection image information includes:
Step S41, preprocessing the acquired second projection image information to obtain projection image information with a specified color;
Step S42, extracting the projection image information with the specified color to obtain contour information of the second feature pattern; and
Step S43, obtaining the center pixel position of the second feature pattern by using the zero-order moment and the first-order moments of the image.
In step S42, since the second projection image information also contains quarter-circle patterns at the four corners, a mask is used in the extraction so that only the contour information of the second feature pattern, i.e., the moire fringes, is obtained. This method is simple.
In step S43, the zero-order moment of the image (the sum of the pixel values over all points of the image) is calculated as
M00 = Σx Σy I(x, y),
the first-order moments of the image (the pixel values weighted by the x and y coordinates of the points of the image) are calculated as
M10 = Σx Σy x · I(x, y), M01 = Σx Σy y · I(x, y),
and the centroid of the binary image, namely the center pixel position of the second feature pattern, is obtained as
xc = M10 / M00, yc = M01 / M00,
wherein xc is the coordinate in the x direction of the center pixel of the second feature pattern, and yc is the coordinate in the y direction of the center pixel of the second feature pattern.
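A compact Python sketch of steps S41 to S43, continuing from the rectified grating image of the previous sketch; the threshold, the inverted binarization, and the corner-mask radius are illustrative assumptions.

```python
import cv2
import numpy as np

def moire_center(warped, threshold=127, corner_radius=60):
    """Centroid (xc, yc) of the moire fringe in the rectified (square) grating image."""
    _, binary = cv2.threshold(cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY),
                              threshold, 255, cv2.THRESH_BINARY_INV)
    # Mask out the quarter-circle patterns at the four corners (radius is an assumption).
    h, w = binary.shape
    mask = np.full((h, w), 255, np.uint8)
    for cx, cy in [(0, 0), (w, 0), (0, h), (w, h)]:
        cv2.circle(mask, (cx, cy), corner_radius, 0, -1)
    binary = cv2.bitwise_and(binary, mask)
    m = cv2.moments(binary, binaryImage=True)    # M00, M10, M01 as in the formulas above
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```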
In a preferred embodiment, in step S5, the obtaining the line-of-sight angle according to the change in the center pixel position of the second feature pattern includes:
Step S51, acquiring the initial position and the current position of the center pixel of the second feature pattern;
Step S52, acquiring the observation angle of the visual marker 3 and the movement distance of the second feature pattern;
Step S53, obtaining a preset value according to the observation angle and the movement distance of the second feature pattern; and
Step S54, obtaining the line-of-sight angle according to the initial position and the current position of the center pixel of the second feature pattern and the preset value.
In this embodiment, the viewpoint is taken as the position of the industrial camera; the line of sight is the straight line connecting the camera viewpoint and the center of the cross-shaped moire fringe; the line-of-sight angle is the angle between the line of sight and the normal to the plane of the visual marker 3; and the initial position of the cross-shaped moire fringe is the position of the fringe center when the line-of-sight angle from the industrial camera to the cross-shaped moire fringe is 0.
When the visual marker 3 moves, the cross-shaped moire fringe shifts accordingly; within this range the line-of-sight angle and the relative movement distance of the fringe are in a linear relationship, from which the line-of-sight angle θ can be calculated.
Here x and y are the current position of the center pixel of the second feature pattern, x0 and y0 are its initial position, and the slope k (the preset value) is obtained by calibration with a high-precision turntable, that is, from the observation angle and the movement distance of the second feature pattern.
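Since the relationship is linear in the fringe shift, one plausible reading is a per-axis mapping with the calibrated slope k; the sketch below implements that reading and should be taken as an assumption about the exact functional form rather than the published formula.

```python
import math

def sight_angle(current, initial, k):
    """Line-of-sight angle components (and magnitude) from the fringe-center shift.

    current, initial: (x, y) pixel positions of the moire-fringe center.
    k: slope calibrated on a high-precision turntable (angle per pixel of fringe shift).
    """
    x, y = current
    x0, y0 = initial
    theta_x = k * (x - x0)           # assumed: angle about one axis from the x shift
    theta_y = k * (y - y0)           # assumed: angle about the other axis from the y shift
    return theta_x, theta_y, math.hypot(theta_x, theta_y)
```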
In a preferred embodiment, in step S6, the obtaining the high-precision pose information of the visual marker 3 according to the line-of-sight angle and the coarse-precision pose information of the visual marker 3 includes:
Step S61, obtaining the rotation axis vector and the rotation angle of the image collector 1 according to the line-of-sight angle; and
Step S62, obtaining the high-precision pose information of the visual marker 3 according to the rotation axis vector, the rotation angle, and the coarse-precision pose information of the visual marker 3.
In this embodiment, the coarse-precision viewpoint of the industrial camera is rotated about the center of the visual marker 3 to the high-precision position, as shown in fig. 7.
Specifically, the rotation axis vector is computed from the two viewpoint vectors, wherein P and PC are the coarse-precision and high-precision position coordinates of the industrial camera, respectively; O is the center coordinate of the marker; OP and OPC are the vectors from the center of the visual marker 3 to the coarse-precision and high-precision positions of the industrial camera, respectively; a is the rotation axis vector; and the modulus of the axis vector is the rotation angle ρ.
In this embodiment, the coarse-precision position coordinates of the industrial camera can be obtained from the coarse-precision pose information of the visual marker 3.
In addition, in this embodiment, since the distance d from the industrial camera to the center of the visual marker 3 is known, a triangle formed by the industrial camera, the center of the visual marker 3, and the center of the current fringe can be constructed, as shown in fig. 6, from which the high-precision position coordinates PC of the industrial camera are obtained.
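One consistent reconstruction of the axis-angle computation takes the axis direction from the cross product of OP and OPC and the angle ρ from their normalized dot product, so that |a| = ρ as stated above; the sketch below is an assumed reading, not the published formula.

```python
import numpy as np

def rotation_axis_vector(P, Pc, O):
    """Rotation axis vector a (with |a| = rho) taking the camera from P to Pc about the marker center O."""
    OP, OPc = np.asarray(P) - np.asarray(O), np.asarray(Pc) - np.asarray(O)
    cos_rho = np.dot(OP, OPc) / (np.linalg.norm(OP) * np.linalg.norm(OPc))
    rho = np.arccos(np.clip(cos_rho, -1.0, 1.0))      # rotation angle between the two viewpoints
    axis = np.cross(OP, OPc)
    axis = axis / np.linalg.norm(axis)                # unit rotation axis
    return rho * axis, rho
```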
Specifically, in step S62, the obtaining the high-precision pose information of the visual marker 3 according to the rotation axis vector, the rotation angle, and the coarse-precision pose information of the visual marker 3 includes:
Step S621, obtaining the homogeneous transformation matrix H of the rotation from the rotation axis vector and the rotation angle; and
Step S622, obtaining the high-precision pose information of the visual marker 3 according to the homogeneous transformation matrix H and the coarse-precision pose information of the visual marker 3.
In step S621, the homogeneous transformation matrix H is calculated from the rotation axis vector and the rotation angle ρ, using the abbreviations cos ρ = c, sin ρ = s, and (1 − cos ρ) = C; H describes the rotation by ρ about the rotation axis through the center of the visual marker 3.
In step S622, the high-precision pose information HC is obtained according to the following calculation formula:
HC = HW · H,
wherein HW is the coarse-precision pose information of the visual marker 3; the extrinsic parameter matrix of the industrial camera can then be obtained from the high-precision pose information HC.
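Assuming H is the standard axis-angle (Rodrigues) rotation expressed as a homogeneous transform about the marker center, a Python reconstruction consistent with the abbreviations c, s, and C might look as follows; the placement of the translation terms is an assumption, since only the abbreviations are given in the text.

```python
import numpy as np

def homogeneous_rotation(axis, rho, center):
    """4x4 homogeneous transform H: rotation by rho about 'axis' through the point 'center'."""
    ax, ay, az = np.asarray(axis) / np.linalg.norm(axis)
    c, s, C = np.cos(rho), np.sin(rho), 1.0 - np.cos(rho)
    # Axis-angle (Rodrigues) rotation matrix written with the abbreviations c, s, C.
    R = np.array([[ax*ax*C + c,    ax*ay*C - az*s, ax*az*C + ay*s],
                  [ay*ax*C + az*s, ay*ay*C + c,    ay*az*C - ax*s],
                  [az*ax*C - ay*s, az*ay*C + ax*s, az*az*C + c]])
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(center) - R @ np.asarray(center)   # rotate about 'center', not the origin
    return H

# High-precision pose from the coarse pose H_W (both as 4x4 homogeneous matrices): H_C = H_W @ H
```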
Therefore, the pose acquisition method of this embodiment uses the second feature pattern, such as the moire fringes generated by the grating region 6 of the visual marker 3, which produces a significant fringe position shift for a tiny pose change, overcoming the low pose sensitivity of existing pose estimation markers and methods under near-frontal (orthoscopic) viewing and effectively improving pose calculation precision. It also obtains the high-precision pose information of the visual marker 3 from the line-of-sight angle and the coarse-precision pose information of the visual marker 3, so the computational cost of pose calculation is small: the method involves only one conventional pose estimation (the coarse-precision estimation) and one pose correction based on the viewpoint position, which meets real-time requirements.
As shown in fig. 2, another embodiment of the present invention further provides a pose acquisition apparatus provided on pose acquisition equipment, the equipment including an out-of-plane grating-based visual marker 3, the visual marker 3 including a grating region 6 and a plurality of first feature patterns 5 disposed around the grating region 6, the grating region 6 being used to generate a second feature pattern, the pose acquisition apparatus comprising:
an acquisition unit 210, configured to acquire image information generated by the visual marker 3;
an information processing unit 220, configured to obtain first projection image information of the first feature patterns 5 and second projection image information of the grating region 6 according to the position information of the first feature patterns 5 in the image information;
the information processing unit 220 being further configured to obtain coarse-precision pose information of the visual marker 3 according to the position information of the first feature patterns 5 and the first projection image information,
the information processing unit 220 being further configured to obtain the center pixel position of the second feature pattern generated by the grating region 6 according to the second projection image information,
the information processing unit 220 being further configured to obtain the line-of-sight angle according to the change in the center pixel position of the second feature pattern; and
a calculation unit 230, configured to obtain high-precision pose information of the visual marker 3 according to the line-of-sight angle and the coarse-precision pose information of the visual marker 3.
Specifically, the acquisition unit 210 being configured to acquire the image information generated by the visual marker 3 includes:
calibrating the image collector 1; and
acquiring the image information generated by the visual marker 3 by using the calibrated image collector 1.
Specifically, the information processing unit 220 being configured to obtain the first projection image information of the first feature patterns 5 and the second projection image information of the grating region 6 according to the position information of the first feature patterns 5 in the image information includes:
preprocessing the acquired image information to obtain image information with a specified color;
extracting the image information with the specified color to obtain the position information of the plurality of first feature patterns 5; and
obtaining the first projection image information of the first feature patterns 5 and the second projection image information of the grating region 6 by homography transformation according to the position information of the plurality of first feature patterns 5.
Specifically, the information processing unit 220 being further configured to obtain the line-of-sight angle according to the change in the center pixel position of the second feature pattern includes:
acquiring the initial position and the current position of the center pixel of the second feature pattern;
acquiring the observation angle of the visual marker 3 and the movement distance of the second feature pattern;
obtaining a preset value according to the observation angle and the movement distance of the second feature pattern; and
obtaining the line-of-sight angle according to the initial position and the current position of the center pixel of the second feature pattern and the preset value.
Specifically, the calculation unit 230 being configured to obtain the high-precision pose information of the visual marker 3 according to the line-of-sight angle and the coarse-precision pose information of the visual marker 3 includes:
obtaining the rotation axis vector and the rotation angle of the image collector 1 according to the line-of-sight angle; and
obtaining the high-precision pose information of the visual marker 3 according to the rotation axis vector, the rotation angle, and the coarse-precision pose information of the visual marker 3.
Compared with the prior art, the pose acquisition apparatus has the same advantages as the pose acquisition method described above, which are not repeated here.
Another embodiment of the present invention further provides pose acquisition equipment, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above pose acquisition method when executing the computer program.
As shown in figs. 3 to 5, in this embodiment, the pose acquisition equipment further includes:
an out-of-plane grating-based visual marker 3, the visual marker 3 including a grating region 6 and a plurality of first feature patterns 5 disposed around the grating region 6, the grating region 6 being used to generate a second feature pattern;
an image collector 1, configured to acquire image information generated by the visual marker 3;
a pose acquisition base, to which the visual marker 3 and the image collector 1 are connected, the visual marker 3 being adapted to translate or rotate on the pose acquisition base; and
a pose acquirer 4, configured to receive the image information acquired by the image collector 1 and process the image information to obtain the high-precision pose information of the visual marker 3.
In a preferred embodiment, the image collector 1 is an industrial camera, which is low in cost.
In some embodiments, the visual marker 3 can perform one-dimensional translation and two-dimensional rotation, and includes a substrate on which the grating region 6 and the plurality of first feature patterns 5 surrounding the grating region 6 are disposed; the plurality of first feature patterns 5 are four circular patterns with the same diameter, the grating region 6 is a first rectangular region, the four centers of the circular patterns lie on the diagonals of the first rectangular region, and connecting the four circle centers in sequence forms a second rectangular region. The structure is simple.
Preferably, the substrate is made of organic glass, so that the cost is low and the material is easy to obtain.
In some embodiments, the grating region 6 includes a two-layer grid grating 7 and a light-transmitting medium plate 8 arranged on the two-layer grid grating 7. This structure can generate moire fringes far larger than the grid size, in the form of a large dark cross-shaped fringe, which undergoes a large relative movement for a small change in pose.
Compared with the prior art, the pose acquisition equipment has the same advantages as the pose acquisition method described above, which are not repeated here.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Although the present disclosure is described above, the scope of protection of the present disclosure is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the disclosure, and these changes and modifications will fall within the scope of the invention.