Disclosure of Invention
To address the problem that existing vision-based robot kinematics calibration methods, constrained by the camera's limited field of view, can improve a robot's absolute positioning accuracy only within a restricted space, the invention provides a vision-based industrial robot kinematics calibration method for a large-size workspace.
The invention discloses a vision-based industrial robot kinematics calibration method for a large-size workspace, which comprises the following steps:
laying out ArUco markers in the workspace according to the industrial robot configuration;
capturing ArUco marker images with a monocular vision system along a set shooting path, and generating a fully covering ArUco map from the ArUco marker images by image stitching;
capturing ArUco marker images with the monocular vision system at different robot poses, identifying the position of the robot end-effector from the fully covering ArUco map, and constructing an absolute position model of the robot end-effector;
calculating the nominal end-effector pose from the robot joint angles and the nominal kinematic parameters, obtaining the measured end-effector pose from the absolute position model of the robot end-effector, and forming the error matrix between the nominal and measured end-effector poses;
and establishing a pose error model and deriving the robot kinematic parameter errors from the error matrix.
Preferably, the method of laying out ArUco markers within the workspace according to an industrial robot configuration comprises:
establishing an objective function F(O) for the arrangement scheme:
F(O)=α·V(T)+β·U(D)+γ·P(R)-δ·C(S)
wherein O represents the marker arrangement scheme;
V(T) represents the ArUco marker visibility parameter, and α represents the weight of V(T);
U(D) represents the uniformity parameter of the ArUco marker arrangement, and β represents the weight of U(D);
P(R) represents the average positioning accuracy of the robot over all positions in its motion range, and γ represents the weight of P(R);
C(S) represents the arrangement cost, and δ represents the weight of C(S);
and optimizing the ArUco marker arrangement scheme O with a genetic algorithm to maximize the objective function F(O), iteratively updating the positions, number, and orientations of the markers during the optimization until an arrangement scheme that maximizes F(O) is found.
Preferably, the uniformity parameter is, for example, U(D) = 1/(1 + σ²(D)), where D represents the set of marker densities at the checkpoints in the workspace and σ²(D) is the variance of D.
Preferably, the total visibility is, for example, V(T) = (1/n)·Σ v_i with v_i = a_i/a_total, where v_i denotes the visibility of the i-th ArUco marker, which depends on the angle and distance between the marker and the camera; a_i denotes the visible area of the i-th marker in the camera field of view; a_total denotes the total area of the camera field of view; and n denotes the total number of ArUco markers.
Preferably, the average positioning accuracy is, for example, P(R) = (1/m)·Σ_j (1 − d_j/d_max), where m is the total number of positioning operations, d_j is the positioning error of the j-th operation, and d_max is the maximum allowable error distance.
Preferably, the method of capturing ArUco marker images with a monocular vision system along a set shooting path and generating a full-coverage ArUco map from the ArUco marker images by image stitching comprises the following steps:
capturing ArUco marker images sequentially along the set closed-loop shooting path, ensuring that adjacent ArUco marker images share at least one common ArUco marker;
determining the position of each ArUco marker and minimizing the reprojection error of the ArUco marker corner points with the Levenberg-Marquardt (L-M) algorithm to obtain an initial ArUco map;
the objective function of the reprojection error is
E = Σ_t Σ_i Σ_j ‖u_{t,i,j} − ψ(δ, γ_t, γ_i, c_j)‖²
and the closed-loop constraint is
Σ_{k=1}^{K} ‖x_k ⊕ z_k ⊖ x_{k+1}‖² → min, with x_{K+1} = x_1,
wherein ψ(δ, γ_t, γ_i, c_j) represents the projection of corner points from three-dimensional space coordinates to pixel coordinates; δ represents the camera intrinsic matrix; γ_t represents the camera extrinsics for the t-th image; γ_i represents the transformation matrix from the i-th marker coordinate system to the world coordinate system; c_j represents the three-dimensional coordinates of the j-th marker corner; u_{t,i,j} represents the measured pixel coordinates of that corner; K is the total number of images in the closed-loop shooting path; x_k is the pose of the k-th image; z_k is the pose transformation from the k-th image to the (k+1)-th image; and ⊕, ⊖ are composition operators between poses.
Preferably, the method for constructing the absolute position model of the robot end-effector comprises the following steps:
establishing the coordinate transformation between the ArUco image coordinate system and the camera coordinate system from the full-coverage ArUco map and the ArUco marker images captured by the monocular vision system at different robot poses;
and calculating the relative pose between the robot end-effector coordinate system and the camera coordinate system with a hand-eye calibration model, thereby determining the absolute position model of the robot end-effector.
Preferably, a pose error model is established, and the error matrix is processed through Python programming to obtain the robot kinematic parameter errors.
The invention has the following advantages:
The invention adopts a monocular vision system and an ArUco marker map, which not only greatly reduces the cost of applying high-precision calibration technology but also simplifies the calibration process, making the system simpler and faster to install and operate. This makes the invention especially suitable for production environments that require frequent calibration or adjustment, saving users considerable time and cost.
The invention effectively overcomes the limitation that conventional vision calibration methods are bound by the camera's field of view, achieving high-precision robot positioning in a larger workspace. This improves both production efficiency and control quality, and is particularly suited to applications with extremely demanding positioning-accuracy requirements.
The calibration method applies to various industrial robot models and brands and has good universality. The flexible design of the ArUco marker map allows the calibration system to be adjusted to actual demands, giving good extensibility. Meanwhile, the added information and redundancy provided by the map allow high calibration accuracy to be maintained even if some markers are occluded or damaged, enhancing the robustness of the system in complex environments.
Detailed Description
The following is a clear and complete description of the embodiments of the invention with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
The vision-based industrial robot kinematics calibration method in a large-size workspace of this embodiment comprises the following steps:
Step 1, laying out ArUco markers in the workspace according to the industrial robot configuration and optimizing the layout;
Step 2, capturing ArUco marker images with a monocular vision system along a set shooting path, and generating a fully covering ArUco map from the ArUco marker images by image stitching;
Step 3, capturing ArUco marker images with the monocular vision system at different robot poses, identifying the end-effector position from the fully covering ArUco map, and constructing the absolute position model of the robot end-effector;
Step 4, calculating the nominal end-effector pose from the robot joint angles and the nominal kinematic parameters, obtaining the measured end-effector pose from the absolute position model of the robot end-effector, and forming the error matrix between the nominal and measured poses;
Step 5, establishing a pose error model and deriving the robot kinematic parameter errors from the error matrix.
By adopting a monocular vision system and an ArUco marker map, this embodiment greatly reduces the cost of applying high-precision calibration technology and simplifies the calibration process, making the system simpler and faster to install and operate. This makes the method especially suitable for production environments requiring frequent calibration or adjustment, saving users considerable time and cost.
This embodiment then determines the layout of the field devices (industrial robot, work table, processing equipment, etc.) and selects an appropriate industrial camera according to the working distance and required measurement accuracy. The mounting position and angle of the camera at the robot end-effector are carefully designed to optimize image capture quality;
ArUco markers with unique IDs are placed at suitable positions within the robot workspace, and image stitching is used to generate an ArUco map covering the full workspace, which serves as the calibration reference. During shooting, at least one complete ArUco marker must lie entirely within the camera's field of view;
The robot poses are selected to cover the robot's whole working range as far as possible, taking into account the robot's motion characteristics and the capture capability of the vision system. The robot moves through the predetermined pose points; at each pose the six joint angles of the industrial robot are acquired while the industrial camera mounted at the robot end-effector captures an image containing ArUco markers. The image data and joint angle data of each pose point are recorded and stored for subsequent analysis and processing;
The ArUco marker layout in Step 1 is specifically as follows:
First, the ArUco marker layout is optimized within the industrial robot workspace according to the robot's specific configuration and job requirements. In this embodiment, the layout follows these principles:
(1) Marker placement uniformity: by rationally planning the positions and spacing of the markers, it is ensured that the markers cover the entire workspace with similar marker density everywhere. This is achieved by optimizing the uniformity term U(D) of the objective function, for example U(D) = 1/(1 + σ²(D)), where D represents the set of marker densities at the checkpoints in the workspace and σ²(D) is the variance of D;
(2) ArUco marker visibility: the camera's parameters and field of view are taken into account when selecting marker placement locations, and placing ArUco markers in areas occluded by the robot or other objects is avoided. The total visibility V(T) of the ArUco markers, computed for example as V(T) = (1/n)·Σ v_i with v_i = a_i/a_total, ensures that the camera can accurately monitor and locate each marker;
(3) Robot pose variation: to enlarge the robot's pose range, ArUco markers are placed at various positions, including non-planar ones. This not only extends the pose range but also increases the robustness and stability of the calibration, making the method especially suitable for complex workspaces and multi-pose operations.
Comprehensively considering marker visibility, marker uniformity, and the robot's positioning accuracy, an optimization objective function is defined:
F(O)=α·V(T)+β·U(D)+γ·P(R)-δ·C(S)
where F(O) is the optimization objective function and O represents the marker arrangement scheme; α represents the weight of V(T), β the weight of U(D), and γ the weight of P(R). P(R) represents the average positioning accuracy of the robot over all positions in its motion range and is calculated, for example, as P(R) = (1/m)·Σ_j (1 − d_j/d_max), where m is the total number of positioning operations, d_j is the positioning error of the j-th operation, and d_max is the maximum allowable error distance. C(S) represents the arrangement cost, calculated for example as C(S) = Σ_k C_k, where C_k is the placement cost of the k-th marker; δ represents the weight of C(S).
The ArUco marker arrangement scheme O is optimized with a genetic algorithm to maximize the objective function F(O). During the optimization, the positions, number, and orientations of the markers are iteratively updated until an arrangement scheme that maximizes F(O) is found.
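A minimal sketch of how this optimization could be set up is given below, assuming a square workspace, unit placement costs, a coarse density grid for U(D), and a constant stand-in for P(R); all of these, along with the weights, are illustrative assumptions rather than values from the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: a scheme is an (n, 3) array of (x, y, yaw)
# per marker; weights, workspace size, and helper models are placeholders.
ALPHA, BETA, GAMMA, DELTA = 0.4, 0.3, 0.2, 0.1
N_MARKERS, WORKSPACE = 20, 4.0          # 20 markers in a 4 m x 4 m area

def visibility(scheme):
    # V(T) stand-in: mean visibility v_i, decaying with distance from
    # the workspace centre (a real model would use camera geometry).
    d = np.linalg.norm(scheme[:, :2] - WORKSPACE / 2, axis=1)
    return np.mean(np.clip(1.0 - d / WORKSPACE, 0.0, 1.0))

def uniformity(scheme):
    # U(D) = 1 / (1 + var(D)) over a coarse grid of density checkpoints.
    H, *_ = np.histogram2d(scheme[:, 0], scheme[:, 1],
                           bins=4, range=[[0, WORKSPACE]] * 2)
    return 1.0 / (1.0 + H.var())

def fitness(scheme):
    precision = 0.9                      # P(R) stand-in for simulated runs
    cost = len(scheme) / N_MARKERS       # C(S) with unit cost per marker
    return (ALPHA * visibility(scheme) + BETA * uniformity(scheme)
            + GAMMA * precision - DELTA * cost)

def genetic_search(pop_size=40, generations=200):
    pop = rng.uniform(0, WORKSPACE, (pop_size, N_MARKERS, 3))
    for _ in range(generations):
        scores = np.array([fitness(s) for s in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]        # selection
        children = elite[rng.integers(len(elite), size=pop_size // 2)].copy()
        children += rng.normal(0.0, 0.1, children.shape)        # mutation
        pop = np.concatenate([elite, np.clip(children, 0, WORKSPACE)])
    return pop[np.argmax([fitness(s) for s in pop])]

best_layout = genetic_search()
```

In a real deployment the visibility and precision terms would be driven by the camera model and by simulated or measured positioning runs over the robot's motion range.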
The ArUco map creation in Step 2 is specifically as follows:
(1) The robot photographs the ArUco markers from different poses, ensuring that the same markers appear in adjacent images. At robot pose 1, when marker 1 and marker 3 are detected in the same image, the pose of marker 1 in the camera frame, T_c1, and the pose of marker 3 in the camera frame, T_c3, can both be established; the pose transformation between marker 1 and marker 3 then follows as T_13 = T_c1⁻¹·T_c3. At pose 2, when marker 3 and marker 4 are detected simultaneously, the transformation T_34 between them is determined in the same way. Because marker 3 is common to both poses, the transformation between marker 1 and marker 4 can then be computed as T_14 = T_13·T_34, as sketched below. Likewise, through the common markers in the images captured at each location, the relative spatial relationship between any two ArUco markers can be determined. By designating the coordinate system of one ArUco marker as the reference world coordinate system, the poses of all ArUco markers relative to that reference can be determined.
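The chaining of relative poses through shared markers can be expressed with homogeneous 4×4 transforms, as in the sketch below; the example matrices stand in for real PnP results.

```python
import numpy as np

def inv(T):
    """Invert a 4x4 rigid transform using the rotation transpose."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical marker poses in the camera frame: markers 1 and 3 seen
# at robot pose 1, markers 3 and 4 seen at robot pose 2 (e.g. from PnP).
T_c1, T_c3 = np.eye(4), np.eye(4)
T_c3[:3, 3] = [0.5, 0.0, 0.0]
T_c3b, T_c4 = np.eye(4), np.eye(4)
T_c4[:3, 3] = [0.0, 0.4, 0.0]

T_13 = inv(T_c1) @ T_c3      # marker 3 expressed in marker 1's frame
T_34 = inv(T_c3b) @ T_c4     # marker 4 expressed in marker 3's frame
T_14 = T_13 @ T_34           # chained: marker 4 in marker 1's frame
print(T_14[:3, 3])           # translation of marker 4 w.r.t. marker 1
```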
(2) Insufficient illumination, rapid camera motion, low resolution, poor focus, and similar factors make the pixel coordinates of the marker corner points inaccurate, so the computed pose matrices contain errors. In short, observation noise makes the naive projection relationship highly inaccurate and the accumulated error large. The precise position of each ArUco marker must therefore be determined from inaccurate camera observations. This is an optimization problem of minimizing the reprojection error:
E = Σ_t Σ_i Σ_j ‖u_{t,i,j} − ψ(δ, γ_t, γ_i, c_j)‖²
wherein ψ(δ, γ_t, γ_i, c_j) represents the projection of a corner point from three-dimensional space coordinates to pixel coordinates; δ represents the camera intrinsic matrix; γ_t represents the camera extrinsics for the t-th image; γ_i represents the transformation matrix from the i-th marker coordinate system to the world coordinate system; c_j represents the three-dimensional coordinates of the j-th marker corner; and u_{t,i,j} represents the measured pixel coordinates of that corner.
(3) Meanwhile, because repeated image stitching accumulates error, a closed-loop image acquisition route is set to mitigate this problem, the total error is optimized, and a closed-loop constraint is introduced:
Σ_{k=1}^{K} ‖x_k ⊕ z_k ⊖ x_{k+1}‖² → min, with x_{K+1} = x_1,
where K is the total number of images in the closed loop, x_k is the pose of the k-th image, z_k is the pose transformation from the k-th image to the (k+1)-th image, and ⊕, ⊖ are composition operators between poses.
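As one possible realization, the sketch below refines marker poses by minimizing the corner reprojection error with SciPy's Levenberg-Marquardt solver; the intrinsics, initial guesses, and observations are synthetic stand-ins, and a full implementation would add the closed-loop pose-graph term above.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

# Assumed intrinsics and a square marker's corner coordinates (metres).
K = np.array([[900.0, 0, 640], [0, 900.0, 360], [0, 0, 1]])
corners_local = np.array([[-0.05,  0.05, 0], [0.05,  0.05, 0],
                          [ 0.05, -0.05, 0], [-0.05, -0.05, 0]])

def project(rvec, tvec, pts):
    uv, _ = cv2.projectPoints(pts, rvec, tvec, K, None)
    return uv.reshape(-1, 2)

def residuals(params, cam_rvec, cam_tvec, observed_uv):
    # params holds one (rvec, tvec) per marker, flattened to 6 values each
    res = []
    for i, uv in enumerate(observed_uv):
        r, t = params[6*i:6*i+3], params[6*i+3:6*i+6]
        R_m, _ = cv2.Rodrigues(r)
        world_pts = corners_local @ R_m.T + t     # marker frame -> world
        res.append(project(cam_rvec, cam_tvec, world_pts) - uv)
    return np.concatenate(res).ravel()

# One camera view observing two markers (synthetic observations).
cam_r, cam_t = np.zeros(3), np.array([0.0, 0.0, 2.0])
obs = [project(cam_r, cam_t, corners_local),
       project(cam_r, cam_t, corners_local + [0.31, 0.0, 0.0])]
x0 = np.array([0, 0, 0, 0, 0, 0,  0, 0, 0, 0.3, 0, 0], dtype=float)
sol = least_squares(residuals, x0, method="lm", args=(cam_r, cam_t, obs))
print(sol.x[9:12])    # refined marker-2 translation, close to [0.31, 0, 0]
```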
In Step 3, the robot captures ArUco marker images with the monocular vision system at different poses, and the absolute position model of the robot end-effector is constructed as follows:
(1) The measurement system comprises several ArUco markers in space, an industrial robot, and an industrial camera mounted at the robot end-effector. The camera captures the ArUco markers and estimates their poses through image recognition, establishing the coordinate transformation between the ArUco image coordinate system and the camera coordinate system. A PnP algorithm is used to determine the 6-degree-of-freedom pose of the camera relative to the calibration target. In the algorithm, the three-dimensional-to-two-dimensional correspondence of the four corner points of an ArUco marker is described by the projection equation s·[u, v, 1]ᵀ = δ·[R | t]·[X, Y, Z, 1]ᵀ, where (X, Y, Z) are the corner coordinates in the marker frame, (u, v) are the observed pixel coordinates, s is a scale factor, δ is the camera intrinsic matrix, and [R | t] is the camera pose.
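This detection-plus-PnP step could look as follows with OpenCV's ArUco module (the ArucoDetector API of OpenCV 4.7 and later); the dictionary, marker side length, and intrinsics are assumed values.

```python
import numpy as np
import cv2

K = np.array([[900.0, 0, 640], [0, 900.0, 360], [0, 0, 1]])  # assumed
dist = np.zeros(5)                     # assumed: no lens distortion
side = 0.10                            # assumed marker side length (m)
# Corner order required by SOLVEPNP_IPPE_SQUARE: TL, TR, BR, BL, z = 0.
obj_pts = np.array([[-side/2,  side/2, 0], [ side/2,  side/2, 0],
                    [ side/2, -side/2, 0], [-side/2, -side/2, 0]])

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def camera_pose_from_marker(image):
    """Return (rvec, tvec) mapping marker coordinates into the camera frame."""
    corners, ids, _ = detector.detectMarkers(image)
    if ids is None:
        return None
    img_pts = corners[0].reshape(4, 2)   # first detected marker's corners
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return (rvec, tvec) if ok else None
```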
(2) The relative pose between the KUKA KR500 robot end-effector coordinate system and the Daheng industrial camera coordinate system is calculated with the hand-eye calibration model, thereby determining the absolute position model of the robot end-effector; a hand-eye matrix is obtained by calibration in this embodiment.
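Such a hand-eye calibration could be performed with OpenCV's calibrateHandEye; the sketch below verifies the call on synthetic, noise-free eye-in-hand data, which is purely illustrative.

```python
import numpy as np
import cv2

def rt(T):
    return T[:3, :3], T[:3, 3:]

def rand_pose(rng):
    R, _ = cv2.Rodrigues(rng.normal(size=(3, 1)))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, rng.normal(size=3)
    return T

rng = np.random.default_rng(1)
X_true = rand_pose(rng)      # unknown camera pose in the end-effector frame
T_b_t = rand_pose(rng)       # fixed marker-map pose in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):          # ten synthetic robot poses
    T_b_g = rand_pose(rng)                           # end-effector in base
    T_c_t = np.linalg.inv(T_b_g @ X_true) @ T_b_t    # marker map in camera
    for lst, val in zip((R_g2b, t_g2b, R_t2c, t_t2c), rt(T_b_g) + rt(T_c_t)):
        lst.append(val)

R_x, t_x = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_x, X_true[:3, :3], atol=1e-6))   # True on clean data
```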
The error matrix of the measured pose in Step 4 is obtained specifically as follows:
(1) A kinematic model is established from the theoretical DH parameters of the industrial robot, and the nominal end-effector pose is calculated for each robot pose;
(2) The measured end-effector pose is obtained from the absolute position model of the robot end-effector, and the error matrix between the nominal and measured end-effector poses is formed;
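One way to compute the nominal pose and the resulting error matrix is sketched below; the DH table is a generic 6-axis placeholder, not the KR500's actual parameters, and the measured pose is a stand-in for the vision measurement.

```python
import numpy as np

DH = [  # (a, alpha, d, theta_offset) per joint -- placeholder values
    (0.26, -np.pi/2, 1.045, 0), (1.30, 0, 0, 0), (0.06, -np.pi/2, 0, 0),
    (0.0, np.pi/2, 1.025, 0), (0.0, -np.pi/2, 0, 0), (0.0, 0, 0.29, 0)]

def dh_matrix(a, alpha, d, theta):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [ 0,     sa,     ca,    d],
                     [ 0,      0,      0,    1]])

def nominal_pose(joints):
    # Chain the per-joint DH transforms into the end-effector pose.
    T = np.eye(4)
    for (a, al, d, off), q in zip(DH, joints):
        T = T @ dh_matrix(a, al, d, q + off)
    return T

q = np.deg2rad([10, -35, 40, 5, 30, 0])        # example joint angles
T_nom = nominal_pose(q)
T_meas = T_nom.copy()
T_meas[:3, 3] += 1e-3                          # stand-in measured pose
dT = np.linalg.inv(T_nom) @ T_meas             # pose error matrix
dp = T_meas[:3, 3] - T_nom[:3, 3]              # position error component
```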
In Step 5, a pose error model is established, and the error matrix is processed through Python programming to obtain the robot kinematic parameter errors.
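The identification itself is typically a linearized least-squares fit: the position error is modelled as dp ≈ J·dx and solved for the parameter errors dx. The generic sketch below uses a numerically estimated Jacobian and synthetic measurements, not the embodiment's own error model.

```python
import numpy as np

def fk(params, q):
    # Forward kinematics from flattened (a, alpha, d, offset) per joint.
    T = np.eye(4)
    for i, qi in enumerate(q):
        a, al, d, off = params[4*i:4*i+4]
        th = qi + off
        ct, st, ca, sa = np.cos(th), np.sin(th), np.cos(al), np.sin(al)
        T = T @ np.array([[ct, -st*ca,  st*sa, a*ct],
                          [st,  ct*ca, -ct*sa, a*st],
                          [ 0,     sa,     ca,    d],
                          [ 0,      0,      0,    1]])
    return T[:3, 3]

def jacobian(params, q, eps=1e-7):
    # Numerical Jacobian of the end position w.r.t. all 24 parameters.
    J = np.empty((3, params.size))
    for j in range(params.size):
        p = params.copy(); p[j] += eps
        J[:, j] = (fk(p, q) - fk(params, q)) / eps
    return J

rng = np.random.default_rng(2)
nominal = rng.uniform(-1, 1, 24)            # placeholder nominal DH set
true = nominal + rng.normal(0, 1e-3, 24)    # synthetic "actual" parameters
poses = [rng.uniform(-np.pi, np.pi, 6) for _ in range(60)]

J = np.vstack([jacobian(nominal, q) for q in poses])
dp = np.concatenate([fk(true, q) - fk(nominal, q) for q in poses])
dx, *_ = np.linalg.lstsq(J, dp, rcond=None)  # identified parameter errors
print(np.linalg.norm(dp), np.linalg.norm(dp - J @ dx))  # residual shrinks
```

Because some DH parameters are not independently identifiable from position data alone, the least-squares solution is a minimum-norm estimate; an observability analysis or regularization would be needed in a production implementation.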
To verify the effectiveness and superiority of the invention, the ArUco map-based calibration method is compared with a conventional checkerboard-based calibration method. The workspace of the industrial robot is divided into 9 regions, and 10 verification points are randomly arranged in each region; comparing the positioning accuracy of the two methods across the different regions and analyzing the error distribution further demonstrates the advantages of the method.
The identified robot kinematic parameter errors are shown in Table 1:
TABLE 1 kinematic parameter error results
The processed data are analyzed to evaluate how much the method improves the robot's absolute positioning accuracy. Particular attention is paid to the error distribution across the different regions and to the comparison with the conventional method, comprehensively verifying the method's effectiveness and practical value.
In summary, the invention proposes and implements an efficient kinematic calibration method for industrial robots in large-size workspaces based on an ArUco map and a monocular vision system. By combining wide deployment of ArUco markers, image stitching, accurate image processing and marker recognition, and the construction of an absolute position model with kinematic parameter calibration, the invention significantly improves the positioning accuracy and operating efficiency of industrial robots over a large working area. The system is simple to operate, cost-efficient, and well adapted and extensible, meeting the requirements of various high-precision control tasks. Implementing the invention provides strong technical support for a wide range of industrial applications, with broad application prospects and notable economic and social value, particularly in aerospace, automobile manufacturing, and precision machining.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that the different dependent claims and the features described herein may be combined in ways other than as described in the original claims. It is also to be understood that features described in connection with separate embodiments may be used in other described embodiments.