
CN118322213B - Vision-based kinematic calibration method for industrial robots in large workspaces - Google Patents

Vision-based kinematic calibration method for industrial robots in large workspaces

Info

Publication number
CN118322213B
CN118322213B
Authority
CN
China
Prior art keywords
aruco
robot
vision
error
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410623842.6A
Other languages
Chinese (zh)
Other versions
CN118322213A (en)
Inventor
高栋
尹远浩
邓柯楠
路勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN202410623842.6A priority Critical patent/CN118322213B/en
Publication of CN118322213A publication Critical patent/CN118322213A/en
Application granted granted Critical
Publication of CN118322213B publication Critical patent/CN118322213B/en


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract


A vision-based kinematic calibration method for industrial robots in large workspaces solves the problem that existing vision-based robot kinematic calibration methods can only improve the absolute positioning accuracy of robots within a limited space due to field-of-view limitations, and belongs to the field of robot kinematic calibration technology. The invention includes: laying out ArUco markers in the workspace according to the configuration of the industrial robot; capturing ArUco marker images with a monocular vision system along a set shooting path to generate a fully covering ArUco map; capturing ArUco marker images with the monocular vision system at different robot poses, constructing an absolute position model of the robot end, and obtaining the measured pose; calculating the error matrix between the nominal end pose and the measured pose from the robot joint angles and the nominal kinematic parameter values; and establishing a pose error model and obtaining the robot kinematic parameter errors from the error matrix.

Description

Vision-based kinematic calibration method for industrial robots in large workspaces
Technical Field
The invention relates to a vision-based industrial robot kinematic calibration method for large workspaces, and belongs to the technical field of robot kinematic calibration.
Background
With advanced manufacturing demanding high-speed, high-precision, and heavy-payload industrial robots, the requirements on absolute positioning accuracy keep rising, and improving it through calibration has become a research hotspot. Vision-based robot kinematic calibration is favored for its low cost and simple operation. Currently, robot kinematic calibration relies mainly on specific markers or feature points (such as checkerboards and precision spheres) as calibration references. The paper "Kinematic identification of industrial robot using end-effector mounted monocular camera bypassing measurement of 3-D pose" (IEEE/ASME Transactions on Mechatronics 27.1 (2021): 383-394) describes a kinematic identification method that uses a monocular camera mounted on the end-effector and works directly from two-dimensional images of a checkerboard calibration plate without three-dimensional pose measurement, simplifying the procedure to a single-stage estimation, but it is limited by the camera field of view and the robot calibration space. The paper "A novel vision-based calibration framework for industrial robotic manipulators" (Robotics and Computer-Integrated Manufacturing 73 (2022): 102248) proposes a calibration framework that uses a single externally fixed camera and ArUco markers on the robot end to calibrate an industrial robot arm, but this approach cannot accommodate both camera field of view and measurement distance, which limits its applicability in large workspaces.
In short, current vision-based robot kinematic calibration methods share a common problem: the camera field of view restricts the robot's range of motion during calibration, so the improvement in absolute positioning accuracy is concentrated in a limited measurement region. When the robot is deployed in a large workspace, its absolute positioning accuracy therefore struggles to meet full-space high-accuracy requirements.
Disclosure of Invention
To address the problem that existing vision-based robot kinematic calibration methods can only improve a robot's absolute positioning accuracy within a limited space due to field-of-view limitations, the invention provides a vision-based industrial robot kinematic calibration method for large workspaces.
The disclosed vision-based industrial robot kinematic calibration method for large workspaces comprises the following steps:
laying out ArUco markers in the workspace according to the industrial robot configuration;
capturing ArUco marker images with a monocular vision system along a set shooting path, and generating a fully covering ArUco map from the marker images using image stitching;
capturing ArUco marker images with the monocular vision system at different robot poses, identifying the robot end position against the fully covering ArUco map, and constructing an absolute position model of the robot end;
calculating the nominal end pose from the robot joint angles and the nominal kinematic parameter values, obtaining the measured end pose from the absolute position model of the robot end, and forming the error matrix between the nominal and measured end poses;
establishing a pose error model and obtaining the robot kinematic parameter errors from the error matrix.
Preferably, the method of laying out ArUco markers within the workspace according to an industrial robot configuration comprises:
Establishing an objective function F(O) for the arrangement scheme:
F(O) = α·V(T) + β·U(D) + γ·P(R) − δ·C(S)
where O represents the marker arrangement scheme;
V(T) represents the visibility measure of the ArUco markers, and α is the weight of V(T);
U(D) represents the uniformity measure of the ArUco marker arrangement, and β is the weight of U(D);
P(R) represents the average positioning accuracy over all robot positions in the motion range, and γ is the weight of P(R);
C(S) represents the arrangement cost, and δ is the weight of C(S);
and optimizing the ArUco marker arrangement scheme O with a genetic algorithm to maximize F(O), iteratively updating the positions, number, and orientations of the markers until an arrangement that maximizes F(O) is found.
Preferably,
U(D) = 1 / (1 + σ²(D))
where D represents the set of marker densities at the checkpoints in the workspace and σ²(D) is the variance of D.
Preferably,
V(T) = Σ_{i=1}^{n} v_i, with v_i = a_i / a_total
where v_i denotes the visibility of the i-th ArUco marker and depends on the angle and distance between the marker and the camera, a_i denotes the visible area of the i-th ArUco marker in the camera field of view, a_total denotes the total area of the camera field of view, and n denotes the total number of ArUco markers.
Preferably,
P(R) = (1/m) · Σ_{j=1}^{m} (1 − d_j / d_max)
where m is the total number of positioning operations, d_j is the positioning error of the j-th operation, and d_max is the maximum allowed error distance.
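For illustration, a minimal Python sketch of evaluating F(O) for one candidate layout follows. The weight values and the exact forms of V(T), U(D), P(R), and C(S) are assumptions reconstructed from the definitions above, not taken from the patent's original formula images:

```python
import numpy as np

def layout_objective(v, densities, d, d_max, costs,
                     alpha=0.4, beta=0.2, gamma=0.3, delta=0.1):
    """Evaluate F(O) = a*V(T) + b*U(D) + g*P(R) - d*C(S) for one layout.

    v         : per-marker visibility scores v_i (visible area / FOV area)
    densities : marker density sampled at each checkpoint (the set D)
    d         : positioning errors d_j over m positioning operations
    d_max     : maximum allowed error distance
    costs     : placement cost C_k of each marker
    """
    V = np.sum(v)                              # total visibility V(T)
    U = 1.0 / (1.0 + np.var(densities))        # uniformity from the variance of D
    P = np.mean(1.0 - np.asarray(d) / d_max)   # average positioning accuracy P(R)
    C = np.sum(costs)                          # total arrangement cost C(S)
    return alpha * V + beta * U + gamma * P - delta * C
```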
Preferably, the method of capturing ArUco marker images with the monocular vision system along the set shooting path and generating a fully covering ArUco map by image stitching comprises:
shooting the ArUco marker images sequentially along the set closed-loop shooting path, ensuring that adjacent images share at least one common ArUco marker;
determining the position of each ArUco marker and minimizing the reprojection error of the ArUco marker corner points with the L-M (Levenberg-Marquardt) algorithm to obtain the initial ArUco map;
the objective function of the reprojection error is
min over (γ_t, γ_i) of Σ_t Σ_i Σ_j ‖ p̂_tij − ψ(δ, γ_t, γ_i·c_j) ‖²
and the closed-loop constraint is
Σ_{k=1}^{K} ‖ (x_k ⊕ z_k) ⊖ x_{k+1} ‖², with x_{K+1} = x_1,
where ψ(δ, γ_t, γ_i·c_j) is the projection of a corner point from three-dimensional space coordinates to pixel coordinates, δ is the camera intrinsic matrix, γ_t is the camera extrinsic pose for image t, γ_i is the transformation matrix from the marker coordinate system to the world coordinate system, c_j is the three-dimensional coordinate of a marker corner point, p̂_tij is the observed pixel coordinate of that corner, K is the total number of images in the closed-loop shooting path, x_k is the pose of the k-th image, z_k is the pose transformation from the k-th image to the (k+1)-th image, and ⊕/⊖ are the composition and difference operators between poses.
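As an illustration of this bundle-adjustment step, the following Python sketch builds the stacked reprojection residuals over camera poses γ_t and marker poses γ_i and refines them with an L-M solver. It assumes NumPy, OpenCV, and SciPy; lens distortion is ignored and all variable names are hypothetical:

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def pose_to_T(p):
    """6-vector (rx, ry, rz, tx, ty, tz) -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(p[:3])
    T[:3, 3] = p[3:]
    return T

def residuals(params, obs, corners_local, K_int, n_imgs, n_markers):
    """Stack psi(delta, gamma_t, gamma_i . c_j) minus observed pixel coords.

    obs           : dict {(image t, marker i): observed 4x2 corner pixels}
    corners_local : 4x3 corner coordinates in the marker's own frame
    K_int         : 3x3 camera intrinsic matrix (delta)
    """
    cam = params[:6 * n_imgs].reshape(n_imgs, 6)       # gamma_t per image
    mrk = params[6 * n_imgs:].reshape(n_markers, 6)    # gamma_i per marker
    res = []
    for (t, i), uv in obs.items():
        T_wc = pose_to_T(cam[t])                       # world -> camera
        T_mw = pose_to_T(mrk[i])                       # marker -> world
        P = (T_wc @ T_mw @ np.c_[corners_local, np.ones(4)].T)[:3]
        proj = (K_int @ P / P[2]).T[:, :2]             # pinhole projection
        res.append((proj - uv).ravel())
    return np.concatenate(res)

# Levenberg-Marquardt refinement of the initial map (x0 stacks all poses):
# sol = least_squares(residuals, x0, method="lm",
#                     args=(obs, corners_local, K_int, n_imgs, n_markers))
```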
Preferably, the method of constructing the absolute position model of the robot end comprises:
establishing the coordinate transformation between the ArUco image coordinate system and the camera coordinate system from the fully covering ArUco map and the ArUco marker images captured by the monocular vision system at different robot poses;
and calculating the relative pose between the robot end coordinate system and the camera coordinate system with a hand-eye calibration model, thereby determining the absolute position model of the robot end.
Preferably, a pose error model is established and the error matrix is processed in Python to obtain the robot kinematic parameter errors.
The advantages of the invention are as follows.
The invention adopts a monocular vision system and an ArUco marker map, which greatly reduces the cost of high-precision calibration and simplifies the calibration process, making the system quicker and easier to install and operate. This makes the invention especially suitable for production environments that require frequent calibration or adjustment, saving users considerable time and cost. The invention effectively overcomes the camera field-of-view limitation of traditional vision calibration methods and achieves higher-precision robot positioning over a larger workspace. This improves both production efficiency and control quality and is particularly suited to applications with stringent positioning accuracy requirements. The calibration method applies to various industrial robot models and brands and has good universality. The flexible design of the ArUco marker map allows the calibration system to be adjusted to actual demands and extends well. The added information and redundancy also keep calibration accuracy high even when some markers are occluded or damaged, enhancing the system's robustness in complex environments.
Drawings
FIG. 1 is a flow chart of an embodiment of the invention;
FIG. 2 is a schematic diagram of the robot kinematic model of the invention;
FIG. 3 is a schematic diagram of ArUco map creation in the invention;
FIG. 4 is a schematic diagram of the robot kinematic calibration platform of the invention;
FIG. 5 is a schematic diagram of the zoned verification of robot positioning accuracy in the invention;
FIG. 6 shows the robot positioning accuracy before and after calibration in an embodiment of the invention.
Detailed Description
The following clearly and completely describes the embodiments of the invention with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
The vision-based industrial robot kinematic calibration method in a large workspace of this embodiment comprises the following steps:
Step 1: laying out ArUco markers in the workspace according to the industrial robot configuration and optimizing the layout;
Step 2: capturing ArUco marker images with a monocular vision system along a set shooting path, and generating a fully covering ArUco map from the marker images using image stitching;
Step 3: capturing ArUco marker images with the monocular vision system at different robot poses, identifying the robot end position against the fully covering ArUco map, and constructing an absolute position model of the robot end;
Step 4: calculating the nominal end pose from the robot joint angles and the nominal kinematic parameter values, obtaining the measured end pose from the absolute position model of the robot end, and forming the error matrix between the nominal and measured end poses;
Step 5: establishing a pose error model and obtaining the robot kinematic parameter errors from the error matrix.
By adopting a monocular vision system and an ArUco marker map, this embodiment greatly reduces the cost of high-precision calibration and simplifies the calibration process, making the system quicker and easier to install and operate. This makes it especially suitable for production environments that require frequent calibration or adjustment, saving users considerable time and cost.
In implementation, this embodiment determines the layout of the field devices (industrial robot, worktable, processing equipment, etc.) and selects a suitable industrial camera according to the working distance and measurement accuracy. The mounting position and angle of the camera at the robot end are carefully designed to optimize image capture quality;
ArUco markers with unique IDs are placed at suitable positions within the robot workspace, and image stitching is used to generate an ArUco map of the full workspace (the calibration reference). During shooting, at least one complete ArUco marker must lie entirely within the camera field of view;
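A minimal detection sketch in Python follows, assuming the OpenCV >= 4.7 aruco module; the dictionary choice and file name are illustrative assumptions:

```python
import cv2

# Detect ArUco markers in one captured frame (OpenCV >= 4.7 aruco API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("pose_001.png")                 # hypothetical image file
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, rejected = detector.detectMarkers(gray)

# Keep the frame only if at least one marker is fully inside the view,
# as required above for reliable stitching and pose estimation.
if ids is not None and len(ids) >= 1:
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
```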
The robot poses are selected to cover the robot's whole working range as far as possible, taking into account the robot's motion characteristics and the capture capability of the vision system. The robot moves through the predetermined pose points; at each pose the six joint angles of the industrial robot are acquired while the industrial camera mounted on the robot end captures an image containing ArUco markers. The image data and joint angle data of every pose point are recorded and stored for subsequent analysis and processing;
The ArUco marker layout in step 1 is specifically as follows:
First, the layout of the ArUco markers is optimized within the industrial robot's workspace according to the robot's specific configuration and job requirements. In this embodiment, the layout follows these principles:
(1) Marker placement uniformity: by rationally planning marker positions and spacing, the markers cover the entire workspace with similar marker densities at different locations. This is achieved by optimizing the uniformity term U(D) = 1/(1 + σ²(D)) of the objective function, where D represents the set of marker densities at the checkpoints in the workspace and σ²(D) is the variance of D;
(2) ArUco marker visibility: the visibility parameters and the camera field of view are taken into account when selecting marker placement locations, avoiding placing ArUco markers in areas occluded by the robot or other objects. The total visibility V(T) of the ArUco markers, as defined above, is calculated to ensure that the camera can accurately monitor and locate each marker;
(3) Robot pose variation: to enlarge the robot's pose range, ArUco markers are placed at various positions, including non-planar ones. This not only extends the pose range but also increases the robustness and stability of the calibration, making the method particularly suitable for complex workspaces and multi-pose operation.
Considering marker visibility, uniformity, and robot positioning accuracy together, the optimization objective function is defined as:
F(O) = α·V(T) + β·U(D) + γ·P(R) − δ·C(S)
where F(O) is the optimization objective and O represents the marker arrangement scheme; α is the weight of V(T), β the weight of U(D), and γ the weight of P(R). P(R) represents the average positioning accuracy over all robot positions in the motion range and is calculated as P(R) = (1/m)·Σ_{j=1}^{m} (1 − d_j/d_max), where m is the total number of positioning operations, d_j is the positioning error of the j-th operation, and d_max is the maximum allowed error distance. C(S) represents the arrangement cost, calculated as C(S) = Σ_k C_k, where C_k is the placement cost of the k-th marker; δ is the weight of C(S).
The ArUco marker arrangement scheme O is optimized with a genetic algorithm to maximize the objective function F(O). During optimization, the positions, number, and orientations of the markers are updated iteratively until an arrangement that maximizes F(O) is found.
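The sketch below shows one way such a genetic search could look in Python. The encoding (planar position plus yaw per marker), population size, and mutation scheme are illustrative assumptions, and evolving the marker count is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_layout_search(evaluate, n_markers, bounds, pop=40, gens=200,
                          mut_sigma=0.05, elite=4):
    """Maximize F(O) over marker layouts with a simple real-coded GA.

    evaluate : layout (n_markers x 3 genes: x, y, yaw) -> F(O) score
    bounds   : (low, high) arrays bounding each gene
    """
    low, high = bounds
    P = rng.uniform(low, high, size=(pop, n_markers, 3))
    for _ in range(gens):
        f = np.array([evaluate(ind) for ind in P])
        order = np.argsort(f)[::-1]                    # best individuals first
        parents = P[order[:pop // 2]]
        children = []
        while len(children) < pop - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random((n_markers, 3)) < 0.5    # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, mut_sigma, child.shape) * (high - low)
            children.append(np.clip(child, low, high))
        P = np.concatenate([P[order[:elite]], np.array(children)])
    f = np.array([evaluate(ind) for ind in P])
    return P[np.argmax(f)]                             # best layout found
```

Here `evaluate` would be the F(O) evaluation sketched earlier, fed by a simulation of marker visibility and robot positioning over the candidate layout.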
The ArUco map creation in step 2 is specifically as follows:
(1) The robot photographs the ArUco markers at different poses, ensuring that the same markers appear in adjacent images. At robot pose 1, when the camera detects marker 1 and marker 3 in the same image, the pose of the camera relative to marker 1 (T_c1) and relative to marker 3 (T_c3) can be established, from which the pose transformation matrix between marker 1 and marker 3 at pose 1 follows as T_13 = T_c1⁻¹·T_c3. At pose 2, when marker 3 and marker 4 are detected simultaneously, the transformation matrix T_34 between them can be determined in the same way. Since marker 3 is common to both poses, the transformation matrix between markers 1 and 4 can then be computed by chaining, T_14 = T_13·T_34. Likewise, through the common markers in the images captured at each location, the relative position of any two ArUco markers in space can be determined. By designating one ArUco marker coordinate system as the reference world coordinate system, the relative positions of all ArUco markers with respect to that reference can be determined.
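In NumPy, the chaining described above reduces to matrix products of 4x4 homogeneous transforms; the following sketch (names hypothetical) makes the convention explicit:

```python
import numpy as np

def marker_to_marker(T_c_i, T_c_j):
    """Relative transform between two markers seen in the same image.

    T_c_i, T_c_j : 4x4 poses of markers i and j in the camera frame.
    Returns T_i_j, the pose of marker j expressed in marker i's frame.
    """
    return np.linalg.inv(T_c_i) @ T_c_j

# Chaining through the common marker 3:
#   pose 1 sees markers 1 and 3 -> T_1_3 = marker_to_marker(T_c_1, T_c_3)
#   pose 2 sees markers 3 and 4 -> T_3_4 = marker_to_marker(T_c_3, T_c_4)
#   T_1_4 = T_1_3 @ T_3_4, so marker 4 is known in marker 1's (world) frame.
```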
(2) Owing to insufficient illumination, rapid camera motion, low resolution, poor focus, and similar factors, the pixel coordinates of the marker corner points become inaccurate and the computed pose matrices contain errors. In short, observation noise makes the simple projection relationship inaccurate and the accumulated error large. The precise location of each ArUco marker must therefore be determined in the presence of inaccurate camera observations. This is an optimization problem that minimizes the reprojection error:
min over (γ_t, γ_i) of Σ_t Σ_i Σ_j ‖ p̂_tij − ψ(δ, γ_t, γ_i·c_j) ‖²
where ψ(δ, γ_t, γ_i·c_j) is the projection of a corner point from three-dimensional space coordinates to pixel coordinates, δ is the camera intrinsic matrix, γ_t is the camera extrinsic pose for image t, γ_i is the transformation matrix from the marker coordinate system to the world coordinate system, c_j is the three-dimensional coordinate of a marker corner point, and p̂_tij is the observed pixel coordinate of that corner.
(3) Meanwhile, since more stitching steps accumulate more error, a closed-loop image acquisition route is set to mitigate the problem, the total error is optimized, and a closed-loop constraint is introduced:
Σ_{k=1}^{K} ‖ (x_k ⊕ z_k) ⊖ x_{k+1} ‖², with x_{K+1} = x_1,
where K is the total number of images in the closed loop, x_k is the pose of the k-th image, z_k is the pose transformation from the k-th image to the (k+1)-th image, and ⊕/⊖ are the composition and difference operators between poses.
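A sketch of evaluating this loop-closure term over SE(3) poses is given below, using SciPy's rotation utilities; the minimal 6-vector error used here (rotation vector plus translation) is one common choice, not necessarily the patent's exact operator:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def se3_log(T):
    """Minimal 6-vector error of a 4x4 transform (rotation vector + translation)."""
    return np.r_[R.from_matrix(T[:3, :3]).as_rotvec(), T[:3, 3]]

def loop_closure_residual(x, z):
    """x: list of K image poses (4x4); z: list of K relative transforms,
    z[k] mapping image k to image k+1 (index taken modulo K to close the loop).
    Returns the stacked errors of x_k composed with z_k against x_{k+1}."""
    K = len(x)
    res = []
    for k in range(K):
        pred = x[k] @ z[k]                        # x_k (+) z_k
        err = np.linalg.inv(x[(k + 1) % K]) @ pred
        res.append(se3_log(err))                  # zero when the loop is consistent
    return np.concatenate(res)
```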
In step 3, the robot captures ArUco marker images at different poses using the monocular vision system. The construction of the absolute position model of the robot end in step 3 is specifically as follows:
(1) The measurement system comprises multiple ArUco markers in space, an industrial robot, and an industrial camera mounted on the robot end. The camera captures the ArUco markers and estimates their poses through image recognition, establishing the coordinate transformation between the ArUco image coordinate system and the camera coordinate system. A PnP algorithm is used to determine the 6-degree-of-freedom pose of the camera relative to the calibration target. In the algorithm, the three-dimensional-to-two-dimensional correspondence of the four corner points of an ArUco marker is described by s·p̃ = δ·[R | t]·P̃, where P̃ is the homogeneous three-dimensional coordinate of a corner point, p̃ its homogeneous pixel coordinate, δ the camera intrinsic matrix, [R | t] the camera extrinsic pose, and s a scale factor.
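This corner-to-pose step maps directly onto OpenCV's solvePnP with the IPPE_SQUARE solver for planar square targets; in the sketch below the marker side length, intrinsics, and distortion vector are assumed inputs:

```python
import cv2
import numpy as np

def estimate_camera_pose(corners_px, marker_len, K_int, dist):
    """6-DoF camera pose from the four corners of one detected ArUco marker."""
    s = marker_len / 2.0
    obj = np.array([[-s,  s, 0], [ s,  s, 0],          # corner order matching
                    [ s, -s, 0], [-s, -s, 0]], float)  # cv2.aruco detections
    ok, rvec, tvec = cv2.solvePnP(obj, corners_px.reshape(4, 2), K_int, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    T = np.eye(4)                                      # marker -> camera transform
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T if ok else None
```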
(2) The relative pose between the KUKA KR500 robot end coordinate system and the Daheng industrial camera coordinate system is calculated with a hand-eye calibration model, thereby determining the absolute position model of the robot end. The hand-eye matrix of this embodiment is obtained from this calibration.
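OpenCV provides this hand-eye step directly via cv2.calibrateHandEye. The sketch below assumes lists of flange-to-base poses from the robot controller and marker-map-to-camera poses from PnP have already been collected; the variable names are hypothetical, and Tsai's method is just one of the available solvers:

```python
import cv2

def hand_eye(R_g2b, t_g2b, R_t2c, t_t2c):
    """Eye-in-hand calibration for a flange-mounted camera.

    R_g2b, t_g2b : flange->base rotations (3x3) and translations from robot FK
    R_t2c, t_t2c : marker-map->camera poses from PnP, one pair per station
    Returns (R, t) of the camera in the flange frame (the hand-eye transform).
    """
    return cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
```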
The error matrix between the nominal and measured poses in step 4 is obtained specifically as follows (a minimal sketch follows the two sub-steps):
(1) establishing a kinematic model from the theoretical DH parameters of the industrial robot and calculating the nominal end pose of the robot at each pose;
(2) obtaining the measured end pose from the absolute position model of the robot end, and forming the error matrix between the nominal and measured end poses.
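The sketch below illustrates sub-step (1) with standard DH parameters; the DH convention and parameter ordering are assumptions, and the error matrix of sub-step (2) is indicated in the trailing comment:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH link transform."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0,   sa,       ca,      d],
                     [0,   0,        0,       1]])

def nominal_pose(q, dh):
    """End pose from joint angles q and nominal DH rows (theta0, d, a, alpha)."""
    T = np.eye(4)
    for qi, (th0, d, a, al) in zip(q, dh):
        T = T @ dh_transform(th0 + qi, d, a, al)
    return T

# Error matrix between measured and nominal end poses:
#   dT = T_measured @ np.linalg.inv(T_nominal)   # close to identity if calibrated
```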
In step 5, a pose error model is established and the error matrix is processed in Python to obtain the robot kinematic parameter errors.
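As one hedged illustration of this processing, the sketch below identifies DH parameter corrections by linearizing the position error about the nominal parameters and solving the stacked system in a least-squares sense. It reuses nominal_pose from the previous sketch and considers position errors only, whereas the patent's pose error model may also include orientation:

```python
import numpy as np

def identify_dh_errors(qs, p_meas, dh_nom, eps=1e-6):
    """Least-squares DH parameter errors from measured end positions.

    qs     : list of joint-angle vectors; p_meas : matching measured positions (Nx3)
    dh_nom : nominal DH table (n_joints x 4)
    Solves dx = J dp over the stacked position residuals.
    """
    dh = np.asarray(dh_nom, float)
    res, J = [], []
    for q, pm in zip(qs, p_meas):
        p0 = nominal_pose(q, dh)[:3, 3]
        res.append(pm - p0)                      # position error at this pose
        rows = []
        for idx in np.ndindex(dh.shape):         # finite-difference Jacobian column
            dh_p = dh.copy()
            dh_p[idx] += eps
            rows.append((nominal_pose(q, dh_p)[:3, 3] - p0) / eps)
        J.append(np.stack(rows, axis=1))         # 3 x (4 * n_joints)
    dp, *_ = np.linalg.lstsq(np.vstack(J), np.concatenate(res), rcond=None)
    return dp.reshape(dh.shape)                  # per-parameter corrections
```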
To verify the effectiveness and superiority of the invention, the ArUco-map-based calibration method is compared with a conventional checkerboard-based calibration method. The industrial robot's workspace is divided into 9 regions with 10 verification points randomly placed in each; comparing the positioning accuracy of the two methods across regions and analyzing the error distribution further demonstrates the advantages of the method.
The identified robot kinematic parameter errors are shown in Table 1.
Table 1. Kinematic parameter error results
The processed data are analyzed to evaluate how much the method improves the robot's absolute positioning accuracy, with particular attention to the error distribution across regions and the comparison against the conventional method, thereby comprehensively verifying the effectiveness and practical value of the method.
In summary, the invention proposes and implements an efficient kinematic calibration method for industrial robots in large workspaces using ArUco maps and a monocular vision system. By combining wide deployment of ArUco markers, image stitching, accurate image processing and marker recognition, and absolute position model construction with kinematic parameter calibration, the invention significantly improves the positioning accuracy and operating efficiency of industrial robots over large working areas. The system is simple to operate, cost-effective, adaptable, and extensible, and can meet the requirements of various high-precision control tasks. The invention can provide strong technical support for a wide range of industrial applications and has broad prospects and notable economic and social value, particularly in aerospace, automobile manufacturing, and precision machining.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that the different dependent claims and the features described herein may be combined in ways other than as described in the original claims. It is also to be understood that features described in connection with separate embodiments may be used in other described embodiments.

Claims (9)

1. A vision-based industrial robot kinematic calibration method in a large workspace, characterized by comprising the following steps:
laying out ArUco markers in the workspace according to the industrial robot configuration;
capturing ArUco marker images with a monocular vision system along a set shooting path, and generating a fully covering ArUco map from the marker images using image stitching;
capturing ArUco marker images with the monocular vision system at different robot poses, identifying the robot end position against the fully covering ArUco map, and constructing an absolute position model of the robot end;
calculating the nominal end pose from the robot joint angles and the nominal kinematic parameter values, obtaining the measured end pose from the absolute position model of the robot end, and forming the error matrix between the nominal and measured end poses;
establishing a pose error model and obtaining the robot kinematic parameter errors from the error matrix;
wherein the method of capturing ArUco marker images with the monocular vision system along the set shooting path and generating a fully covering ArUco map by image stitching comprises:
shooting the ArUco marker images sequentially along the set closed-loop shooting path, ensuring that adjacent images share at least one common ArUco marker;
determining the position of each ArUco marker and minimizing the reprojection error of the ArUco marker corner points with the L-M algorithm to obtain the initial ArUco map;
the objective function of the reprojection error being
min over (γ_t, γ_i) of Σ_t Σ_i Σ_j ‖ p̂_tij − ψ(δ, γ_t, γ_i·c_j) ‖²
and the closed-loop constraint being
Σ_{k=1}^{K} ‖ (x_k ⊕ z_k) ⊖ x_{k+1} ‖², with x_{K+1} = x_1,
where ψ(δ, γ_t, γ_i·c_j) is the projection of a corner point from three-dimensional space coordinates to pixel coordinates, δ is the camera intrinsic matrix, γ_t is the camera extrinsic pose for image t, γ_i is the transformation matrix from the marker coordinate system to the world coordinate system, c_j is the three-dimensional coordinate of a marker corner point, p̂_tij is the observed pixel coordinate of that corner, K is the total number of images in the closed-loop shooting path, x_k is the pose of the k-th image, z_k is the pose transformation from the k-th image to the (k+1)-th image, and ⊕/⊖ are the composition and difference operators between poses.
2. The vision-based industrial robot kinematic calibration method in a large workspace of claim 1, wherein the method of laying out the ArUco markers within the workspace according to the industrial robot configuration comprises:
establishing an objective function F(O) for the arrangement scheme:
F(O) = α·V(T) + β·U(D) + γ·P(R) − δ·C(S)
where O represents the marker arrangement scheme;
V(T) represents the visibility measure of the ArUco markers, and α is the weight of V(T);
U(D) represents the uniformity measure of the ArUco marker arrangement, and β is the weight of U(D);
P(R) represents the average positioning accuracy over all robot positions in the motion range, and γ is the weight of P(R);
C(S) represents the arrangement cost, and δ is the weight of C(S);
and optimizing the ArUco marker arrangement scheme O with a genetic algorithm to maximize F(O), iteratively updating the positions, number, and orientations of the markers until an arrangement that maximizes F(O) is found.
3. The vision-based industrial robot kinematic calibration method in a large workspace of claim 2, wherein
U(D) = 1 / (1 + σ²(D))
where D represents the set of marker densities at the checkpoints in the workspace and σ²(D) is the variance of D.
4. The vision-based industrial robot kinematic calibration method in a large workspace of claim 2, wherein
V(T) = Σ_{i=1}^{n} v_i, with v_i = a_i / a_total
where v_i denotes the visibility of the i-th ArUco marker and depends on the angle and distance between the marker and the camera, a_i denotes the visible area of the i-th ArUco marker in the camera field of view, a_total denotes the total area of the camera field of view, and n denotes the total number of ArUco markers.
5. The vision-based industrial robot kinematic calibration method in a large workspace of claim 2, wherein
P(R) = (1/m) · Σ_{j=1}^{m} (1 − d_j / d_max)
where m is the total number of positioning operations, d_j is the positioning error of the j-th operation, and d_max is the maximum allowed error distance.
6. The vision-based industrial robot kinematic calibration method in a large workspace of claim 1, wherein the method of constructing the absolute position model of the robot end comprises:
establishing the coordinate transformation between the ArUco image coordinate system and the camera coordinate system from the fully covering ArUco map and the ArUco marker images captured by the monocular vision system at different robot poses;
and calculating the relative pose between the robot end coordinate system and the camera coordinate system with a hand-eye calibration model, thereby determining the absolute position model of the robot end.
7. The vision-based industrial robot kinematic calibration method in a large workspace of claim 1, wherein a pose error model is established and the error matrix is processed in Python to obtain the robot kinematic parameter errors.
8. A computer-readable storage device storing a computer program, characterized in that, when executed by a processor, the computer program implements the vision-based industrial robot kinematic calibration method in a large workspace of any one of claims 1 to 7.
9. A vision-based industrial robot kinematic calibration device in a large workspace, comprising a storage device, a processor, and a computer program stored in the storage device and executable on the processor, characterized in that the processor executes the computer program to implement the vision-based industrial robot kinematic calibration method in a large workspace of any one of claims 1 to 7.
CN202410623842.6A 2024-05-20 2024-05-20 Vision-based kinematic calibration method for industrial robots in large workspaces Active CN118322213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410623842.6A CN118322213B (en) 2024-05-20 2024-05-20 Vision-based kinematic calibration method for industrial robots in large workspaces

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410623842.6A CN118322213B (en) 2024-05-20 2024-05-20 Vision-based kinematic calibration method for industrial robots in large workspaces

Publications (2)

Publication Number Publication Date
CN118322213A CN118322213A (en) 2024-07-12
CN118322213B 2025-10-03

Family

ID=91772697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410623842.6A Active CN118322213B (en) 2024-05-20 2024-05-20 Vision-based kinematic calibration method for industrial robots in large workspaces

Country Status (1)

Country Link
CN (1) CN118322213B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120080067B (en) * 2025-03-27 2025-09-05 大唐华银电力股份有限公司金竹山火力发电分公司 A vision-based welding trajectory acquisition method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284224A (en) * 2021-04-20 2021-08-20 北京行动智能科技有限公司 Automatic mapping method and device based on simplex code, and positioning method and equipment
CN117830422A (en) * 2023-12-15 2024-04-05 之江实验室 Multi-camera external parameter calibration method and system based on multiple Aruco codes

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201936164U (en) * 2010-12-20 2011-08-17 齐齐哈尔二机床(集团)有限责任公司 Control device for thermal deformation compensation of numerical control machine tool ram
DE102017213638A1 (en) * 2017-08-07 2019-02-07 Siemens Aktiengesellschaft marker
CN111070199A (en) * 2018-10-18 2020-04-28 杭州海康威视数字技术股份有限公司 Hand-eye calibration assessment method and robot
GB2582139B (en) * 2019-03-11 2021-07-21 Arrival Ltd A method for determining positional error within a robotic cell environment
CN212218483U (en) * 2020-04-15 2020-12-25 昆山市工研院智能制造技术有限公司 Visual operation system of compound robot
CN116188264A (en) * 2023-02-10 2023-05-30 广州南方高速铁路测量技术有限公司 Point Cloud Stitching Method Based on Visual Markers

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284224A (en) * 2021-04-20 2021-08-20 北京行动智能科技有限公司 Automatic mapping method and device based on simplex code, and positioning method and equipment
CN117830422A (en) * 2023-12-15 2024-04-05 之江实验室 Multi-camera external parameter calibration method and system based on multiple Aruco codes

Also Published As

Publication number Publication date
CN118322213A (en) 2024-07-12

Similar Documents

Publication Publication Date Title
CN111127568B (en) Camera pose calibration method based on spatial point location information
US8095237B2 (en) Method and apparatus for single image 3D vision guided robotics
US6816755B2 (en) Method and apparatus for single camera 3D vision guided robotics
CN109153125B (en) Method for orienting an industrial robot and the industrial robot
JP6855492B2 (en) Robot system, robot system control device, and robot system control method
CN114012731A (en) Hand-eye calibration method and device, computer equipment and storage medium
JP6855491B2 (en) Robot system, robot system control device, and robot system control method
CN109397282A (en) Method and system for machining robot arm and computer readable recording medium
CN107428009A (en) Method, the industrial robot system using this method and control system for industrial robot debugging
CN118322213B (en) Vision-based kinematic calibration method for industrial robots in large workspaces
CN114800574A (en) Robot automatic welding system and method based on double three-dimensional cameras
JP2023069253A (en) Robot teaching system
CN112958960A (en) Robot hand-eye calibration device based on optical target
CN111390910A (en) Manipulator target grabbing and positioning method, computer readable storage medium and manipulator
JP7112528B2 (en) Work coordinate creation device
CN117381788A (en) High-precision positioning and intelligent operation guiding method for composite robot
CN116398065B (en) Vision-based positioning method for automatic loading and unloading of drill rods by tunnel drilling robots in coal mines
CN118752507A (en) A high-precision hole-making method and device for a vision-guided robot
JPH0847881A (en) Robot remote control method
JP7660686B2 (en) ROBOT CONTROL DEVICE, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD
CN113858214B (en) Positioning method and control system for robot operation
JP7657936B2 (en) ROBOT CONTROL DEVICE, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD
Kana et al. Robot-sensor calibration for a 3D vision assisted drawing robot
CN114939867A (en) Calibration method and system for mechanical arm external irregular asymmetric tool based on stereoscopic vision
JP7583942B2 (en) ROBOT CONTROL DEVICE, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant