CN118411401A - Part inspection method, device, equipment and storage medium based on augmented reality

Info

Publication number
CN118411401A
Authority
CN
China
Prior art keywords
pose
model
real
virtual
inspected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410630262.XA
Other languages
Chinese (zh)
Inventor
陈峥廷
黄崇权
张文权
樊娜娜
刘爱明
唐杨
李哿
邬帆
安建龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Aircraft Industrial Group Co Ltd
Original Assignee
Chengdu Aircraft Industrial Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Aircraft Industrial Group Co Ltd filed Critical Chengdu Aircraft Industrial Group Co Ltd
Priority to CN202410630262.XA priority Critical patent/CN118411401A/en
Publication of CN118411401A publication Critical patent/CN118411401A/en
Pending legal-status Critical Current


Classifications

    • G06T 7/60 Analysis of geometric attributes (image analysis)
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods (neural networks)
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/006 Mixed reality (manipulating 3D models or images for computer graphics)
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of part inspection, and in particular to a part inspection method, device, equipment and storage medium based on augmented reality. The method of the application comprises the steps of: constructing a three-dimensional model of the part to be inspected, extracting the contour line frame of the three-dimensional model, and marking dimension parameters on the three-dimensional model to generate a virtual part model; acquiring an image of the real part in real time and obtaining the pose of the real part from the image in real time; adjusting the pose of the virtual part model based on the pose of the real part so that the virtual part model has the same pose as the real part; and, based on augmented reality technology, superimposing the virtual part model on the real part to generate an auxiliary inspection view. The virtual contour line frame and dimension information are accurately superimposed on and fused with the real part, the fused virtual-real view is transmitted to the operator, and two-handed operation and information acquisition proceed simultaneously, improving inspection efficiency.

Description

Part inspection method, device, equipment and storage medium based on augmented reality
Technical Field
The application relates to the technical field of part inspection, in particular to a part inspection method, device, equipment and storage medium based on augmented reality.
Background
Inspection and detection is an important step after a product has been produced and machined; its significance lies in ensuring that the dimensions of numerically controlled (NC) machined parts meet design requirements. The inspection plan is the basis that guides operators in self-inspection and inspectors in product acceptance; it is the standard for judging whether a product meets design requirements, and it serves to verify correctness and prevent errors during product manufacture.
At present, an inspection plan is prepared by converting the three-dimensional model of a product into a two-dimensional engineering drawing on which the dimensions must be marked one by one, so preparation efficiency is low. In addition, before using the inspection plan, inspectors must spend time digesting and understanding the drawing, and during inspection the drawing and the measured part often have to be checked feature by feature, forcing interruptions of the work, so inspection efficiency is low.
Disclosure of Invention
The application mainly aims to provide a part inspection method, device, equipment and storage medium based on augmented reality, with the aim of solving the low efficiency of dimensional inspection of existing NC machined parts.
In order to achieve the above object, the present application provides a part inspection method based on augmented reality, comprising the steps of:
Constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model, and marking the three-dimensional model with dimension parameters to generate a virtual part model;
performing image recognition on the obtained real part image of the part to be inspected to obtain the real part pose of the part to be inspected;
Based on the real part pose, adjusting the pose of the virtual part model to enable the virtual part model to have the same pose as that in the real part image;
and based on an augmented reality technology, superposing the virtual part model on the part to be inspected to generate an auxiliary inspection picture.
Optionally, the image recognition is performed on the obtained real part image of the part to be inspected to obtain the real part pose of the part to be inspected, which comprises the following steps:
acquiring a real part image of the part to be inspected through image acquisition equipment, wherein the image acquisition equipment comprises an inertial measurement unit;
calculating the initial pose relation of the part to be inspected relative to the image acquisition equipment based on the real part image;
And calculating to obtain first pose data of the part to be inspected based on the initial pose relation, and obtaining second pose data of the part to be inspected based on the inertial measurement unit.
Optionally, after the first pose data and the second pose data are obtained, the first pose data and the second pose data are fused to obtain fused pose data.
Optionally, the image acquisition device is a head-mounted display with a camera; the adjusting the pose of the virtual part model based on the pose of the real part to enable the virtual part model to have the same pose as the real part comprises the following steps:
establishing a camera coordinate system and a real part coordinate system, and obtaining a rotation matrix and a translation vector between the real part coordinate system and the camera coordinate system based on the real part pose;
establishing a virtual space coordinate system based on the three-dimensional model, and acting the values of the rotation matrix and the translation vector on the virtual space coordinate system to obtain a conversion relation from the three-dimensional model to a real space;
Based on the conversion relation from the three-dimensional model to the real space, the pose of the virtual part model is adjusted, so that the virtual part model has the same pose as that in the real part image.
Optionally, based on the augmented reality technology, the virtual part model is superimposed on the part to be inspected, and an auxiliary inspection picture is generated, which includes the steps of:
creating an augmented reality application program, and importing the three-dimensional model into the application program;
And establishing a plane projection coordinate system of the head-mounted display, and based on the rotation matrix and the translation vector, projecting the outline frame and the dimension parameter into the plane projection coordinate system, so that the outline frame and the dimension parameter are overlapped on the real part, and generating an augmented reality auxiliary inspection picture.
Optionally, the OnePose algorithm is used to calculate the initial pose relation of the real part relative to the image acquisition device.
Optionally, the step of constructing a three-dimensional model of the part to be inspected, extracting the outline frame of the three-dimensional model, and labeling the three-dimensional model with dimension parameters to generate a virtual part model includes the steps of:
constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model in CATIA three-dimensional modeling software, and defining a three-dimensional coordinate system of the contour line frame;
marking all dimension lines to be inspected on the three-dimensional model;
And combining the dimension line and the contour line frame into a whole to generate a virtual part model.
Based on the same inventive concept, the application also provides an augmented reality technology-assisted part inspection device, which comprises:
Model construction module: the method comprises the steps of constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model, marking size parameters on the three-dimensional model, and generating a virtual part model;
Pose tracking module: performing image recognition on the obtained real part image of the part to be inspected to obtain the real part pose of the part to be inspected;
Model matching module: based on the real part pose, adjusting the pose of the virtual part model to enable the virtual part model to have the same pose as that in the real part image;
and a virtual-real fusion module: and based on an augmented reality technology, superposing the virtual part model on the part to be inspected to generate an auxiliary inspection picture.
Based on the same inventive concept, the application further provides an electronic device comprising:
a memory for storing computer executable instructions or computer programs;
and the processor is used for realizing the part inspection method based on augmented reality when executing the computer executable instructions or the computer programs stored in the memory.
Based on the same inventive concept, the application also provides a computer readable storage medium storing computer executable instructions which when executed by a processor implement the above-mentioned augmented reality-based part inspection method.
Compared with the prior art, the application has the beneficial effects that:
According to the application, a virtual contour line frame and dimension parameters are generated from the three-dimensional model of the part to be inspected, the pose of the real part is accurately estimated in real time by a pose tracking algorithm, and the virtual contour line frame and dimension information are accurately superimposed on and fused with the real part, combining augmented reality technology with the inspection process of aviation NC machined parts. The fused virtual-real view is transmitted to the operator, who can see the part in the real scene and the virtual dimension information at the same time; two-handed operation and information acquisition proceed simultaneously, so inspection efficiency is improved. The intuitive dimension prompts also effectively reduce the operator's learning burden.
Drawings
FIG. 1 is a schematic flow chart of a part inspection method based on augmented reality according to an embodiment of the present application;
fig. 2 is a schematic diagram of a coordinate system transformation relationship according to an embodiment of the application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indicators in the embodiments of the present invention (such as up, down, left, right, front, rear, etc.) are merely used to explain the relative positional relationships, movements, etc. between the components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indication changes accordingly.
In the present invention, unless specifically stated and limited otherwise, the terms "connected," "affixed," and the like are to be construed broadly, and for example, "affixed" may be a fixed connection, a removable connection, or an integral body; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present invention, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the meaning of "and/or" as it appears throughout includes three parallel schemes, for example "A and/or B", including the A scheme, or the B scheme, or the scheme where A and B are satisfied simultaneously. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the combination of the technical solutions should be considered to be absent and not within the scope of protection claimed in the present invention.
Inspection and detection is an important step after aviation parts have been produced and machined; its significance lies in ensuring that the feature dimensions of NC machined parts meet design requirements. The current inspection plan is prepared by converting the three-dimensional model of a product into a two-dimensional engineering drawing on which the dimensions must be marked one by one, so preparation efficiency is low. In addition, before using the inspection plan, inspectors must spend time digesting and understanding the drawing, and during inspection the drawing and the measured part must be checked feature by feature, forcing interruptions of the work, so inspection efficiency is low. Conventional inspection plans have therefore, to a large extent, limited the production and delivery efficiency of aviation NC machined products.
In order to solve the problems of low efficiency of inspection planning, large cognitive burden of inspection operators and low inspection efficiency, a first embodiment of the application provides a part inspection method based on augmented reality, and referring to fig. 1, the method comprises the following steps:
S10, constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model, and marking the three-dimensional model with dimension parameters to generate a virtual part model;
S20, performing image recognition on the obtained real part image of the part to be inspected to obtain the real part pose of the part to be inspected;
S30, adjusting the pose of the virtual part model based on the pose of the real part, so that the virtual part model has the same pose as that in the real part image;
s40, based on the augmented reality technology, the virtual part model is overlapped on the part to be inspected, and an auxiliary inspection picture is generated.
According to the method, the contour line frame and dimension parameters of the virtual part are superimposed on the real part to be inspected by means of Augmented Reality (AR) technology, and the combined virtual-real view can be transmitted to the operator through a display device (AR glasses). Wearing the AR glasses, the operator can intuitively obtain the dimension parameters of the part through the glasses, avoiding interruptions of the work and improving the inspection and detection efficiency of aviation NC machined parts.
As an alternative embodiment, based on the above embodiment, the step S10 includes:
Constructing a three-dimensional model of the part to be inspected based on CATIA three-dimensional modeling software, extracting the contour line frame of the three-dimensional model in CATIA using the 'Wireframe and Surfaces' tool, and defining a three-dimensional coordinate system for the contour line frame;
marking all dimension lines to be inspected on the three-dimensional model, and combining the dimension lines and the three-dimensional contour line frame into a whole to generate the virtual part model.
For a conventional part a three-dimensional model is already available; it is imported into the CATIA three-dimensional modeling software without re-modeling.
As an example, the specific operation steps are as follows:
s11: in CATIA, a three-dimensional model of a part to be detected is opened, and information such as the relation, shape, size and the like among the parts is combed.
S12: the "WIREFRAME AND Surfaces" module in the CATIA software is entered and the "wire frame profile" function in the toolbar is used for contour extraction. And moving the mouse onto the surface of the target part, automatically extracting the geometric shape of the three-dimensional model according to the outline, generating an outline frame, and setting the outline frame to be green so as to clearly show the fusion effect of the virtual part model and the real part in the subsequent visualization. And adjusting and optimizing the automatically generated outline frames.
S13: and a 'PART DESIGN' module in CATIA software is entered, the dimension is added in the three-dimensional model, and important dimension parameters such as the length, width, radius of a round corner, wall thickness and the like of the part to be detected are marked for subsequent visual display. In other embodiments, the feature automatic identification plug-in is utilized to automatically identify and label important features in the part.
S14: after the three-dimensional model outline frame is established and marked in size, the outline frame is stored in a computer (augmented reality equipment), and file formats can be STL, CATPart and the like.
As an alternative embodiment, based on the above embodiment, the step S20 includes:
S21, a real part image of the part to be inspected is acquired by an image acquisition device, and based on the real part image the initial pose relation of the real part coordinate system relative to the image acquisition device is estimated using the OnePose algorithm;
As an alternative embodiment, the image acquisition device is a head-mounted display on which a camera is arranged; as an example, a HoloLens head-mounted display is used.
Software running in the head-mounted display captures and processes the real scene.
After the image of the real part is obtained, it is processed by the deep-learning-based OnePose algorithm to obtain the initial pose of the real part.
The OnePose algorithm is a mature object pose estimation algorithm; it obtains the pose information of the part through data collection and annotation, sparse point cloud reconstruction of the object, selection of 2D feature points, matching of 2D to 3D feature points, and solution of the PnP problem.
To estimate the pose of a part, a deep learning framework is installed on a computer, the 2D-3D feature point matching network is trained on that computer, the trained model is lightweighted, and the lightweight neural network is deployed on the head-mounted display. Finally, the images acquired in real time by the head-mounted display are input into the OnePose feature point matching network, the network performs feature point matching on the images, and the initial pose information of the real part is solved in combination with the PnP algorithm.
As an example, processing the acquired image with the deep-learning-based OnePose algorithm mainly comprises the following steps:
S21.1, data collection and annotation: the real part is placed on a platform and kept static throughout, and a mobile device is used to video-scan the target part from all directions. For each real part, a pose definition is made in advance. This process collects a scan video with RGB frames {I_i} and camera poses {ζ_i}, together with a 3D object bounding box B annotating the object.
S21.2, sparse point cloud reconstruction of the object: after the RGB scan of the object, in the mapping stage a set of RGB images {I_i} extracted from the scan video is used to construct an SfM (structure from motion) model composed of sparse feature points, reconstructing the sparse point cloud {P_j} of the object.
S21.3, feature point selection: during SfM, correspondence graphs {G_j} are constructed, representing the correspondences between 2D and 3D feature points in the SfM map. A 3D feature point of the object appears as different 2D feature points under different viewing angles, so one 3D point corresponds to multiple 2D points. The strength of OnePose is that feature points in the SfM model can be accurately matched to feature points in the image to be measured.
S21.4, training the GATs graph attention network: first, the 2D feature descriptors in the correspondence graphs {G_j} are aggregated into 3D feature descriptors by attention aggregation layers; these are then matched against the 2D feature points of the image whose pose is to be estimated, generating the 2D-3D matching prediction M_3D. The GATs graph attention network is trained through self-attention and cross-attention mechanisms.
S21.5, solving the PnP problem through M_3D: first the geometric relationship is calculated from the probability data of the 3D points in M_3D and their matched 2D points, i.e., the covariance matrix of the sparse point cloud and its relationship to the 2D coordinates are calculated and the essential matrix E is solved; the singular value decomposition (SVD) algorithm then yields the initial pose of the real part coordinate system relative to the camera coordinate system, namely the rotation matrix R and the translation vector T.
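As a minimal sketch of the final pose-solving step (a simplified stand-in, not the application's exact solver), the following Python snippet recovers R and T from matched 3D-2D points with OpenCV's solvePnP; the camera intrinsic matrix K and the correspondences are assumed to be given:

```python
import cv2
import numpy as np

def solve_initial_pose(pts_3d, pts_2d, K):
    """Estimate the pose of the real part relative to the camera
    from 2D-3D matches (as produced by the feature matching network)."""
    dist = np.zeros(5)  # assume an undistorted (calibrated) image
    ok, rvec, tvec = cv2.solvePnP(
        pts_3d.astype(np.float64),   # Nx3 points in the part coordinate system
        pts_2d.astype(np.float64),   # Nx2 pixel coordinates
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> 3x3 rotation matrix
    return R, tvec                   # (x', y', z')^T = R (x, y, z)^T + T
```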
S22, based on the initial pose relation, acquiring first pose data of the part to be inspected in real time by adopting an optical flow algorithm, and acquiring second pose data of the part to be inspected in real time by adopting an inertial measurement unit of image acquisition equipment.
The specific method comprises the following steps:
Continuous image frames are acquired by the camera of the head-mounted display, preprocessing such as denoising is applied to the images, feature points such as corner points and edge points are extracted with a feature point extraction algorithm, and the KLT sparse optical flow tracking algorithm solves for the pose change of the real part under the gray-level constancy assumption, i.e., the first pose data of the part to be inspected is obtained in real time.
Meanwhile, the motion of the display device is sensed and recorded in real time by the inertial measurement unit built into the head-mounted display: the accelerometer and gyroscope measure acceleration and angular velocity, and integrating the measurements gives the position change of the head-mounted display. Combining this with the position change of the real part obtained from the images yields the pose change of the head-mounted display relative to the real part, i.e., the second pose data of the part to be inspected in real time. Specifically, there are two cases when calculating the second pose data: if the part itself does not move during inspection, the second pose data can be obtained by combining the initial pose information with the integration result of the inertial measurement unit; if the part moves during inspection, images of the part must be acquired in real time and the second pose data calculated by combining the real-time images with the integration result of the inertial measurement unit.
As an optional implementation, the KLT sparse optical flow algorithm is used to track the real part and obtain its pose information in real time. The principle is as follows:

At time t, let (x, y) be the coordinates of a point in the image, with gray value I(x, y, t). As the viewing angle changes, at time t+dt the point (x, y) moves to (x+dx, y+dy), with gray value I(x+dx, y+dy, t+dt); dx and dy are the pixel displacements in the x and y directions, and dt is the time interval between the two frames. According to the gray-level constancy assumption of the optical flow method, within a local region the gray value of a pixel remains unchanged between consecutive frames, i.e.:

I(x+dx, y+dy, t+dt) = I(x, y, t)

A first-order Taylor expansion of the gray value at time t+dt gives:

I(x+dx, y+dy, t+dt) ≈ I(x, y, t) + (∂I/∂x)·dx + (∂I/∂y)·dy + (∂I/∂t)·dt

Since the gray value is unchanged, it follows that:

(∂I/∂x)·dx + (∂I/∂y)·dy + (∂I/∂t)·dt = 0

Dividing by dt yields:

(∂I/∂x)·u + (∂I/∂y)·v = -(∂I/∂t)

where ∂I/∂x and ∂I/∂y are the image gradients in the x and y directions, and u = dx/dt and v = dy/dt are the motion speeds in the x and y directions. By combining the equations of multiple feature points, (u, v) can be solved, realizing the tracking of the part to be inspected.
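A minimal Python sketch of this KLT sparse optical flow tracking with OpenCV follows; the camera index, corner count and window size are illustrative assumptions:

```python
import cv2

cap = cv2.VideoCapture(0)                 # head-mounted camera (index is an assumption)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Extract corner/edge feature points on the part (parameters are illustrative).
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal KLT sparse optical flow under the gray-level constancy assumption.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    # Displacements of the tracked points; these drive the real-time pose
    # update (the first pose data of the part to be inspected).
    flow = next_pts[good] - prev_pts[good]
    prev_gray, prev_pts = gray, next_pts[good].reshape(-1, 1, 2)
```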
As an alternative embodiment, in step S22 the inertial measurement unit (Inertial Measurement Unit, IMU) is used to sense and record the motion of the head-mounted display device in real time and to calculate the corresponding pose change information, comprising the following steps:
S22.1, data preprocessing: the noise and bias contained in the IMU data must be preprocessed. The preprocessing steps include zero-bias calibration, sensor axis alignment, unit calibration, etc. Through preprocessing, calibrated and normalized linear acceleration and angular velocity measurements are obtained.
S22.2, integrating for velocity and position: the velocity and position of the head-mounted display are estimated by double integration of the acceleration measurements. At time t, denoting the velocity of the head-mounted display by v(t) and its position by p(t), the integration uses the following formulas:

v(t+Δt) = v(t) + (a - b_a)·Δt
p(t+Δt) = p(t) + v(t)·Δt + (1/2)·(a - b_a)·Δt²

where Δt is the time interval, a is the measured acceleration, and b_a is the zero-bias calibration value of the accelerometer.
S22.3, attitude estimation: by integrating the angular velocity measurements, the attitude of the device can be estimated. At time t, representing the rotation of the device as a quaternion q(t) (or a rotation matrix), the integration uses the following formula:

q(t+Δt) = q(t) ⊗ q{(ω - b_ω)·Δt}

where ω is the measured angular velocity, b_ω is the zero-bias calibration value of the gyroscope, q{·} maps a rotation vector to a quaternion, and ⊗ denotes quaternion multiplication.
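A compact Python sketch of these two integration steps is given below; the quaternion update uses the small-angle rotation-vector form, and the bias values and data source are assumptions (gravity compensation is omitted for brevity):

```python
import numpy as np

def rotvec_to_quat(rv):
    """Convert a rotation vector (axis * angle) to a unit quaternion (w, x, y, z)."""
    angle = np.linalg.norm(rv)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = rv / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_mul(q1, q2):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def imu_step(p, v, q, accel, gyro, dt, b_a, b_w):
    """One IMU integration step (S22.2 and S22.3)."""
    a = accel - b_a                       # bias-corrected acceleration
    p = p + v * dt + 0.5 * a * dt**2      # position: double integration
    v = v + a * dt                        # velocity: single integration
    dq = rotvec_to_quat((gyro - b_w) * dt)
    q = quat_mul(q, dq)                   # attitude update by quaternion product
    return p, v, q / np.linalg.norm(q)
```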
Either the first pose data obtained by the machine-vision tracking method or the second pose data obtained by the hardware tracking method can serve as the basis for tracking the real part. As a more preferable embodiment, however, the application fuses the first pose data and the second pose data, achieving efficient and accurate estimation of the real part pose. The specific steps are as follows:
the first pose data and the second pose data are fused by using a Kalman filtering algorithm to obtain fused pose data, and the method comprises the following steps:
The filtering system is initialized, defining variables and observations of the system, and noise and uncertainty of the system. Defining a machine vision observation, a hardware-based tracking observation, a state transition matrix, an observation model matrix, and a covariance matrix of state transition process noise and observation noise.
And weighting the first pose data and the second pose data according to the state and noise of the system, and predicting and updating the pose estimation result to obtain a pose transformation matrix of the real part coordinate system relative to the camera coordinate system.
The state variables of the system are defined using position and velocity, written x_k, where k denotes the time step. The state transition equation of the system is defined as:

x_k = F_k·x_{k-1} + w_k

where F_k is the state transition matrix and w_k is the state transition process noise.
Observation variables for the two different sensors are defined. Let the machine-vision observation be Z_{vision,k} and the hardware-based tracking observation be Z_{hardware,k}. These two observations can be expressed as:

Z_{vision,k} = H_k·x_k + v_{vision,k}
Z_{hardware,k} = H_k·x_k + v_{hardware,k}
A state transition matrix F_k, an observation model matrix H_k, and covariance matrices Q_k and R_k of the process noise w_k and the observation noise v_k are defined:
the state transition matrix F_k describes how the state transitions from one time step to the next;
the observation model matrix H_k projects the state into the observation space;
the process noise w_k describes the uncertainty and noise of the state transition process;
the observation noise v_k describes the uncertainty and noise of the observations.
The initial state of the filter is initialized, including the initial state estimate x̂_0 and the initial state covariance P_0. In each time step k, the Kalman filter algorithm is executed as follows:

Prediction step (predict state and covariance):

x̂_{k|k-1} = F_k·x̂_{k-1|k-1}
P_{k|k-1} = F_k·P_{k-1|k-1}·F_k^T + Q_k

Update step (update state and covariance):

K_k = P_{k|k-1}·H_k^T·(H_k·P_{k|k-1}·H_k^T + R_k)^(-1)
x̂_{k|k} = x̂_{k|k-1} + K_k·(z_k - H_k·x̂_{k|k-1})
P_{k|k} = (I - K_k·H_k)·P_{k|k-1}

where K_k is the Kalman gain, x̂_{k|k} is the updated state estimate, and P_{k|k} is the updated state covariance.
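The following Python sketch illustrates one predict/update cycle of such a filter fusing the two pose observations; the matrix dimensions and noise levels are illustrative assumptions:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman filter cycle: predict with the motion model, then
    correct with an observation z (vision-based or IMU-based pose)."""
    # Prediction step
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update step
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example usage: fuse the vision observation, then the hardware (IMU-based)
# observation, each with its own noise covariance (values are assumptions).
# x, P = kalman_step(x, P, z_vision, F, H, Q, R_vision)
# x, P = kalman_step(x, P, z_hardware, F, H, Q, R_hardware)
```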
As an alternative embodiment, the step S30 includes:
establishing a camera coordinate system, a real part coordinate system and a plane projection coordinate system of the head-mounted display;
obtaining the conversion relation among the camera coordinate system, the display plane coordinate system and the real part coordinate system;
Specifically, the conversion relationships among the camera coordinate system, the plane projection coordinate system of the display and the real part coordinate system are shown in fig. 2. O-XYZ is the real part coordinate system; O'-X'Y'Z' is the viewing space coordinate system, i.e., the camera coordinate system of the head-mounted display; O_v-X_vY_vZ_v is the virtual space coordinate system, used to geometrically describe the added virtual part model; and o-uv is the two-dimensional plane projection coordinate system of the projected image in the head-mounted display.
The transformation between the real part coordinate system O-XYZ and the camera coordinate system O'-X'Y'Z' is:

(x', y', z')^T = R·(x, y, z)^T + T

where the rotation matrix R and the translation vector T are respectively:

R = [ r_xx  r_xy  r_xz
      r_yx  r_yy  r_yz
      r_zx  r_zy  r_zz ]

T = (t_x, t_y, t_z)^T

Written out componentwise:

x' = r_xx·x + r_xy·y + r_xz·z + t_x
y' = r_yx·x + r_yy·y + r_yz·z + t_y
z' = r_zx·x + r_zy·y + r_zz·z + t_z

where the r terms are the rotation components and the t terms are the translation components.
Solving for the rotation matrix R and the translation vector T gives the pose of the real part relative to the camera, and the real part coordinate system can then be aligned with the camera coordinate system.
The rotation matrix R and the translation vector T are obtained from the pose tracking result of the real part in the step S20.
Since the desired effect is that the added virtual part model overlaps and fuses with the real part in space, the virtual part model must be transformed into the real space coordinate system with the same pose as the real part. After the transformation R, T between the real part coordinate system O-XYZ and the camera coordinate system O'-X'Y'Z' has been solved as above, applying the values of R and T to the virtual space coordinate system O_v-X_vY_vZ_v gives the transformation of the virtual part model into real space. The virtual part model is then projected into the plane projection coordinate system o-uv of the head-mounted display, completing the virtual-real fusion of the augmented reality system.
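A minimal Python sketch of this virtual-to-real registration and projection is given below; the pinhole intrinsic matrix K stands in for the head-mounted display's projection model, which is an assumption for illustration:

```python
import numpy as np

def project_virtual_model(model_pts, R, T, K):
    """Place the virtual part model at the real part's pose and project it
    into the plane projection coordinate system o-uv of the display.

    model_pts: Nx3 vertices of the contour wireframe in the virtual space
               coordinate system O_v-X_vY_vZ_v.
    R, T:      pose of the real part in the camera frame (from tracking).
    K:         3x3 camera intrinsic matrix (assumed pinhole model).
    """
    cam_pts = (R @ model_pts.T).T + T.reshape(1, 3)   # into camera coordinates
    uvw = (K @ cam_pts.T).T                           # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]                     # normalize by depth
    return uv                                         # Nx2 pixel coordinates
```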
As an alternative embodiment, the step S40 includes:
S41, an augmented reality application is created using Unity3D as the development tool in combination with the augmented reality platform Vuforia, and the three-dimensional model obtained in step S10 is imported into the augmented reality application. The position and size of the three-dimensional model are adjusted, and the coordinate system and viewing angle are set.
S42, the augmented reality application is programmed: the program must capture the real scene with the camera and, using the pose transformation matrix obtained in step S30, project virtual objects such as the contour line frame and dimension parameters into the plane coordinate system of the head-mounted display.
S43, the user adjusts the position and angle of the head-mounted display for calibration, the augmented reality application is deployed to the head-mounted display, and the augmented reality view is finally generated.
The operator can experience the effect of virtual-real fusion by wearing the head-mounted equipment, and can see the information such as the outline frame, the size parameter and the like through the augmented reality picture while seeing the real part, so that the structure and the characteristics of the part are better understood, and each size parameter is intuitively known.
Based on the same inventive concept, the application also provides an augmented reality technology-assisted part inspection device, which comprises:
Model construction module: the method comprises the steps of constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model, marking size parameters on the three-dimensional model, and generating a virtual part model;
Pose tracking module: performing image recognition on the obtained real part image of the part to be inspected to obtain the real part pose of the part to be inspected;
Model matching module: based on the pose of the real part, adjusting the pose of the virtual part model to enable the pose of the virtual part model to be the same as the pose in the real part image;
and a virtual-real fusion module: based on the augmented reality technology, the outline frame and the dimension parameters of the virtual part model are overlapped on the part to be inspected through the display, and an auxiliary inspection picture is generated.
In specific implementation, part contour and size information for auxiliary detection are generated in advance through a model building module. The pose tracking module transmits the image information of the real part in the visual field of the operator to the virtual-real fusion module of the head-mounted equipment, processes the corresponding image information by using the deep learning model, estimates the pose of the part in the real scene, and transmits the coordinate system conversion relationship to the virtual-real fusion module. And the virtual-real fusion module presents the contour and the size information of the part in the visual field of the inspector in a virtual mode, so that the visual display of the size information is realized. When the visual field of the inspector moves, the pose of the object is solved again by a method combining machine vision tracking and hardware tracking, so that the pose and rendering condition of the virtual model are changed, and the virtual information can follow the visual field change in real time.
Based on the same inventive concept, the application also provides an electronic device, which is characterized in that the electronic device comprises:
a memory for storing computer executable instructions or computer programs;
and the processor is used for realizing the part inspection method based on augmented reality when executing the computer executable instructions or the computer programs stored in the memory.
Based on the same inventive concept, the application also provides a computer readable storage medium storing computer executable instructions, which is characterized in that the computer executable instructions realize the above-mentioned part inspection method based on augmented reality when being executed by a processor.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium may even be paper or another suitable medium upon which the program is printed, as the program may be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. The part inspection method based on augmented reality is characterized by comprising the following steps of:
Constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model, and marking the three-dimensional model with dimension parameters to generate a virtual part model;
performing image recognition on the obtained real part image of the part to be inspected to obtain the real part pose of the part to be inspected;
based on the pose of the real part, adjusting the pose of the virtual part model to enable the virtual part model to have the same pose as that in the real part image;
and based on an augmented reality technology, superposing the virtual part model on the part to be inspected to generate an auxiliary inspection picture.
2. The augmented reality-based part inspection method according to claim 1, wherein the image recognition of the acquired real part image of the part to be inspected to obtain the real part pose of the part to be inspected comprises the steps of:
acquiring a real part image of the part to be inspected through image acquisition equipment, wherein the image acquisition equipment comprises an inertial measurement unit;
calculating the initial pose relation of the part to be inspected relative to the image acquisition equipment based on the real part image;
And calculating to obtain first pose data of the part to be inspected based on the initial pose relation, and obtaining second pose data of the part to be inspected based on the inertial measurement unit.
3. The augmented reality-based part inspection method of claim 2, wherein after the first pose data and the second pose data are obtained, the first pose data and the second pose data are fused to obtain fused pose data.
4. The augmented reality-based part inspection method of claim 2, wherein the image acquisition device is a head-mounted display with a camera;
The adjusting the pose of the virtual part model based on the pose of the real part to enable the virtual part model to have the same pose as that in the real part image comprises the following steps:
establishing a camera coordinate system and a real part coordinate system, and obtaining a rotation matrix and a translation vector between the real part coordinate system and the camera coordinate system based on the real part pose;
establishing a virtual space coordinate system based on the three-dimensional model, and acting the values of the rotation matrix and the translation vector on the virtual space coordinate system to obtain a conversion relation from the three-dimensional model to a real space;
Based on the conversion relation from the three-dimensional model to the real space, the pose of the virtual part model is adjusted, so that the virtual part model has the same pose as that in the real part image.
5. The augmented reality-based part inspection method of claim 4, wherein the superimposing the virtual part model on the part to be inspected based on the augmented reality technology generates an auxiliary inspection screen, comprising the steps of:
And establishing a plane projection coordinate system of the head-mounted display, and based on the rotation matrix and the translation vector, projecting the outline frame and the dimension parameter into the plane projection coordinate system, so that the outline frame and the dimension parameter are overlapped on the real part, and generating an augmented reality auxiliary inspection picture.
6. The augmented reality-based part inspection method of claim 2, wherein the OnePose algorithm is used to calculate the initial pose relation of the real part relative to the image acquisition device.
7. The augmented reality-based part inspection method according to claim 1, wherein the steps of constructing a three-dimensional model of a part to be inspected, extracting the three-dimensional model contour line frame, and labeling the three-dimensional model with dimensional parameters to generate a virtual part model, include the steps of:
constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model in CATIA three-dimensional modeling software, and defining a three-dimensional coordinate system of the contour line frame;
marking all dimension lines to be inspected on the three-dimensional model;
And combining the dimension line and the contour line frame into a whole to generate a virtual part model.
8. An augmented reality-assisted part inspection device, the device comprising:
model construction module: the method comprises the steps of constructing a three-dimensional model of a part to be inspected, extracting a contour line frame of the three-dimensional model, and marking size parameters on the three-dimensional model to generate a virtual part model;
Pose tracking module: the method comprises the steps of carrying out image recognition on an obtained real part image of the part to be inspected to obtain a real part pose of the part to be inspected;
Model matching module: based on the pose of the real part, adjusting the pose of the virtual part model to enable the virtual part model to have the same pose as that in the real part image;
and a virtual-real fusion module: and based on an augmented reality technology, superposing the virtual part model on the part to be inspected to generate an auxiliary inspection picture.
9. An electronic device, the electronic device comprising:
a memory for storing computer executable instructions or computer programs;
a processor for implementing the augmented reality-based part inspection method of any one of claims 1 to 6 when executing computer-executable instructions or computer programs stored in the memory.
10. A computer readable storage medium storing computer executable instructions which when executed by a processor implement the augmented reality based part inspection method of any one of claims 1 to 6.
CN202410630262.XA 2024-05-21 2024-05-21 Part inspection method, device, equipment and storage medium based on augmented reality Pending CN118411401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410630262.XA CN118411401A (en) 2024-05-21 2024-05-21 Part inspection method, device, equipment and storage medium based on augmented reality


Publications (1)

Publication Number Publication Date
CN118411401A 2024-07-30

Family

ID=92032224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410630262.XA Pending CN118411401A (en) 2024-05-21 2024-05-21 Part inspection method, device, equipment and storage medium based on augmented reality

Country Status (1)

Country Link
CN (1) CN118411401A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119048718A (en) * 2024-09-04 2024-11-29 上海安比来科技有限公司 Augmented reality three-dimensional registration method and electronic equipment
CN119516549A (en) * 2025-01-21 2025-02-25 广州思德医疗科技有限公司 Image annotation method, device, equipment, medium and computer program product



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination