CN113592959B - Visual processing-based membrane lamination method and system - Google Patents
Visual processing-based membrane lamination method and system Download PDFInfo
- Publication number
- CN113592959B CN113592959B CN202110941788.6A CN202110941788A CN113592959B CN 113592959 B CN113592959 B CN 113592959B CN 202110941788 A CN202110941788 A CN 202110941788A CN 113592959 B CN113592959 B CN 113592959B
- Authority
- CN
- China
- Prior art keywords
- image
- plane
- coordinate system
- module
- mapping relation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The application discloses a membrane lamination method and a system based on visual processing, comprising the following steps: establishing a third mapping relation according to the first mapping relation and the second mapping relation, wherein the third mapping relation refers to a mapping relation between an image coordinate system of the first plane and an image coordinate system of the second plane; acquiring a first image and a second image in real time, wherein the first image is an image corresponding to a diaphragm adsorbed on a mechanical module on a first plane, and the second image is an image corresponding to a rubber frame on a second plane; according to the third mapping relation, mapping the pixel coordinates of the second image acquired in real time to an image coordinate system of a first plane where the diaphragm is positioned; in an image coordinate system of the first plane, pixel offset of the second image and the first image is calculated. The membrane and the rubber frame are aligned on the first plane, so that electrostatic adsorption can be avoided, and the membrane and the rubber frame are ensured to be accurately stacked together.
Description
Technical Field
The application belongs to the technical field of vision processing, and particularly relates to a film lamination method and system based on vision processing.
Background
The outer frame of the mobile phone backlight is usually made of plastic or a combination of plastic and an iron frame, and is therefore also called a rubber frame.
In the backlight processing technology, a diffusion film, a first brightness enhancement film, a second brightness enhancement film, a light shielding film and a glue frame are required to be sequentially laminated together, wherein the lamination alignment precision of each layer of film and the glue frame directly influences the use effect of the backlight, for example, if the light shielding film is not aligned with the glue frame, light leakage phenomenon may occur.
In one implementation, the mechanical module is guided based on the vision processing system to complete the membrane stacking operation, however, in the practical application process, the applicant finds that, in order to ensure the accuracy of stacking the membrane on the glue frame, when the membrane adsorbed by the mechanical module is close enough to the glue frame, the vision positioning processing is performed through the image acquisition module, however, when the membrane adsorbed on the mechanical module is too close to the glue frame, due to the electrostatic action between the membranes, the membrane adsorbed on the mechanical module can re-adsorb the membrane originally stacked on the glue frame, so that the vision positioning cannot be performed.
Therefore, the method for laminating the films, which can ensure the accurate positioning of the films and the rubber frame and reduce the electrostatic adsorption phenomenon between the films, is a technical problem to be solved in the prior art.
Disclosure of Invention
In order to solve the technical problems, the application provides a membrane lamination method and a membrane lamination system based on visual processing.
In a first aspect, the present application provides a method of laminating a film sheet based on visual processing, comprising:
establishing a first mapping relation between an image coordinate system of a first plane and a motion coordinate system of a mechanical module;
establishing a second mapping relation, wherein the second mapping relation refers to a mapping relation between an image coordinate system of a second plane and a motion coordinate system of a mechanical module, the second plane refers to a plane where a turntable rubberizing frame is located, the first plane is located above the second plane, and the distance between the first plane and the second plane is greater than or equal to the minimum distance between two films without static electricity;
establishing a third mapping relation according to the first mapping relation and the second mapping relation, wherein the third mapping relation refers to a mapping relation between an image coordinate system of the first plane and an image coordinate system of the second plane;
acquiring a first image and a second image in real time, wherein the first image is an image of a diaphragm adsorbed on the mechanical module corresponding to the first plane, and the second image is an image of the rubber frame corresponding to the second plane;
According to the third mapping relation, mapping the pixel coordinates of the second image acquired in real time to an image coordinate system of a first plane where the diaphragm is positioned;
calculating the pixel offset of the second image and the first image in an image coordinate system of the first plane;
determining a first target moving path of the mechanical module according to the pixel offset and the first mapping relation;
transmitting the first target movement path to the mechanical module;
controlling the mechanical module, and adjusting the pose of the diaphragm on the first plane according to a first target moving path to enable the adjusted pose of the diaphragm to be the same as the pose of the rubber frame in an image coordinate system of the first plane;
and controlling the mechanical module to drive the adjusted membrane to move to the second plane, so that the membrane and the rubber frame are laminated.
In one implementation, the establishing a first mapping relationship includes:
acquiring a third image, wherein the third image is an image of the calibration membrane adsorbed on the mechanical module corresponding to the first plane;
determining first position information according to a template matching method, wherein the first position information refers to pixel coordinates corresponding to the center of the calibration membrane in the third image;
Transmitting at least nine calibration points to the mechanical module, wherein the calibration points refer to coordinate points in a motion coordinate system of the mechanical module;
acquiring a fourth image, wherein the fourth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
determining second position information according to a template matching method, wherein the second position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fourth image;
and determining the first mapping relation according to the first position information, the second position information and the at least nine calibration points.
In one implementation, the establishing the second mapping relationship includes:
acquiring a fifth image, wherein the fifth image is an image corresponding to a calibration membrane adsorbed on the mechanical module on the second plane, and the calibration membrane moves vertically from the first plane to the second plane;
determining third position information according to a template matching method, wherein the third position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fifth image;
obtaining a sixth image, wherein the sixth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
Determining fourth position information according to a template matching method, wherein the fourth position information refers to pixel coordinates corresponding to the center of the calibration membrane in the sixth image;
and determining the second mapping relation according to the third position information, the fourth position information and the motion coordinates corresponding to the at least nine calibration points.
In one implementation, the establishing a third mapping relationship includes:
establishing a matrix conversion formula of an image coordinate system of a first plane and an image coordinate system of a second plane, wherein the matrix conversion formula is as follows:
wherein, (X, Y) is the pixel coordinate in the image coordinate system of the first plane, (A, B) is the pixel coordinate in the image coordinate system of the second plane, a1, a2, a4, a5 represent scaling and rotation parameters, a3, a6 represent translation parameters;
according to the first position information, the second position information, the third position information, the fourth position information and the matrix conversion, calculating to obtain a mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane, wherein the mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane is as follows
In a second aspect, the present application provides a vision-based film lamination system comprising:
The first mapping relation establishing module is used for establishing a first mapping relation, wherein the first mapping relation refers to a mapping relation between an image coordinate system of a first plane and a motion coordinate system of the mechanical module;
the second mapping relation establishing module is used for establishing a second mapping relation, wherein the second mapping relation refers to a mapping relation between an image coordinate system of a second plane and a motion coordinate system of the mechanical module, the second plane refers to a plane where the turntable rubberizing frame is located, the first plane is located above the second plane, and the distance between the first plane and the second plane is greater than or equal to the minimum distance between two films without generating static electricity;
the third mapping relation establishing module is used for establishing a third mapping relation according to the first mapping relation and the second mapping relation, wherein the third mapping relation is a mapping relation between an image coordinate system of the first plane and an image coordinate system of the second plane;
the image acquisition module is used for acquiring a first image and a second image in real time, wherein the first image is an image corresponding to the membrane adsorbed on the mechanical module on the first plane, and the second image is an image corresponding to the rubber frame on the second plane;
The mapping module is used for mapping the pixel coordinates of the second image acquired in real time to an image coordinate system of a first plane where the diaphragm is positioned according to the third mapping relation;
a calculating module, configured to calculate, in an image coordinate system of the first plane, a pixel offset of the second image and the first image;
the determining module is used for determining a first target moving path of the mechanical module according to the pixel offset and the first mapping relation; the sending module is used for sending the first target moving path to the mechanical module;
the adjusting module is used for controlling the mechanical module, and adjusting the pose of the diaphragm on the first plane according to a first target moving path so that the adjusted pose of the diaphragm is the same as the pose of the rubber frame in an image coordinate system of the first plane;
and the lamination module is used for controlling the mechanical module to drive the adjusted membrane to move to the second plane so as to laminate the membrane and the rubber frame.
In one implementation, the first mapping relation establishing module includes:
the first acquisition sub-module is used for acquiring a third image, wherein the third image refers to an image, corresponding to the first plane, of the calibration membrane adsorbed on the mechanical module;
The first determining submodule is used for determining first position information according to a template matching method, wherein the first position information refers to pixel coordinates corresponding to the center of the calibration membrane in the third image;
the first sending submodule is used for sending at least nine calibration points to the mechanical module, wherein the calibration points refer to coordinate points in a motion coordinate system of the mechanical module;
the second acquisition sub-module is used for acquiring a fourth image, wherein the fourth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
the second determining submodule is used for determining second position information according to a template matching method, wherein the second position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fourth image;
and the third determining submodule is used for determining the first mapping relation according to the first position information, the second position information and the at least nine standard points.
In one implementation, the second mapping relation establishing module includes:
the third acquisition sub-module is used for acquiring a fifth image, wherein the fifth image is an image corresponding to the calibration membrane adsorbed on the mechanical module on the second plane, and the calibration membrane moves vertically from the first plane to the second plane;
A fourth determining submodule, configured to determine third location information according to a template matching method, where the third location information refers to a pixel coordinate corresponding to a center of the calibration diaphragm in the fifth image;
the fourth acquisition sub-module is used for acquiring a sixth image, wherein the sixth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
a fifth determining submodule, configured to determine fourth location information according to a template matching method, where the fourth location information refers to a pixel coordinate corresponding to a center of the calibration diaphragm in the sixth image;
and a sixth determining sub-module, configured to determine the second mapping relationship according to the third position information, the fourth position information, and the motion coordinates corresponding to the at least nine calibration points.
In one implementation, the third mapping relation establishing module includes:
the first establishing submodule is used for establishing a matrix conversion formula of an image coordinate system of the first plane and an image coordinate system of the second plane, and the matrix conversion formula is as follows:
wherein, (X, Y) is the pixel coordinate in the image coordinate system of the first plane, (A, B) is the pixel coordinate in the image coordinate system of the first plane, a1, a2, a4, a5 represent scaling and rotation parameters, a3, a6 represent translation parameters;
A first calculation sub-module, configured to calculate, according to the first location information, the second location information, the third location information, the fourth location information, and the matrix conversion, a mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane, where the mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane is
In one implementation, the image acquisition module is located below the second plane, wherein the image acquisition module is capable of simultaneously acquiring images on the first plane and images on the second plane.
In summary, according to the method and the device for laminating the films based on the visual processing provided by the application, firstly, a mapping relation between an image coordinate system of a first plane and an image coordinate system of a second plane is established; then, the mapping relation between the established image coordinate system of the first plane and the image coordinate system of the second plane is facilitated, and the membrane and the rubber frame are aligned and calculated on the first plane, so that electrostatic adsorption can be avoided, and the membrane and the rubber frame are ensured to be accurately stacked together.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a film stacking station according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a mechanical module according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a mechanical module according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image acquisition module according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image acquisition module and the mechanical module according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a visual processing-based film lamination method according to an embodiment of the present application;
fig. 7 is an image coordinate system of a first plane according to an embodiment of the present application.
Description of the reference numerals
10-a turntable, 20-a rubber frame mold, 30-an adsorption device, 40-an image collector, 50-a light source and 60-an image controller.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
In order to facilitate understanding of the technical scheme of the application, an application scene is first introduced.
Fig. 1 is a schematic structural diagram of a film stacking station provided by the application. As shown in fig. 1, the film stacking station comprises a turntable 10, and a rubber frame feeding station, a diffusion film feeding station, a first layer polishing film feeding station, a second layer polishing film feeding station, a shading film feeding station and a blanking station are sequentially arranged around the outer side of the turntable 10 in a clockwise direction. The turntable 10 is provided with a rubber frame mold 20, and the rubber frame mold 20 can be used for placing rubber frames of different types, so that the rubber frames fixed on the rubber frame mold 20 can rotate to various stations along with the turntable 10.
During operation, the rubber frame is fixed on the rubber frame mold 20 on the turntable 10 through the rubber frame feeding station, and can be sequentially brought to each station along with the clockwise rotation of the turntable 10, so that corresponding films can be laminated on the rubber frame through the mechanical module at each station, after the films of all the stations are sequentially laminated on the rubber frame, the rubber frame leaves the turntable 10 through the blanking station, and thus the lamination process of the rubber frame and the films is finished.
In order to ensure that the rubber frame and each membrane can be accurately aligned, the application provides a membrane stacking device based on visual processing.
The following describes the machine module, the image acquisition module and the vision processing system with reference to the accompanying drawings.
Fig. 2 is a schematic structural diagram of a mechanical module according to an embodiment of the present application, as shown in fig. 2, the mechanical module includes an adsorption device 30, and an XYDZ four-axis structure for fixing the adsorption device 30, where the XYDZ four-axis structure can drive the adsorption device 30 to move linearly along the X-axis, the Y-axis and the Z-axis directions, and rotate along the D-axis.
Fig. 3 is a schematic structural diagram of another mechanical module provided in an embodiment of the present application, as shown in fig. 3, the mechanical module may drive a diaphragm (not shown in fig. 3) connected to the M3 shaft to move in a horizontal direction and a vertical direction through the cooperation of the M1 shaft, the M2 shaft and the M3 shaft.
It should be noted that the mechanical module shown in fig. 2 and fig. 3 is only an exemplary structure, and is not meant to limit the structure of the mechanical module in the present application, and in practical application, the mechanical module may further include more structures, which are not listed here.
The mechanical module in the embodiment of the application is in communication connection with the vision processing system, and drives the membrane to be laminated with the rubber frame according to the processing result of the vision processing system.
Fig. 4 is a schematic structural diagram of an image acquisition module according to an embodiment of the present application. As shown in fig. 4, the image capturing module includes two sets of image capturing devices 40, a light source 50 disposed above the two sets of image capturing devices 40, and an image controller 60 connected to the two sets of image capturing devices 40.
Firstly, a set of image acquisition module and mechanical module are correspondingly configured at each film feeding station, and the positional relationship among the image acquisition module, the mechanical module and the light shielding film feeding station is described below by taking the light shielding film feeding station as an example.
As shown in fig. 5, two sets of image collectors 40 in the image collecting module are all disposed below the turntable 10, and the two sets of image collectors 40 can collect the image of the glue frame and the top of the glue frame through the glue frame mold, wherein the light source 50 can be disposed on the adsorption device 30 of the mechanical module, so that when the adsorption device 30 adsorbs the film onto the top of the glue frame corresponding to the light shielding film feeding station, the light source irradiates the glue frame and the light shielding film adsorbed on the adsorption device 30, and thus the image of the glue frame and the light shielding film adsorbed on the adsorption device 30 can be collected through the image collector 40.
In the process of laminating the membrane on the rubber frame by matching the image acquisition module and the mechanical module, the applicant finds that in order to ensure the accuracy of laminating the membrane on the rubber frame, the membrane adsorbed by the mechanical module is required to be close to the rubber frame sufficiently, and visual positioning processing is performed through the image acquisition module, however, when the membrane which is adsorbed on the mechanical module and is laminated on the rubber frame is too close, the membrane adsorbed on the mechanical module can be re-adsorbed by the electrostatic action between the membranes, so that visual positioning cannot be performed. The height of the two diaphragms generating electrostatic action is generally close to 10mm, and if the rubber frame and the diaphragms are positioned on the basis of the height difference of approximately 10mm, it is difficult to ensure that each diaphragm can be accurately laminated on the rubber frame. Based on the method, the application provides a membrane lamination method based on visual processing, which can not only prevent electrostatic adsorption between membranes, but also ensure accurate alignment of a rubber frame and the membranes.
The following describes a film lamination method based on visual processing according to an embodiment of the present application with reference to the accompanying drawings.
Fig. 6 is a schematic workflow diagram of a film laminating method based on visual processing according to an embodiment of the present application, and as shown in fig. 6, the film laminating method based on visual processing according to an embodiment of the present application includes the following steps:
step 1, a first mapping relation is established, wherein the first mapping relation refers to a mapping relation between an image coordinate system of a first plane and a motion coordinate system of a mechanical module.
Step 1 is a step of calibrating an image coordinate system of a first plane and a motion coordinate system of a mechanical module, and the coordinate in the image of the first plane can be converted into the motion coordinate system of the mechanical module through the first mapping relation established in step 1, so that any point on the image of the first plane can be converted into a coordinate point on the motion coordinate system of the mechanical module.
The first plane is positioned above the rubber frame, and the distance from the rubber frame is more than or equal to the minimum distance between the two diaphragms without generating static electricity.
The method for establishing the first mapping relation is not limited in the present application, and in one implementation manner, the method includes the following steps:
And 11, acquiring a third image, wherein the third image is an image of the calibration membrane adsorbed on the mechanical module corresponding to the first plane.
When the first mapping relation is established, the calibration membrane is sent to the first plane through the mechanical module, then an image of the first plane (for convenience of description, the application is called a third image) can be acquired through the image acquisition device, and the third image comprises the calibration membrane, so that the vision processing system can acquire the third image from one end of the image acquisition device.
Step 12, determining first position information according to a template matching method, wherein the first position information refers to pixel coordinates corresponding to the center of the calibration membrane in the third image.
After the third image is obtained, the vision processing system can identify the calibration membrane in the third image according to a template matching method, and then can determine the corresponding pixel coordinate of the calibration membrane in the third image, wherein the center of the calibration membrane is on the extension line of the Z axis of the mechanical module.
And 13, transmitting at least nine calibration points to the mechanical module, wherein the calibration points refer to coordinate points in a motion coordinate system of the mechanical module, and the nine calibration points can be points corresponding to forming a nine-square lattice.
After the mechanical module receives the calibration points, the calibration membrane is driven to sequentially move to each calibration point from the first position information, and when the calibration membrane moves to each calibration point, the corresponding image, namely the fourth image in the step 14, is acquired through the image acquisition device.
It should be understood that each of the calibration points should be within the field of view of the image collector.
And 14, acquiring a fourth image, wherein the fourth image refers to an image corresponding to each calibration point when the calibration diaphragm moves to each calibration point along with the mechanical module.
The vision processing system acquires a fourth image from one end of the image collector.
And 15, determining second position information according to a template matching method, wherein the second position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fourth image.
After the fourth image is obtained, the vision processing system can identify the calibration membrane in the fourth image corresponding to each calibration point according to a template matching method, and then can determine the corresponding pixel coordinate of the calibration membrane in the fourth image.
As shown in fig. 7, fig. 7 is an image coordinate system of the first plane, an initial position of a pixel coordinate of a center of the calibration membrane in the image coordinate system of the first plane is a point a (first position information), the mechanical module drives the calibration membrane to sequentially move from the point a to each calibration point, and the pixel coordinate of the center of the calibration membrane, which is correspondingly collected at each calibration point, in the image coordinate system of the first plane is a point B, a point C, a point D, a point E, a point F, a point G, a point H and a point I in sequence.
And step 16, determining the first mapping relation by the first position information, the second position information and the motion coordinates corresponding to the at least nine calibration points.
Taking the example that the mechanical module drives the calibration diaphragm to move from the point A to the point B, the mechanical module can record the coordinates corresponding to the point A to the point B in the motion coordinate system of the mechanical module, so that the pixel coordinates of the point A and the point B in the image coordinate system of the first plane and the coordinates (namely the coordinates of each calibration point) of the point A and the point B in the motion coordinate system of the mechanical module can be obtained.
Similarly, the pixel coordinates of the C, D, E, F, G, H and I points in the image coordinate system of the first plane may be obtained, and the coordinates in the motion coordinate system of the mechanical module, i.e. the coordinate points are in the motion coordinate system of the mechanical module.
Further, a mapping relationship between the image coordinate system of the first plane and the motion coordinate system of the mechanical module can be established through the corresponding relationship of the nine groups of coordinates.
And 2, establishing a second mapping relation, wherein the second mapping relation refers to a mapping relation between an image coordinate system of a second plane and a motion coordinate system of the mechanical module, the second plane refers to a plane where the turntable rubberizing frame is located, the first plane is located above the second plane, and the distance between the first plane and the second plane is greater than or equal to the minimum distance between two films without static electricity.
After the calibration of the image coordinate system of the first plane and the motion coordinate system of the mechanical module is completed in the step 1, the mechanical module continuously drives the same calibration membrane to vertically move to the second plane, and the step 2 is executed. And 2, calibrating the image coordinate system of the second plane and the motion coordinate system of the mechanical module, wherein the coordinate in the image of the second plane can be mapped to the motion coordinate system of the mechanical module through the second mapping relation established in the step 2.
In the embodiment of the application, the second plane refers to a plane where the turntable rubberizing frame is located, the first plane and the second plane are planes with two different heights in space, and the height between the first plane and the second plane set during calibration is the alignment height adopted during actual membrane lamination. However, it should be noted that, when the second mapping relationship is established, the second plane is not provided with the rubber frame, but the calibration membrane is moved to the second plane, and the second plane corresponds to the plane where the rubber frame is located in actual working, that is, the same calibration membrane is used to collect the images of the calibration membrane on the first plane and the second plane respectively, so as to implement establishment of the third mapping relationship.
The method for establishing the second mapping relation is not limited, for example, the same method as that for establishing the first mapping relation can be adopted, and the method specifically comprises the following steps:
step 21, obtaining a fifth image, wherein the fifth image is an image corresponding to the calibration membrane adsorbed on the mechanical module on the second plane, and the calibration membrane moves vertically from the first plane to the second plane;
step 22, determining third position information according to a template matching method, wherein the third position information refers to a corresponding pixel coordinate set of the calibration membrane in the fifth image.
Step 23, obtaining a sixth image, wherein the sixth image refers to an image corresponding to each calibration point when the calibration diaphragm moves to each calibration point along with the mechanical module.
And step 24, determining fourth position information according to a template matching method, wherein the fourth position information refers to a pixel coordinate set corresponding to the calibration membrane in the sixth image.
And step 25, determining the second mapping relation by the third position information, the fourth position information and the motion coordinates corresponding to the mechanical module moving to the at least nine calibration points.
The above steps 21 to 25 can be referred to the above steps 11 to 16, and will not be described here again.
It should be noted that the mechanical module also drives the calibration membrane to move to each calibration point on the second plane, that is, the mechanical module drives the same calibration membrane to move once on the first plane and the second plane respectively with the same movement path in the movement coordinate system. Therefore, the mapping between the image coordinate system of the first plane and the image coordinate system of the second plane can be realized based on the motion coordinate system of the mechanical module.
And 3, establishing a third mapping relation according to the first mapping relation and the second mapping relation, wherein the third mapping relation is a mapping relation between the image coordinate system of the first plane and the image coordinate system of the second plane.
In the embodiment of the application, the first plane and the second plane are planes with different heights in space, and the mapping relation between the image coordinate system of the first plane and the image coordinate system of the second plane can be established through the conversion of the motion coordinate system of the mechanical module, so that any pixel coordinate on the first plane can be corresponding to the image coordinate system of the second plane, or any pixel coordinate on the second plane can be corresponding to the image coordinate system of the first plane.
The application is not limited to the way of establishing the third mapping relation, and in one implementation, the method comprises the following steps:
step 31, establishing a matrix conversion formula of an image coordinate system of the first plane and an image coordinate system of the second plane, wherein the matrix conversion formula is as follows:
wherein (X, Y) is the pixel coordinates in the image coordinate system of the first plane, (a, B) is the pixel coordinates in the image coordinate system of the second plane, a1, a2, a4, a5 represent the scaling and rotation parameters, and a3, a6 represent the translation parameters.
The transformation relationship between the two image coordinate systems requires the computation of a transformation matrix that includes translation, rotation, and scaling. Wherein, the image coordinate system XOY of the first plane is represented by (X, Y) coordinates, the second plane coordinate system AOB is represented by (a, B) coordinates, and the mapping relationship between the image coordinate system XOY of the first plane and the second plane coordinate system AOB can be represented by a matrix conversion formula:
step 32, calculating a mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane according to the first position information, the second position information, the third position information, the fourth position information and the matrix conversion, wherein the mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane is
The mapping matrix is calculated by using at least nine pairs of pixel coordinates obtained in the above steps, namely, the pixel coordinates corresponding to the nine calibration points in the image coordinate system of the first plane, and the pixel coordinates corresponding to the nine calibration points in the image coordinate system of the second plane.
The resulting nine pairs of pixel coordinates are brought into the matrix conversion equation to form an overdetermined equation, which can be solved by using a least squares method, and the transformation returned is a transformation that minimizes the distance between the input point and the transformed point.
By way of example, nine pairs of pixel coordinates are shown in table 1,
bringing the data in Table 1 into the matrix conversion to form an overdetermined equation, and calculating the mapping matrix by using a least square method to obtain
It should be noted that, the above steps 1 to 3 are the process of system calibration, and are used for calibrating the mapping relationship between the image coordinate system of the first plane and the image coordinate system of the second plane, so that the above steps only need to be calibrated once, and the calibration result is directly adopted when the system is used subsequently.
The process of achieving precise alignment of the membrane and the frame using the third mapping relationship is described below.
And 4, acquiring a first image and a second image in real time, wherein the first image is an image of the membrane adsorbed on the mechanical module corresponding to the first plane, and the second image is an image of the rubber frame corresponding to the second plane.
The application performs visual alignment on a first plane with a certain height from the rubber frame, and the height of the first plane is as follows: the membrane adsorbed on the mechanical module can not generate electrostatic adsorption effect with the membrane on the rubber frame, the first plane is the same as the first plane in the steps 1 to 3, and the plane of the rubber frame is the same as the second plane in the steps 1 to 3.
The image of the diaphragm on the first plane and the image of the rubber frame can be acquired in real time through the image acquisition device.
And 5, mapping the pixel coordinates of the second image acquired in real time to an image coordinate system of the first plane where the diaphragm is positioned according to the third mapping relation.
According to a third mapping relation obtained in advance, coordinates on two different planes can be converted into the same coordinate system, so that accurate positioning of the diaphragms on the two different planes can be realized.
For example, the mapping matrix obtained by the calculation in step 3 is the matrix conversion typePixel coordinates of the second image [1231.170166,1399.925415 ]]Carrying out matrix conversion to obtain mapped pixel coordinates of [1253.796116,1396.000527 ]]The pixel coordinate difference of the two planes is found to be [22.62595, -3.924888 ] by comparing the pixel coordinates before mapping and after mapping ]This also shows that the mapping calculation provided by the application is necessary in practical application, and can greatly reduce the calculation errors under different working distances (upper and lower planes).
And 6, calculating the pixel offset of the second image and the first image in the image coordinate system of the first plane.
After the rubber frame is mapped to the image coordinate system of the first plane, the pixel offset of the diaphragm and the rubber frame can be calculated according to each pixel coordinate of the diaphragm in the image coordinate system of the first plane and each pixel coordinate of the rubber frame in the image coordinate system of the first plane, and the pixel offset can represent the difference of the pose of the diaphragm and the rubber frame.
And 7, determining a first target moving path of the mechanical module according to the pixel offset and the first mapping relation.
Step 7 is to determine how the mechanical module is to be moved, so that the gesture of the membrane and the gesture of the glue frame adsorbed on the mechanical module can be completely overlapped, and according to the pixel offset, how the gesture of the membrane and the gesture of the glue frame adsorbed on the mechanical module are completely overlapped in the image coordinate system of the first plane can be determined, and further according to the first mapping relation, how the gesture of the membrane is adjusted to be completely overlapped with the gesture of the glue frame by the mechanical module can be determined.
And step 8, the first target moving path is sent to the mechanical module.
And 9, controlling the mechanical module, and adjusting the pose of the diaphragm on the first plane according to a first target moving path so that the adjusted pose of the diaphragm is the same as the pose of the rubber frame in an image coordinate system of the first plane.
After receiving the first target moving path, the mechanical module completes the action of adjusting the diaphragm on the first plane, so that electrostatic adsorption can not be generated between the mechanical module and the diaphragm on the lower rubber frame.
And step 10, controlling the mechanical module to drive the adjusted membrane to move to the second plane so that the membrane and the rubber frame are laminated.
The film is adjusted to be completely the same as the gesture of the rubber frame on the first plane, so that the film is directly moved to the plane of the rubber frame in one step without any other alignment treatment in the process of moving from the first plane to the second plane, and the lamination of the film and the rubber frame is realized.
In summary, according to the membrane stacking method based on visual processing provided by the embodiment of the application, firstly, a mapping relation between an image coordinate system of a first plane and an image coordinate system of a second plane is established; then, the mapping relation between the established image coordinate system of the first plane and the image coordinate system of the second plane is facilitated, and the membrane and the rubber frame are aligned and calculated on the first plane, so that electrostatic adsorption can be avoided, and the membrane and the rubber frame are ensured to be accurately stacked together.
The embodiment of the application also provides a film laminating system based on visual processing, which comprises the following steps:
the first mapping relation establishing module is used for establishing a first mapping relation, wherein the first mapping relation refers to a mapping relation between an image coordinate system of a first plane and a motion coordinate system of the mechanical module;
the second mapping relation establishing module is used for establishing a second mapping relation, wherein the second mapping relation refers to a mapping relation between an image coordinate system of a second plane and a motion coordinate system of the mechanical module, the second plane refers to a plane where the turntable rubberizing frame is located, the first plane is located above the second plane, and the distance between the first plane and the second plane is greater than or equal to the minimum distance between two films without generating static electricity;
the third mapping relation establishing module is used for establishing a third mapping relation according to the first mapping relation and the second mapping relation, wherein the third mapping relation is a mapping relation between an image coordinate system of the first plane and an image coordinate system of the second plane;
the image acquisition module is used for acquiring a first image and a second image in real time, wherein the first image is an image corresponding to the membrane adsorbed on the mechanical module on the first plane, and the second image is an image corresponding to the rubber frame on the second plane;
The mapping module is used for mapping the pixel coordinates of the second image acquired in real time to an image coordinate system of a first plane where the diaphragm is positioned according to the third mapping relation;
a calculating module, configured to calculate, in an image coordinate system of the first plane, a pixel offset of the second image and the first image;
the determining module is used for determining a first target moving path of the mechanical module according to the pixel offset and the first mapping relation; the sending module is used for sending the first target moving path to the mechanical module;
the adjusting module is used for controlling the mechanical module, and adjusting the pose of the diaphragm on the first plane according to a first target moving path so that the adjusted pose of the diaphragm is the same as the pose of the rubber frame in an image coordinate system of the first plane;
and the lamination module is used for controlling the mechanical module to drive the adjusted membrane to move to the second plane so as to laminate the membrane and the rubber frame.
Further, the first mapping relation establishing module includes:
the first acquisition sub-module is used for acquiring a third image, wherein the third image refers to an image, corresponding to the first plane, of the calibration membrane adsorbed on the mechanical module;
The first determining submodule is used for determining first position information according to a template matching method, wherein the first position information refers to pixel coordinates corresponding to the center of the calibration membrane in the third image;
the first sending submodule is used for sending at least nine calibration points to the mechanical module, wherein the calibration points refer to coordinate points in a motion coordinate system of the mechanical module;
the second acquisition sub-module is used for acquiring a fourth image, wherein the fourth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
the second determining submodule is used for determining second position information according to a template matching method, wherein the second position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fourth image;
and the third determining submodule is used for determining the first mapping relation according to the first position information, the second position information and the at least nine standard points.
Further, the second mapping relation establishing module includes:
the third acquisition sub-module is used for acquiring a fifth image, wherein the fifth image is an image corresponding to the calibration membrane adsorbed on the mechanical module on the second plane, and the calibration membrane moves vertically from the first plane to the second plane;
A fourth determining submodule, configured to determine third location information according to a template matching method, where the third location information refers to a pixel coordinate corresponding to a center of the calibration diaphragm in the fifth image;
the fourth acquisition sub-module is used for acquiring a sixth image, wherein the sixth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
a fifth determining submodule, configured to determine fourth location information according to a template matching method, where the fourth location information refers to a pixel coordinate corresponding to a center of the calibration diaphragm in the sixth image;
and a sixth determining sub-module, configured to determine the second mapping relationship according to the third position information, the fourth position information, and the motion coordinates corresponding to the at least nine calibration points.
Further, the third mapping relation establishing module includes:
the first establishing submodule is used for establishing a matrix conversion formula of an image coordinate system of the first plane and an image coordinate system of the second plane, and the matrix conversion formula is as follows:
wherein, (X, Y) is the pixel coordinate in the image coordinate system of the first plane, (A, B) is the pixel coordinate in the image coordinate system of the first plane, a1, a2, a4, a5 represent scaling and rotation parameters, a3, a6 represent translation parameters;
A first calculation sub-module, configured to calculate, according to the first location information, the second location information, the third location information, the fourth location information, and the matrix conversion, a mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane, where the mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane is
Further, the image acquisition module is located below the second plane, wherein the image acquisition module can acquire images on the first plane and images on the second plane simultaneously.
The same or similar parts between the various embodiments in this specification are referred to each other. In particular, for embodiments of the system, since they are substantially similar to the method embodiments, the description is relatively simple, as far as reference is made to the description in the method embodiments.
The application has been described in detail in connection with the specific embodiments and exemplary examples thereof, but such description is not to be construed as limiting the application. It will be understood by those skilled in the art that various equivalent substitutions, modifications or improvements may be made to the technical solution of the present application and its embodiments without departing from the spirit and scope of the present application, and these fall within the scope of the present application. The scope of the application is defined by the appended claims.
In a specific implementation, an embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium may store a program, where the program may include some or all of the steps in each embodiment of the vision processing-based film lamination method provided by the present application when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random-access memory (random access memory, RAM), or the like.
It will be apparent to those skilled in the art that the techniques of embodiments of the present application may be implemented in software plus a necessary general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be embodied in essence or what contributes to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
The embodiments of the present application described above do not limit the scope of the present application.
Claims (9)
1. A vision-based film lamination method, comprising:
establishing a first mapping relation between an image coordinate system of a first plane and a motion coordinate system of a mechanical module;
establishing a second mapping relation, wherein the second mapping relation refers to a mapping relation between an image coordinate system of a second plane and a motion coordinate system of a mechanical module, the second plane refers to a plane where a turntable rubberizing frame is located, the first plane is located above the second plane, and the distance between the first plane and the second plane is greater than or equal to the minimum distance between two films without static electricity;
establishing a third mapping relation according to the first mapping relation and the second mapping relation, wherein the third mapping relation refers to a mapping relation between an image coordinate system of the first plane and an image coordinate system of the second plane;
acquiring a first image and a second image in real time, wherein the first image is an image of a diaphragm adsorbed on the mechanical module corresponding to the first plane, and the second image is an image of the rubber frame corresponding to the second plane;
According to the third mapping relation, mapping the pixel coordinates of the second image acquired in real time to an image coordinate system of a first plane where the diaphragm is positioned;
calculating the pixel offset of the second image and the first image in an image coordinate system of the first plane;
determining a first target moving path of the mechanical module according to the pixel offset and the first mapping relation;
transmitting the first target movement path to the mechanical module;
controlling the mechanical module, and adjusting the pose of the diaphragm on the first plane according to a first target moving path to enable the adjusted pose of the diaphragm to be the same as the pose of the rubber frame in an image coordinate system of the first plane;
and controlling the mechanical module to drive the adjusted membrane to move to the second plane, so that the membrane and the rubber frame are laminated.
2. The method of claim 1, wherein the establishing a first mapping relationship comprises:
acquiring a third image, wherein the third image is an image of the calibration membrane adsorbed on the mechanical module corresponding to the first plane;
determining first position information according to a template matching method, wherein the first position information refers to pixel coordinates corresponding to the center of the calibration membrane in the third image;
Transmitting at least nine calibration points to the mechanical module, wherein the calibration points refer to coordinate points in a motion coordinate system of the mechanical module;
acquiring a fourth image, wherein the fourth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
determining second position information according to a template matching method, wherein the second position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fourth image;
and determining the first mapping relation according to the first position information, the second position information and the at least nine calibration points.
3. The method of claim 2, wherein the establishing the second mapping relationship comprises:
acquiring a fifth image, wherein the fifth image is an image corresponding to a calibration membrane adsorbed on the mechanical module on the second plane, and the calibration membrane moves vertically from the first plane to the second plane;
determining third position information according to a template matching method, wherein the third position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fifth image;
obtaining a sixth image, wherein the sixth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
Determining fourth position information according to a template matching method, wherein the fourth position information refers to pixel coordinates corresponding to the center of the calibration membrane in the sixth image;
and determining the second mapping relation according to the third position information, the fourth position information and the motion coordinates corresponding to the at least nine calibration points.
4. A method according to claim 3, wherein said establishing a third mapping relationship comprises:
establishing a matrix conversion formula of an image coordinate system of a first plane and an image coordinate system of a second plane, wherein the matrix conversion formula is as follows:
wherein, (X, Y) is the pixel coordinate in the image coordinate system of the first plane, (A, B) is the pixel coordinate in the image coordinate system of the second plane, a1, a2, a4, a5 represent scaling and rotation parameters, a3, a6 represent translation parameters;
according to the first position information, the second position information, the third position information, the fourth position information and the matrix conversion, calculating to obtain a mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane, wherein the mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane is as follows
5. A vision-based film lamination system, comprising:
the first mapping relation establishing module is used for establishing a first mapping relation, wherein the first mapping relation refers to a mapping relation between an image coordinate system of a first plane and a motion coordinate system of the mechanical module;
the second mapping relation establishing module is used for establishing a second mapping relation, wherein the second mapping relation refers to a mapping relation between an image coordinate system of a second plane and a motion coordinate system of the mechanical module, the second plane refers to a plane where the turntable rubberizing frame is located, the first plane is located above the second plane, and the distance between the first plane and the second plane is greater than or equal to the minimum distance between two films without generating static electricity;
the third mapping relation establishing module is used for establishing a third mapping relation according to the first mapping relation and the second mapping relation, wherein the third mapping relation is a mapping relation between an image coordinate system of the first plane and an image coordinate system of the second plane;
the image acquisition module is used for acquiring a first image and a second image in real time, wherein the first image is an image corresponding to the membrane adsorbed on the mechanical module on the first plane, and the second image is an image corresponding to the rubber frame on the second plane;
The mapping module is used for mapping the pixel coordinates of the second image acquired in real time to an image coordinate system of a first plane where the diaphragm is positioned according to the third mapping relation;
a calculating module, configured to calculate, in an image coordinate system of the first plane, a pixel offset of the second image and the first image;
the determining module is used for determining a first target moving path of the mechanical module according to the pixel offset and the first mapping relation; the sending module is used for sending the first target moving path to the mechanical module;
the adjusting module is used for controlling the mechanical module, and adjusting the pose of the diaphragm on the first plane according to a first target moving path so that the adjusted pose of the diaphragm is the same as the pose of the rubber frame in an image coordinate system of the first plane;
and the lamination module is used for controlling the mechanical module to drive the adjusted membrane to move to the second plane so as to laminate the membrane and the rubber frame.
6. The system of claim 5, wherein the first mapping relationship establishment module comprises:
the first acquisition sub-module is used for acquiring a third image, wherein the third image refers to an image, corresponding to the first plane, of the calibration membrane adsorbed on the mechanical module;
The first determining submodule is used for determining first position information according to a template matching method, wherein the first position information refers to pixel coordinates corresponding to the center of the calibration membrane in the third image;
the first sending submodule is used for sending at least nine calibration points to the mechanical module, wherein the calibration points refer to coordinate points in a motion coordinate system of the mechanical module;
the second acquisition sub-module is used for acquiring a fourth image, wherein the fourth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
the second determining submodule is used for determining second position information according to a template matching method, wherein the second position information refers to pixel coordinates corresponding to the center of the calibration membrane in the fourth image;
and the third determining submodule is used for determining the first mapping relation according to the first position information, the second position information and the at least nine standard points.
7. The system of claim 6, wherein the second mapping relationship establishment module comprises:
the third acquisition sub-module is used for acquiring a fifth image, wherein the fifth image is an image corresponding to the calibration membrane adsorbed on the mechanical module on the second plane, and the calibration membrane moves vertically from the first plane to the second plane;
A fourth determining submodule, configured to determine third location information according to a template matching method, where the third location information refers to a pixel coordinate corresponding to a center of the calibration diaphragm in the fifth image;
the fourth acquisition sub-module is used for acquiring a sixth image, wherein the sixth image refers to an image corresponding to each calibration point when the calibration membrane moves to each calibration point along with the mechanical module;
a fifth determining submodule, configured to determine fourth location information according to a template matching method, where the fourth location information refers to a pixel coordinate corresponding to a center of the calibration diaphragm in the sixth image;
and a sixth determining sub-module, configured to determine the second mapping relationship according to the third position information, the fourth position information, and the motion coordinates corresponding to the at least nine calibration points.
8. The system of claim 7, wherein the third mapping relationship establishment module comprises:
the first establishing submodule is used for establishing a matrix conversion formula of an image coordinate system of the first plane and an image coordinate system of the second plane, and the matrix conversion formula is as follows:
wherein, (X, Y) is the pixel coordinate in the image coordinate system of the first plane, (A, B) is the pixel coordinate in the image coordinate system of the first plane, a1, a2, a4, a5 represent scaling and rotation parameters, a3, a6 represent translation parameters;
A first calculation sub-module, configured to calculate, according to the first location information, the second location information, the third location information, the fourth location information, and the matrix conversion, a mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane, where the mapping matrix of the image coordinate system of the first plane and the image coordinate system of the second plane is
9. The system of claim 7, wherein the image acquisition module is located below the second plane, wherein the image acquisition module is capable of simultaneously acquiring images on the first plane and images on the second plane.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110941788.6A CN113592959B (en) | 2021-08-17 | 2021-08-17 | Visual processing-based membrane lamination method and system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110941788.6A CN113592959B (en) | 2021-08-17 | 2021-08-17 | Visual processing-based membrane lamination method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113592959A CN113592959A (en) | 2021-11-02 |
| CN113592959B true CN113592959B (en) | 2023-11-28 |
Family
ID=78258274
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110941788.6A Active CN113592959B (en) | 2021-08-17 | 2021-08-17 | Visual processing-based membrane lamination method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113592959B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114670194B (en) * | 2022-03-22 | 2023-06-27 | 荣耀终端有限公司 | Positioning method and device for manipulator system |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107506754A (en) * | 2017-09-19 | 2017-12-22 | 厦门中控智慧信息技术有限公司 | Iris identification method, device and terminal device |
| CN107783310A (en) * | 2017-11-08 | 2018-03-09 | 凌云光技术集团有限责任公司 | A kind of scaling method and device of post lens imaging system |
| CN111899307A (en) * | 2020-07-30 | 2020-11-06 | 浙江大学 | Space calibration method, electronic device and storage medium |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6659611B2 (en) * | 2001-12-28 | 2003-12-09 | International Business Machines Corporation | System and method for eye gaze tracking using corneal image mapping |
| CN103780830B (en) * | 2012-10-17 | 2017-04-12 | 晶睿通讯股份有限公司 | Linkage type photographing system and control method of multiple cameras thereof |
-
2021
- 2021-08-17 CN CN202110941788.6A patent/CN113592959B/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107506754A (en) * | 2017-09-19 | 2017-12-22 | 厦门中控智慧信息技术有限公司 | Iris identification method, device and terminal device |
| CN107783310A (en) * | 2017-11-08 | 2018-03-09 | 凌云光技术集团有限责任公司 | A kind of scaling method and device of post lens imaging system |
| CN111899307A (en) * | 2020-07-30 | 2020-11-06 | 浙江大学 | Space calibration method, electronic device and storage medium |
Non-Patent Citations (1)
| Title |
|---|
| 二维和三维视觉传感集成系统联合标定方法;李琳;张旭;屠大维;;仪器仪表学报(11);第75-81页 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113592959A (en) | 2021-11-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111028155B (en) | Parallax image splicing method based on multiple pairs of binocular cameras | |
| CN109015630B (en) | Hand-eye calibration method and system based on calibration point extraction and computer storage medium | |
| CN106408556B (en) | A Calibration Method for Small Object Measurement System Based on General Imaging Model | |
| CN113643282B (en) | A method, device, electronic device and storage medium for generating a workpiece gluing trajectory | |
| CN113658266B (en) | Visual measurement method for rotation angle of moving shaft based on fixed camera and single target | |
| CN107808400A (en) | A kind of camera calibration systems and its scaling method | |
| CN111461963A (en) | Fisheye image splicing method and device | |
| CN114900688B (en) | Method and apparatus for detecting camera assembly, and computer-readable storage medium | |
| CN112950724A (en) | Screen printing visual calibration method and device | |
| CN111105467B (en) | Image calibration method and device and electronic equipment | |
| CN112634379B (en) | Three-dimensional positioning measurement method based on mixed vision field light field | |
| CN107230233A (en) | The scaling method and device of telecentric lens 3-D imaging system based on bundle adjustment | |
| CN113592959B (en) | Visual processing-based membrane lamination method and system | |
| CN115719387A (en) | 3D camera calibration method, point cloud image acquisition method and camera calibration system | |
| CN102081798A (en) | Epipolar rectification method for fish-eye stereo camera pair | |
| CN107685007A (en) | Double-camera lens alignment method, alignment gluing method and substrate alignment method | |
| CN115115550A (en) | A method and device for image perspective correction based on camera perspective transformation | |
| CN112348895A (en) | Control method, control equipment and medium for attaching liquid crystal flat plate | |
| CN108582037B (en) | Method for realizing precise fitting by matching two cameras with robot | |
| CN202141852U (en) | Microscope device for full-view micro image shooting | |
| CN115790449A (en) | Three-dimensional shape measurement method for long and narrow space | |
| CN104216202A (en) | Inertia gyroscope combined real-time visual camera positioning system and method | |
| CN117218320B (en) | Space labeling method based on mixed reality | |
| CN118279411A (en) | Leveling alignment calibration method in image sensor AA process | |
| CN105759559B (en) | A kind of motion control method of focusing spot gluing equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |