Disclosure of Invention
The invention provides a camera module lens appearance inspection method and equipment. The method ensures the definition (sharpness) of defect images, reduces the computing load on the industrial personal computer, compresses the detection time, and improves the throughput of the production line.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A method for inspecting the appearance of a camera module lens comprises the following steps: adjusting a shallow-depth-of-field lens of a camera to a set position; controlling the camera to move at a constant speed through a set stroke in a direction approaching the product to be tested, a plurality of preset shooting points being arranged within the set stroke; capturing an image at every preset shooting point; at every shooting point except the first, comparing, by a preset algorithm, the image acquired by the shallow-depth-of-field lens against the previous shooting point's maximum-definition map and its record of the focus layers at which the definition maxima occur, thereby generating a fused image; and, after the fused image of the last shooting point is generated, entering a main detection flow, analyzing the focus layers at which all detected defects are located, and reporting the detection result.
According to this inspection method, as the shallow-depth-of-field lens photographs the product to be tested at each shooting point, the preset algorithm fuses the captured images synchronously, and only after the last shooting point has been captured is a single main detection algorithm flow carried out.
This shoot-while-fusing approach reduces the computing load on the industrial personal computer, compresses the detection time, and improves the throughput of the production line; moreover, because the camera carries a shallow-depth-of-field lens, the definition of defect images is guaranteed, and with it the accuracy of the detection result.
Optionally, adjusting the shallow-depth-of-field lens of the camera to the set position includes adjusting the center of the through hole of the first light source, the center of the shallow-depth-of-field lens and the center of the light-passing hole of the product to be tested to be coaxial, and ensuring that the second light source and the third light source are respectively located on two sides of the product to be tested.
Optionally, generating the fused image after capturing an image at any shooting point other than the first comprises: recording the definition value of each pixel in the image acquired at that shooting point; comparing each pixel's definition value with the corresponding value in the maximum-definition map of the previous shooting point; letting each pixel take the larger of the two definition values; fusing the pixels carrying the selected maximum definition values into the maximum-definition map of the current shooting point; and, from that map, recording the focus layer at which each pixel's definition maximum occurs, thereby forming the mark map of the current shooting point.
Optionally, in the main detection flow, after analyzing the focus layers at which all detected defects are located and before reporting the detection result, defects on each lens layer of the light-passing hole are retained while defects on the surface of the protective film are filtered out.
The main detection flow comprises: when detecting the light-passing-hole lens, automatically dividing non-concentric circular areas by a circular gradient method, detecting defects in different areas with different parameters, and detecting low-contrast defects by a circular-area dynamic threshold segmentation method; and, when detecting the end face, extracting defects with a DeepLab v3+ semantic segmentation model, calculating the size, average gray level and contrast of each region, and further judging the defects with a support vector machine.
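The final SVM judging step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the feature tuple (size, average gray level, contrast) follows the text, but the linear decision weights standing in for a trained SVM model are invented.

```python
# Hypothetical sketch of the final judging step: each candidate region is
# reduced to (size, mean gray, contrast) features and scored by the decision
# function of a pre-trained linear SVM. The weights W and bias B are invented
# placeholders, not values from the patent.

def region_features(gray_values):
    """(size, mean gray, contrast) for one candidate defect region."""
    size = len(gray_values)
    mean = sum(gray_values) / size
    contrast = max(gray_values) - min(gray_values)
    return (size, mean, contrast)

# Invented weights/bias standing in for a trained SVM model.
W = (0.02, -0.05, 0.08)
B = -1.0

def is_defect(gray_values):
    x = region_features(gray_values)
    score = sum(w * v for w, v in zip(W, x)) + B  # linear SVM decision function
    return score > 0.0

print(is_defect([30, 42, 55, 38, 120]))  # large, dark, high-contrast region -> True
print(is_defect([128, 130, 129]))        # small, uniform region -> False
```

In practice the weights would come from training on labeled defect regions; a kernel SVM would replace the linear decision function without changing the feature extraction.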
A camera module lens appearance inspection device implementing the above inspection method comprises a frame, a camera, a first light source, a second light source, a third light source, a lifting adjustment mechanism and an industrial personal computer, wherein the frame is provided with a positioning mechanism for positioning the product to be tested; the camera is mounted on the frame through the lifting adjustment mechanism so that the camera can move through the set stroke along a first direction relative to the frame; the camera is provided with the shallow-depth-of-field lens; the center of the through hole of the first light source and the center of the shallow-depth-of-field lens are arranged coaxially, the axis extending along the first direction; the second light source and the third light source are respectively located on two sides of the product to be tested along a second direction perpendicular to the first direction; and the industrial personal computer performs the algorithmic processing on the images acquired by the shallow-depth-of-field lens.
Optionally, the camera is a high-frame-rate industrial camera, the first light source is a high-brightness stroboscopic dome light source, the second and third light sources are supplementary strip light sources, and the industrial personal computer is provided with a first module for storing the maximum-definition map and a second module for storing the mark map.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The embodiments described are obviously only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
The method for inspecting the appearance of the camera module lens comprises the following steps: the shallow-depth-of-field lens of the camera is adjusted to a set position; the camera is controlled to move at a constant speed through a set stroke in a direction approaching the product to be tested, a plurality of preset shooting points being arranged within the set stroke; the camera captures an image at every preset shooting point; at every shooting point except the first, a preset algorithm compares the image acquired by the shallow-depth-of-field lens against the previous shooting point's maximum-definition map and its record of the focus layers at which the definition maxima occur, generating a fused image; and, after the fused image of the last shooting point is generated, the algorithm enters the main detection flow, analyzes the focus layers at which all detected defects are located, and reports the detection result.
Fig. 1 is an overall flowchart of a method for inspecting the appearance of a camera module lens according to an embodiment of the present invention. Referring to fig. 1, in this method, as the shallow-depth-of-field lens of the camera photographs the product to be tested at each shooting point, the preset algorithm synchronously fuses the captured images, and only after the last shooting point has finished capturing is a single main detection algorithm flow executed.
This shoot-while-fusing approach reduces the computing load on the industrial personal computer, compresses the detection time, and improves the throughput of the production line; in addition, since the camera is provided with a shallow-depth-of-field lens, the definition of defect images can be guaranteed, and with it the accuracy of the detection result.
The camera can use a microsecond-level exposure time to ensure that images captured while the camera is moving are free of smear.
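The smear requirement above can be checked with a back-of-envelope calculation: the image smear during a continuous-motion exposure is camera speed times exposure time, and should stay below one pixel of object-side resolution. All numbers below are assumptions for illustration, not values from the patent.

```python
# Back-of-envelope smear check (all numbers assumed, not from the patent):
# smear = camera speed x exposure time, which should be sub-pixel.
speed_mm_s = 10.0    # assumed constant camera travel speed, mm/s
exposure_s = 20e-6   # microsecond-level exposure time, s
pixel_mm = 0.002     # assumed object-side pixel size (2 um), mm

smear_mm = speed_mm_s * exposure_s
print(f"smear = {smear_mm * 1000:.2f} um, sub-pixel: {smear_mm < pixel_mm}")
# With these assumptions: 0.20 um of smear against a 2 um pixel.
```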
As an alternative embodiment, adjusting the shallow-depth-of-field lens of the camera to the set position includes adjusting the center of the through hole of the first light source, the center of the shallow-depth-of-field lens and the center of the light-passing hole of the product to be tested to be coaxial, and ensuring that the second light source and the third light source are respectively located on two sides of the product to be tested.
In this embodiment, upon reaching a shooting point, the motion control card simultaneously sends trigger signals to the first light source, the second light source, the third light source and the camera; the camera acquires an image once all three light sources are lit, the three light sources are extinguished after acquisition, and image fusion is then performed by the algorithm.
The first, second and third light sources can be high-brightness high-speed stroboscopic light sources, so that the camera can capture sufficiently bright images despite the low exposure time.
Fig. 2 is a flowchart of the specific steps of image fusion in the method for inspecting the appearance of a camera module lens. Referring to fig. 2, as an alternative embodiment, generating the fused image after capturing an image at any shooting point other than the first comprises: recording the definition value of each pixel in the image acquired at that shooting point; comparing each pixel's definition value with the corresponding value in the maximum-definition map of the previous shooting point; letting each pixel take the larger of the two definition values; fusing the pixels carrying the selected maximum definition values into the maximum-definition map of the current shooting point; and, from that map, recording the focus layer at which each pixel's definition maximum occurs, thereby forming the mark map of the current shooting point.
In this embodiment, the first three shooting points are described as an example: the image captured at the first shooting point is the first image, the image captured at the second shooting point is the second image, and so on.
After the first image is captured, since no image precedes it, the definition-value map of its pixels is itself the maximum-definition map corresponding to the first shooting point, i.e. the first maximum-definition map;
after the second image is captured, the definition value of each pixel in the second image is compared with the first maximum-definition map, and the larger value is selected for each pixel to form the maximum-definition map corresponding to the second shooting point, i.e. the second maximum-definition map;
after the third image is captured, the definition value of each pixel in the third image is compared with the second maximum-definition map, and the larger value is selected for each pixel to form the maximum-definition map corresponding to the third shooting point, i.e. the third maximum-definition map;
this continues until the image at the last shooting point is captured, after which the comparison yields the maximum-definition map and mark map corresponding to the last shooting point, namely the final maximum-definition map and final mark map.
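The running fusion described above can be sketched as follows. This is a minimal NumPy illustration under assumptions of my own: the patent does not specify the per-pixel definition measure, so a 4-neighbour Laplacian magnitude stands in for it, and all function names are invented.

```python
# Minimal sketch (NumPy only, names invented) of the running fusion: after
# each shot, every pixel keeps the larger of its new definition value and the
# running maximum, and the mark map records which shooting point (focus layer)
# supplied that maximum.
import numpy as np

def sharpness(img):
    """Per-pixel definition stand-in: magnitude of a 4-neighbour Laplacian."""
    img = img.astype(float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def fuse(images):
    """Return (fused image, maximum-definition map, mark map) over all shots."""
    fused = images[0].copy()                # first shot seeds all three maps
    max_def = sharpness(images[0])
    mark = np.zeros(images[0].shape, dtype=int)
    for layer, img in enumerate(images[1:], start=1):
        s = sharpness(img)
        better = s > max_def                # pixels sharper in the new shot
        fused = np.where(better, img, fused)
        max_def = np.where(better, s, max_def)
        mark[better] = layer                # record the winning focus layer
    return fused, max_def, mark
```

Because each step consumes only the new image and the previous maps, the work per shooting point is constant, which is what lets the fusion run concurrently with the camera's travel.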
The images of the multiple focus layers can thus be fused with high precision using a pixel-by-pixel, multi-threaded fusion technique.
Referring to fig. 1, as an alternative embodiment, in the main detection flow, after analyzing the focus layers at which all detected defects are located and before reporting the detection result, defects on each lens layer of the light-passing hole are retained while defects on the surface of the protective film are filtered out.
In this embodiment, interference from protective-film defects on the detection result is eliminated, so the detected lens defects are more accurate.
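One way this layer-based filtering could work is sketched below. The layer assignments and names are assumptions for illustration; the patent only states that lens-layer defects are kept and protective-film defects are discarded, based on the mark map.

```python
# Illustrative sketch (layer indices and names assumed): using the final mark
# map, a detected defect is kept only if its dominant focus layer belongs to a
# lens layer of the light-passing hole; defects whose definition maximum sits
# on the protective-film layer are filtered out.
from collections import Counter

LENS_LAYERS = {2, 3, 4}   # assumed focus layers of the lens stack
FILM_LAYER = 0            # assumed focus layer of the protective-film surface

def keep_defect(defect_pixels, mark_map):
    """defect_pixels: list of (row, col); mark_map[r][c] = focus layer index."""
    layers = Counter(mark_map[r][c] for r, c in defect_pixels)
    dominant = layers.most_common(1)[0][0]  # layer covering most defect pixels
    return dominant in LENS_LAYERS

mark = [[0, 0, 2], [0, 3, 3], [2, 2, 0]]
print(keep_defect([(1, 1), (1, 2)], mark))  # defect focused on a lens layer -> True
print(keep_defect([(0, 0), (0, 1)], mark))  # defect focused on the film -> False
```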
The main detection flow comprises: when detecting the light-passing-hole lens, automatically dividing non-concentric circular areas by a circular gradient method, detecting defects in different areas with different parameters, and detecting low-contrast defects by a circular-area dynamic threshold segmentation method; and, when detecting the end face, extracting defects with the DeepLab v3+ semantic segmentation model, calculating the size, average gray level and contrast of each region, and further judging the defects with a support vector machine.
In this embodiment, because of the lighting scheme and the internal lens structure, the images present non-concentric circular areas of different gray levels, so the circular gradient method is used to divide these areas automatically, and different parameters are used to detect defects in different areas, improving the adaptability of the algorithm. In addition, to accurately detect low-contrast defects while rejecting camera noise, a dynamic threshold segmentation algorithm over the circular areas is used, which greatly improves the detection rate for light-passing-hole lens defects. Furthermore, since the end face carries both bright and dark textures and some defects have very low contrast, the DeepLab v3+ semantic segmentation model is used to extract defects and eliminate the influence of end-face texture; the size, average gray level and contrast of each region are then calculated, and a support vector machine further judges the defects, filtering out other interference.
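The idea of a dynamic threshold tied to the circular geometry can be sketched as follows. This is my own minimal interpretation, not the patent's algorithm: each pixel is compared against the mean gray of its own radius ring, so the slow radial brightness variation between the non-concentric zones does not trigger, while small local deviations of low contrast do. The `delta` tolerance is an assumed parameter.

```python
# Rough sketch (NumPy only, parameters assumed) of a circular-region dynamic
# threshold: each pixel is compared with the mean gray level of the ring of
# pixels at its own radius, so gradual radial shading is tolerated while
# small low-contrast defects stand out.
import numpy as np

def ring_dynamic_threshold(img, center, delta=8.0):
    """Mark pixels deviating from their radius ring's mean by more than delta."""
    img = img.astype(float)
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - center[0], xx - center[1]).round().astype(int)
    # Mean gray per integer radius, via a weighted histogram over ring labels.
    ring_mean = np.bincount(r.ravel(), weights=img.ravel()) / np.bincount(r.ravel())
    return np.abs(img - ring_mean[r]) > delta

img = np.full((9, 9), 100.0)
img[4, 6] = 120.0                        # faint bright spot, 20 gray levels up
mask = ring_dynamic_threshold(img, (4, 4))
print(mask[4, 6], mask[4, 2])            # spot flagged; same-ring pixel is not
```

A production version would restrict the rings to the segmented circular area and use per-area `delta` values, matching the patent's use of different parameters per region.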
Fig. 3 is a schematic structural diagram of a camera module lens appearance inspection device provided by an embodiment of the invention. Referring to fig. 3, the embodiment of the invention also provides a camera module lens appearance inspection device implementing the above inspection method, which comprises a frame, a camera, a first light source, a second light source, a third light source, a lifting adjustment mechanism and an industrial personal computer, wherein the frame is provided with a positioning mechanism for positioning the product to be tested; the camera is mounted on the frame through the lifting adjustment mechanism so that the camera can move through the set stroke along a first direction relative to the frame; the camera is provided with the shallow-depth-of-field lens; the center of the through hole of the first light source and the center of the shallow-depth-of-field lens are arranged coaxially, the axis extending along the first direction; the second light source and the third light source are respectively located on two sides of the product to be tested along a second direction perpendicular to the first direction; and the industrial personal computer performs the algorithmic processing on the images acquired by the shallow-depth-of-field lens.
In this embodiment, the first direction may be the vertical direction and the second direction the horizontal direction. The lifting adjustment mechanism can drive the camera at a uniform speed along the vertical direction.
As an alternative embodiment, the camera is a high-frame-rate industrial camera, the first light source is a high-brightness stroboscopic dome light source, the second and third light sources are supplementary strip light sources, and the industrial personal computer is provided with a first module for storing the maximum-definition map and a second module for storing the mark map.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.