CN115222743A - Furniture surface paint spraying defect detection method based on vision - Google Patents
- Publication number
- CN115222743A (application number CN202211146708.9A)
- Authority
- CN
- China
- Prior art keywords
- processed
- defect
- block
- superpixel
- super
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Quality & Reliability (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of image data processing, and in particular to a vision-based furniture surface painting defect detection method, which comprises the following steps: acquiring a surface paint spraying image of furniture to be detected, and preprocessing the surface paint spraying image; performing superpixel block division processing on the target paint spraying image; determining the integral significance corresponding to each superpixel block to be processed; dividing a defect candidate superpixel block set and a normal pixel block set from the superpixel block set to be processed; screening a paint spraying defect pixel point set from the defect candidate superpixel block set; determining the target paint spraying defect degree corresponding to the furniture to be detected; and generating target defect information corresponding to the furniture to be detected. By performing image processing on the surface paint spraying image, the invention solves the technical problem of low efficiency and accuracy in detecting painting defects on the furniture surface, improves the efficiency and accuracy of painting defect detection, and is mainly applied to painting defect detection on furniture surfaces.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a furniture surface paint spraying defect detection method based on vision.
Background
Painting the surface of furniture protects the furniture from erosion by light, water and other media, prolongs its service life, enhances the luster, glossiness and smoothness of the furniture surface, and improves its aesthetic appeal. Furniture without painting defects generally has a smooth surface with a uniform paint distribution and no flaws. However, due to external factors such as incomplete dust removal from the workpiece, incomplete paint filtration and unclean paint conveying pipelines, painting defects such as orange peel, sagging, powder accumulation and void defects often occur on the furniture surface during painting. These defects reduce the attractiveness of the furniture surface, affect consumers' impression of the furniture appearance, and in severe cases accelerate damage to the furniture. A void defect refers to an area of the surface of the furniture to be detected that should be painted but is not painted. Therefore, painting defect detection on the furniture surface is very important. At present, when painting defect detection is performed on the surface of an article, the method generally adopted is as follows: firstly, a painting defect detection network is trained with a large number of images of defective articles and non-defective articles; then the surface paint spraying image of the article to be detected is input into the trained painting defect detection network, and the degree of surface painting defects of the article to be detected is determined by the painting defect detection network, wherein the painting defect detection network may be a neural network.
However, when the above method is adopted for painting defect detection on the furniture surface, the following technical problems often exist:
firstly, training a painting defect detection network often requires a large number of images of defective furniture and non-defective furniture, collecting these images often consumes a large amount of time, and training the painting defect detection network also often takes a long time, so the efficiency of painting defect detection on the furniture surface is often low;
secondly, in practice, defects in areas close to the center of the surface of the furniture to be detected affect the attractiveness of the furniture more than defects in areas far from the center, and areas close to the center are used more, so defects there more easily cause greater damage; however, when the painting defect detection network determines the degree of surface painting defects of the furniture to be detected, it treats painting defects at every position on the furniture as equally important, which often leads to low accuracy of painting defect detection on the furniture surface.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The invention provides a furniture surface paint spraying defect detection method based on vision, and aims to solve the technical problem that the efficiency and the accuracy of paint spraying defect detection on the furniture surface are low.
The invention provides a furniture surface painting defect detection method based on vision, which comprises the following steps:
acquiring a surface paint spraying image of furniture to be detected, and preprocessing the surface paint spraying image to obtain a target paint spraying image;
carrying out superpixel block division on the target spray painting image to obtain a superpixel block set to be processed;
determining the integral significance corresponding to the super pixel blocks to be processed in the super pixel block set to be processed;
dividing a defect candidate superpixel block set and a normal superpixel block set from the superpixel block set to be processed according to the integral significance corresponding to each superpixel block to be processed in the superpixel block set to be processed;
screening a paint spraying defect pixel point set from the defect candidate super pixel block set according to the defect candidate super pixel block set and the normal pixel block set;
determining the target paint spraying defect degree corresponding to the furniture to be detected according to the paint spraying defect pixel point set and the furniture center pixel points acquired in advance;
and generating target defect information corresponding to the furniture to be detected according to the target paint spraying defect degree.
Further, the determining the overall saliency corresponding to the super pixel block to be processed in the super pixel block set to be processed includes:
determining the gradient amplitude and the gradient direction corresponding to each pixel point in the superpixel blocks to be processed in the superpixel block set to be processed;
determining the internal entropy corresponding to the superpixel block to be processed according to the gradient amplitude and the gradient direction corresponding to each pixel point in each superpixel block to be processed in the superpixel block set to be processed;
determining a neighborhood superpixel block set corresponding to each superpixel block to be processed in the superpixel block set to be processed;
determining the gray significance corresponding to the super pixel blocks to be processed according to each super pixel block to be processed in the super pixel block set to be processed and the neighborhood super pixel block set corresponding to the super pixel block to be processed;
determining the contrast significance corresponding to each super-pixel block to be processed in the super-pixel block set to be processed;
and determining the integral significance corresponding to the super pixel blocks to be processed according to the internal entropy, the gray significance and the contrast significance corresponding to each super pixel block to be processed in the super pixel block set to be processed.
Further, the determining the internal entropy corresponding to the super pixel block to be processed according to the gradient amplitude and the gradient direction corresponding to each pixel point in each super pixel block to be processed in the super pixel block set to be processed includes:
combining the gradient amplitude value and the gradient direction corresponding to each pixel point in the super-pixel block to be processed into a binary group corresponding to the pixel point to obtain a binary group set corresponding to the super-pixel block to be processed;
classifying the binary groups in the binary group set corresponding to the super pixel block to be processed to obtain a binary group category set corresponding to the super pixel block to be processed;
and determining the internal entropy corresponding to the superpixel block to be processed according to the number of the binary groups in the binary group class set corresponding to the superpixel block to be processed, the number of the pixel points in the superpixel block to be processed and the number of the binary groups in each binary group class in the binary group class set corresponding to the superpixel block to be processed.
Further, the determining the gray significance corresponding to the super pixel block to be processed according to each super pixel block to be processed in the super pixel block set to be processed and the neighborhood super pixel block set corresponding to the super pixel block to be processed includes:
determining the mean value of the gray values corresponding to the pixel points in the super pixel block to be processed as a first gray mean value corresponding to the super pixel block to be processed;
determining the mean value of the gray values corresponding to the pixel points in each neighborhood superpixel block in the neighborhood superpixel block set corresponding to the superpixel block to be processed as a second gray mean value to obtain a second gray mean value set corresponding to the superpixel block to be processed;
determining a first gray significance according to the first gray mean value corresponding to the super pixel block to be processed and each second gray mean value in the second gray mean value set, and obtaining a first gray significance set corresponding to the super pixel block to be processed;
and screening out the maximum first gray significance from the first gray significance set corresponding to the super pixel block to be processed, and taking the maximum first gray significance as the gray significance corresponding to the super pixel block to be processed.
Further, the determining the contrast saliency corresponding to each super pixel block to be processed in the super pixel block set to be processed includes:
determining a gray contrast index corresponding to the super pixel block to be processed according to the gray value corresponding to each pixel point in the super pixel block to be processed, the mean value of the gray values corresponding to the pixel points in the super pixel block to be processed and the number of the pixel points in the super pixel block to be processed;
determining a gray contrast index corresponding to a neighborhood superpixel block in a neighborhood superpixel block set corresponding to the superpixel block to be processed;
and determining the contrast significance corresponding to the super pixel block to be processed according to the gray contrast indexes corresponding to the super pixel block to be processed and the neighborhood super pixel blocks in the neighborhood super pixel block set corresponding to the super pixel block to be processed.
Further, the dividing a defect candidate superpixel block set and a normal pixel block set from the superpixel block set to be processed according to the overall significance corresponding to each superpixel block to be processed in the superpixel block set to be processed includes:
normalizing the integral significance corresponding to the super-pixel block to be processed in the super-pixel block set to be processed to obtain the normalized significance corresponding to the super-pixel block to be processed;
when the normalized significance corresponding to the superpixel blocks to be processed in the superpixel block set to be processed is greater than a preset defect threshold, determining the superpixel blocks to be processed as defect candidate superpixel blocks;
and when the normalized significance corresponding to the super pixel block to be processed in the super pixel block set to be processed is less than or equal to the defect threshold, determining the super pixel block to be processed as a normal pixel block.
Further, the step of screening out a paint spraying defect pixel point set from the defect candidate super pixel block set according to the defect candidate super pixel block set and the normal pixel block set comprises:
determining the mean value of the gray values corresponding to each pixel point in each normal pixel block in the normal pixel block set as a normal gray mean value;
determining the absolute value of the difference value between the gray value corresponding to the pixel point in the defect candidate super pixel block set and the normal gray average value as the difference index corresponding to the pixel point;
and when the difference index corresponding to the pixel point in the defect candidate super-pixel block set is larger than a preset difference threshold value, determining the pixel point as a paint spraying defect pixel point.
Further, the determining the target paint spraying defect degree corresponding to the furniture to be detected according to the paint spraying defect pixel point set and the furniture center pixel points acquired in advance includes:
determining the Euclidean distance between each paint spraying defect pixel point in the paint spraying defect pixel point set and the furniture center pixel point as a target distance corresponding to the paint spraying defect pixel point;
and determining the target paint spraying defect degree corresponding to the furniture to be detected according to the number of the paint spraying defect pixel points in the paint spraying defect pixel point set, the difference index and the target distance corresponding to each paint spraying defect pixel point in the paint spraying defect pixel point set.
Further, the generating of the target defect information corresponding to the furniture to be detected according to the target painting defect degree includes:
carrying out normalization processing on the target paint spraying defect degree to obtain a normalized defect degree;
when the normalized defect degree is larger than a preset defect degree threshold value, generating target defect information representing that the surface of the furniture to be detected has defects;
and when the normalized defect degree is smaller than or equal to the defect degree threshold value, generating target defect information representing the normal surface of the furniture to be detected.
The invention has the following beneficial effects:
according to the furniture surface paint spraying defect detection method based on vision, the technical problem that the efficiency and the accuracy of paint spraying defect detection on the furniture surface are low is solved by carrying out image processing on the surface paint spraying image, and the efficiency and the accuracy of defect detection on the label are improved. Firstly, a surface paint spraying image of furniture to be detected is obtained, and the surface paint spraying image is preprocessed to obtain a target paint spraying image. In actual conditions, when adopting artificial mode, when carrying out the defect detection of spraying paint to the furniture surface, often rely on the subjective impression of person who detects to detect the discernment, the discernment judgement of making is often inaccurate, consequently, when adopting artificial mode to carry out the defect detection of spraying paint to the furniture surface, often can lead to the degree of accuracy of the defect detection of spraying paint to the furniture surface low. Consequently, through acquireing the surface image of spraying paint that includes the furniture surface condition of spraying paint that waits to detect, can be convenient for follow-up through the mode of quantization, the analysis waits to detect the furniture surface condition of spraying paint, can improve the degree of accuracy of carrying out the defect detection of spraying paint to the furniture surface. Moreover, the surface paint-spraying image is preprocessed, so that irrelevant information in the surface paint-spraying image can be eliminated, useful real information can be recovered, the detectability of relevant information can be enhanced, the data can be simplified to the maximum extent, and the paint-spraying defect detection can be conveniently carried out on the surface of furniture through analyzing the target paint-spraying image. And then, performing superpixel block division processing on the target paint spraying image to obtain a superpixel block set to be processed. In practical situations, the texture, color, brightness and other characteristics of the painting defect are often dissimilar to those of the normal area. Therefore, through the partition processing of the superpixel blocks, pixel points with similar characteristics such as texture, color, brightness and the like can be partitioned into the same superpixel block to be processed, and the condition of paint spraying defects on the surface of furniture to be detected can be conveniently determined subsequently. And then, determining the integral significance corresponding to the super pixel blocks to be processed in the super pixel block set to be processed. The integral significance can represent the possibility that the super-pixel block to be processed is the area where the defect is located, so that the integral significance corresponding to the super-pixel block to be processed is determined, the possibility that the super-pixel block to be processed is the area where the defect is located can be quantized, whether the super-pixel block to be processed is the area where the paint defect is located can be conveniently analyzed subsequently, and the accuracy of judging whether the super-pixel block to be processed is the area where the paint defect is located can be improved. 
Then, a defect candidate superpixel block set and a normal superpixel block set are divided from the superpixel block set to be processed according to the integral significance corresponding to each superpixel block to be processed. Since the integral significance corresponding to a superpixel block to be processed quantifies the possibility that the superpixel block is the region where a defect is located, the accuracy of dividing the defect candidate superpixel block set and the normal pixel block set is improved. Next, a paint spraying defect pixel point set is screened out from the defect candidate superpixel block set according to the defect candidate superpixel block set and the normal pixel block set. Comparing the defect candidate superpixel block set with the normal pixel block set can improve the accuracy of screening paint spraying defect pixel points. Then, the target paint spraying defect degree corresponding to the furniture to be detected is determined according to the paint spraying defect pixel point set and the furniture center pixel point acquired in advance. Comprehensively considering the paint spraying defect pixel point set and the furniture center pixel point can improve the accuracy of determining the target paint spraying defect degree corresponding to the furniture to be detected. Finally, the target defect information corresponding to the furniture to be detected is generated according to the target paint spraying defect degree. Therefore, by performing image processing on the surface paint spraying image, the present invention solves the technical problem of low efficiency and accuracy in detecting painting defects on the furniture surface, and improves the efficiency and accuracy of painting defect detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the embodiments or in the description of the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a vision-based furniture surface painting defect detection method according to the present invention;
FIG. 2 is a schematic diagram of a set of superpixels to be processed and neighborhood superpixels in accordance with the present invention.
Wherein the reference numerals in fig. 2 include: a superpixel block to be processed 201, a first neighborhood superpixel block 202, a second neighborhood superpixel block 203, a third neighborhood superpixel block 204, and a fourth neighborhood superpixel block 205.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the technical solutions according to the present invention will be given with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a furniture surface painting defect detection method based on vision, which comprises the following steps:
acquiring a surface paint spraying image of furniture to be detected, and preprocessing the surface paint spraying image to obtain a target paint spraying image;
carrying out superpixel block division processing on the target paint spraying image to obtain a superpixel block set to be processed;
determining the integral significance corresponding to the super pixel blocks to be processed in the super pixel block set to be processed;
dividing a defect candidate superpixel block set and a normal superpixel block set from the superpixel block set to be processed according to the integral significance corresponding to each superpixel block to be processed in the superpixel block set to be processed;
screening a paint spraying defect pixel point set from the defect candidate superpixel block set according to the defect candidate superpixel block set and the normal pixel block set;
determining the target paint spraying defect degree corresponding to the furniture to be detected according to the paint spraying defect pixel point set and the furniture center pixel points acquired in advance;
and generating target defect information corresponding to the furniture to be detected according to the target paint spraying defect degree.
The following steps are detailed:
referring to FIG. 1, a flow diagram of some embodiments of a vision-based furniture surface painting defect detection method according to the present invention is shown. The furniture surface painting defect detection method based on vision comprises the following steps:
s1, acquiring a surface paint spraying image of furniture to be detected, and preprocessing the surface paint spraying image to obtain a target paint spraying image.
In some embodiments, a paint-spraying image of a surface of furniture to be detected may be obtained, and the paint-spraying image may be preprocessed to obtain a target paint-spraying image.
The furniture to be detected may be furniture whose surface is sprayed with a single paint color and whose surface painting condition is to be inspected for defects. Painting defects may include, but are not limited to: orange peel, powder accumulation, sagging and void defects. The area in which a void defect is located may be an area of the surface of the furniture to be detected that should be painted but is not painted. The surface paint spraying image may be an image of the surface of the furniture to be detected. Preprocessing may include, but is not limited to: image denoising, image enhancement and image graying. The target paint spraying image may be the surface paint spraying image after preprocessing.
As an example, this step may include the steps of:
firstly, acquiring a surface painting image of furniture to be detected.
For example, the surface painting image may be acquired by an image acquisition device. The image acquisition equipment can be composed of a camera, a light source and the like.
And secondly, performing image denoising on the surface paint spraying image through an image denoising algorithm to obtain a target paint spraying image.
The image denoising algorithm may include, but is not limited to: mean filtering, median filtering, and gaussian filtering.
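For illustration only, the preprocessing of this step could be sketched in Python with OpenCV as follows; the library choice, the Gaussian kernel size and the graying step are assumptions for the sketch, not requirements of the method:

```python
import cv2

def preprocess_surface_image(image_path, kernel_size=5):
    """Denoise and gray a surface paint spraying image (illustrative sketch)."""
    image = cv2.imread(image_path)                                      # surface paint spraying image
    denoised = cv2.GaussianBlur(image, (kernel_size, kernel_size), 0)   # Gaussian filtering for denoising
    target = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)                 # image graying
    return target                                                       # target paint spraying image
```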
And S2, performing superpixel block division processing on the target paint spraying image to obtain a superpixel block set to be processed.
In some embodiments, the target paint-spraying image may be subjected to superpixel block division processing to obtain a set of superpixel blocks to be processed.
The super-pixel block to be processed in the super-pixel block set to be processed may be a region composed of pixel points with similar characteristics such as texture, color, brightness, and the like.
As an example, the target paint-spraying image may be subjected to superpixel block division processing by a superpixel division algorithm, so as to obtain a to-be-processed superpixel block set.
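For illustration only, the superpixel block division could be sketched with the SLIC algorithm from scikit-image; SLIC is an assumed choice of superpixel division algorithm, and the segment count and compactness values are illustrative parameters:

```python
import numpy as np
from skimage.segmentation import slic  # requires scikit-image >= 0.19 for channel_axis

def divide_superpixels(target_image, n_segments=400, compactness=10.0):
    """Divide the target paint spraying image into superpixel blocks (illustrative sketch)."""
    # SLIC returns a label map; each label value marks one superpixel block to be processed.
    labels = slic(target_image, n_segments=n_segments, compactness=compactness,
                  channel_axis=None)            # channel_axis=None for a grayscale image
    block_ids = np.unique(labels)               # ids of the superpixel blocks to be processed
    return labels, block_ids
```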
And S3, determining the integral significance corresponding to the super pixel blocks to be processed in the super pixel block set to be processed.
In some embodiments, an overall saliency corresponding to a superpixel to be processed in the set of superpixels described above may be determined.
Wherein the overall saliency may characterize the probability that a superpixel block to be processed is the region where a defect is located.
As an example, this step may include the steps of:
firstly, determining the gradient amplitude and the gradient direction corresponding to each pixel point in the superpixel blocks to be processed in the superpixel block set to be processed.
For example, the gradient amplitude and the gradient direction corresponding to each pixel point in the to-be-processed superpixel block set may be determined by an edge detection algorithm.
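For illustration only, the per-pixel gradient amplitude and gradient direction could be computed with Sobel operators as in the following sketch; the operator choice and kernel size are assumptions:

```python
import cv2
import numpy as np

def gradient_magnitude_direction(gray_image):
    """Per-pixel gradient amplitude and direction via Sobel operators (illustrative sketch)."""
    gx = cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize=3)   # horizontal derivative
    gy = cv2.Sobel(gray_image, cv2.CV_64F, 0, 1, ksize=3)   # vertical derivative
    magnitude = np.hypot(gx, gy)                            # gradient amplitude
    direction = np.arctan2(gy, gx)                          # gradient direction in radians
    return magnitude, direction
```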
And secondly, determining the internal entropy corresponding to the superpixel block to be processed according to the gradient amplitude and the gradient direction corresponding to each pixel point in each superpixel block to be processed in the superpixel block set to be processed.
The internal entropy corresponding to the super pixel block to be processed can represent the intensity of gradient change of the pixel points in the super pixel block to be processed.
For example, this step may include the following sub-steps:
the first substep is to combine the gradient amplitude and the gradient direction corresponding to each pixel point in the superpixel block to be processed into a binary group corresponding to the pixel point, so as to obtain a binary group set corresponding to the superpixel block to be processed.
For example, the gradient amplitude corresponding to the pixel point may be determined as the first element in the binary group corresponding to the pixel point. The gradient direction corresponding to the pixel point can be determined as the second element in the binary group corresponding to the pixel point.
And a second sub-step, namely classifying the binary group in the binary group set corresponding to the super pixel block to be processed to obtain a binary group category set corresponding to the super pixel block to be processed.
For example, the same binary group in the binary group set corresponding to the to-be-processed super pixel block may be divided into the same binary group category.
And a third substep of determining the internal entropy corresponding to the superpixel block to be processed according to the number of the binary groups in the binary group class set corresponding to the superpixel block to be processed, the number of the pixel points in the superpixel block to be processed and the number of the binary groups in each binary group class in the binary group class set corresponding to the superpixel block to be processed.
For example, the formula for determining the internal entropy corresponding to the superpixel block to be processed may be:
$$S_{pr} = -\sum_{n=1}^{A} \frac{B_n}{B} \ln\left(\frac{B_n}{B}\right)$$

wherein, $S_{pr}$ is the internal entropy corresponding to the superpixel block to be processed; $A$ is the number of binary group classes in the binary group class set corresponding to the superpixel block to be processed; $B$ is the number of pixel points in the superpixel block to be processed; $B_n$ is the number of binary groups in the $n$-th binary group class in the binary group class set corresponding to the superpixel block to be processed; $\ln(\cdot)$ is the logarithm based on the natural constant $e$.
Since $\frac{B_n}{B}$ characterizes the proportion, within the superpixel block to be processed, of pixel points whose gradient amplitude and gradient direction belong to the $n$-th binary group class, a larger internal entropy corresponding to the superpixel block to be processed indicates that the internal distribution of the superpixel block to be processed is more disordered and less regular, and thus the possibility that a defect exists inside the superpixel block to be processed is higher.
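A minimal Python sketch of this computation is given below; it assumes the (gradient amplitude, gradient direction) pairs are quantized into discrete bins so that identical binary groups can be grouped into classes, and the bin count and helper names are illustrative:

```python
import numpy as np

def internal_entropy(magnitude, direction, mask, bins=16):
    """Internal entropy of one superpixel block from (amplitude, direction) pairs (sketch)."""
    mag_vals = magnitude[mask]
    mag = np.digitize(mag_vals, np.linspace(mag_vals.min(), mag_vals.max() + 1e-9, bins))
    ang = np.digitize(direction[mask], np.linspace(-np.pi, np.pi, bins))
    pairs = mag * (bins + 1) + ang                  # one id per (amplitude, direction) class
    _, counts = np.unique(pairs, return_counts=True)
    p = counts / counts.sum()                       # proportion B_n / B of each binary group class
    return float(-np.sum(p * np.log(p)))            # S_pr = -sum (B_n/B) ln(B_n/B)
```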
And thirdly, determining a neighborhood superpixel block set corresponding to each superpixel block to be processed in the superpixel block sets to be processed.
The neighbor superpixel block in the neighbor superpixel block set corresponding to the to-be-processed superpixel block may be a to-be-processed superpixel block adjacent to the to-be-processed superpixel block except the to-be-processed superpixel block.
For example, as shown in fig. 2, the neighborhood superpixel block set corresponding to the to-be-processed superpixel block 201 may include: a first neighborhood superpixel block 202, a second neighborhood superpixel block 203, a third neighborhood superpixel block 204, and a fourth neighborhood superpixel block 205. That is, the rectangle in the middle of fig. 2 may represent the superpixel block 201 to be processed, and the outer pentagon, the triangle, and the two rectangles may represent the neighborhood superpixel block set corresponding to the superpixel block 201 to be processed.
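For illustration only, the neighborhood superpixel block sets could be derived from the superpixel label map as in the following sketch; treating two blocks as adjacent when their pixels touch horizontally or vertically is an assumed definition of "adjacent":

```python
import numpy as np

def neighborhood_blocks(labels):
    """Map each superpixel label to the set of adjacent superpixel labels (sketch)."""
    neighbors = {int(l): set() for l in np.unique(labels)}
    # compare each pixel with its right neighbor and its lower neighbor
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        diff = a != b
        for u, v in zip(a[diff].ravel(), b[diff].ravel()):
            neighbors[int(u)].add(int(v))
            neighbors[int(v)].add(int(u))
    return neighbors
```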
And fourthly, determining the gray significance corresponding to the super pixel blocks to be processed according to each super pixel block to be processed in the super pixel block set to be processed and the neighborhood super pixel block set corresponding to the super pixel block to be processed.
The gray significance corresponding to the super pixel block to be processed can represent the gray difference between the super pixel block to be processed and the neighborhood super pixel block.
For example, this step may include the following substeps:
the first sub-step, determining the mean value of the gray values corresponding to the pixel points in the super pixel block to be processed as the first mean value of the gray values corresponding to the super pixel block to be processed.
And a second sub-step of determining the mean value of the gray values corresponding to the pixel points in each of the neighborhood superpixel blocks in the neighborhood superpixel block set corresponding to the superpixel block to be processed as a second gray mean value to obtain a second gray mean value set corresponding to the superpixel block to be processed.
And a third substep of determining a first gray significance according to the first gray mean value corresponding to the superpixel block to be processed and each second gray mean value in the second gray mean value set, so as to obtain a first gray significance set corresponding to the superpixel block to be processed.
For example, the formula for determining the first gray saliency may be:
$$G_z = 1 - \frac{\min(\mu_0, \mu_z) + \varepsilon}{\max(\mu_0, \mu_z) + \varepsilon}$$

wherein, $G_z$ is the $z$-th first gray saliency in the first gray saliency set corresponding to the superpixel block to be processed; $\mu_0$ is the first gray mean value corresponding to the superpixel block to be processed; $\mu_z$ is the $z$-th second gray mean value in the second gray mean value set corresponding to the superpixel block to be processed; $\varepsilon$ is a preset numerical value that prevents the denominator from being 0.
In practical situations, the larger the difference between the first gray mean value corresponding to the superpixel block to be processed and a second gray mean value is, the larger the gray difference between the superpixel block to be processed and the corresponding neighborhood superpixel block is. Since the furniture to be detected is furniture whose surface is sprayed with a single paint color, the larger the gray difference between the superpixel block to be processed and a neighborhood superpixel block is, the more likely the superpixel block to be processed is to be the region where a defect is located. That is, the larger a first gray saliency in the first gray saliency set corresponding to the superpixel block to be processed is, the more likely the superpixel block to be processed is to be the region where a defect is located. Moreover, the $z$-th first gray saliency $G_z$ has a value range of $[0, 1]$, which facilitates subsequent image processing. Adding $\varepsilon$ to both the numerator and the denominator prevents the denominator from being 0, and the first gray saliency corresponding to the superpixel block to be processed is 0 when the first gray mean value and the second gray mean value corresponding to the superpixel block to be processed are equal.
And a fourth substep of screening out the maximum first grayscale saliency from the first grayscale saliency set corresponding to the superpixel block to be processed as the grayscale saliency corresponding to the superpixel block to be processed.
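A possible sketch of the gray saliency computation for a single superpixel block follows the first-gray-saliency formula reconstructed above; the function name, the default value of eps and the data layout are assumptions:

```python
import numpy as np

def gray_saliency(gray_image, labels, block, neighbor_ids, eps=1.0):
    """Gray saliency of one block: largest first gray saliency over its neighbors (sketch)."""
    mu0 = gray_image[labels == block].mean()                  # first gray mean value
    saliencies = []
    for nb in neighbor_ids:
        mu_z = gray_image[labels == nb].mean()                # second gray mean value
        g_z = 1.0 - (min(mu0, mu_z) + eps) / (max(mu0, mu_z) + eps)
        saliencies.append(g_z)
    return max(saliencies)                                     # maximum first gray saliency
```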
And fifthly, determining the contrast significance corresponding to each super pixel block to be processed in the super pixel block set to be processed.
The contrast significance corresponding to the super pixel block to be processed can represent the difference between the gray gradient richness degrees corresponding to the super pixel block to be processed and the neighborhood super pixel block.
For example, this step may include the following sub-steps:
the first substep is to determine the gray contrast index corresponding to the superpixel block to be processed according to the gray value corresponding to each pixel point in the superpixel block to be processed, the mean value of the gray values corresponding to the pixel points in the superpixel block to be processed, and the number of the pixel points in the superpixel block to be processed.
For example, the mean value of the gray values corresponding to the pixel points in the super pixel block to be processed may be used as the first gray mean value corresponding to the super pixel block to be processed. Therefore, the formula for determining the gray contrast index corresponding to the super pixel block to be processed may be:
$$Con = \frac{1}{H} \sum_{w=1}^{H} \left( g_w - \mu_0 \right)^2$$

wherein, $Con$ is the gray contrast index corresponding to the superpixel block to be processed; $H$ is the number of pixel points in the superpixel block to be processed; $g_w$ is the gray value corresponding to the $w$-th pixel point in the superpixel block to be processed; $\mu_0$ is the first gray mean value corresponding to the superpixel block to be processed.
In an actual situation, the larger the difference between the gray value corresponding to each pixel point in the superpixel block to be processed and the first gray mean value corresponding to the superpixel block to be processed is, the larger the degree of gray change corresponding to the superpixel block to be processed tends to be. The gray contrast index $Con$ corresponding to the superpixel block to be processed characterizes the average level of the degree of gray change corresponding to the superpixel block to be processed.
And a second substep of determining the gray contrast index corresponding to the neighborhood superpixel block in the neighborhood superpixel block set corresponding to the superpixel block to be processed.
The specific implementation manner of this sub-step may refer to the first sub-step included in the fifth step included in step S3, and may use the neighborhood superpixel block as the superpixel block to be processed, and execute the first sub-step included in the fifth step included in step S3, so as to obtain the gray contrast index corresponding to the neighborhood superpixel block.
And a third substep of determining the contrast significance corresponding to the superpixel block to be processed according to the grayscale contrast indexes corresponding to the superpixel block to be processed and the neighborhood superpixel blocks in the neighborhood superpixel block set corresponding to the superpixel block to be processed.
For example, the formula for determining the contrast saliency corresponding to the to-be-processed super pixel block may be:
$$D = 1 - \frac{1}{k} \sum_{Z=1}^{k} e^{-\left| Con - Con_Z \right|}$$

wherein, $D$ is the contrast saliency corresponding to the superpixel block to be processed; $k$ is the number of neighborhood superpixel blocks in the neighborhood superpixel block set corresponding to the superpixel block to be processed; $e$ is the natural constant; $Con$ is the gray contrast index corresponding to the superpixel block to be processed; $Con_Z$ is the gray contrast index corresponding to the $Z$-th neighborhood superpixel block in the neighborhood superpixel block set corresponding to the superpixel block to be processed.
In practical situations, the closer the gray contrast index corresponding to the superpixel block to be processed is to the gray contrast index corresponding to a neighborhood superpixel block, the more similar the degrees of gray change of the superpixel block to be processed and the neighborhood superpixel block are. Since the furniture to be detected is furniture whose surface is sprayed with a single paint color, the more similar the degree of gray change of the superpixel block to be processed is to those of its neighborhood superpixel blocks, the more likely the superpixel block to be processed is to be a normal region. That is, the smaller the contrast saliency corresponding to the superpixel block to be processed is, the more likely the superpixel block to be processed is to be a normal region; the larger the contrast saliency corresponding to the superpixel block to be processed is, the more likely the superpixel block to be processed is to be the region where a defect is located. The term $\frac{1}{k} \sum_{Z=1}^{k} e^{-\left| Con - Con_Z \right|}$ characterizes the average similarity between the degree of gray change of the superpixel block to be processed and those of the neighborhood superpixel blocks in its neighborhood superpixel block set, and the contrast saliency $D$ has a value range of $[0, 1]$, which facilitates subsequent image processing.
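A sketch combining the gray contrast index and the contrast saliency described above is shown below; it follows the formulas as reconstructed here, and the names and data layout are assumptions:

```python
import numpy as np

def contrast_saliency(gray_image, labels, block, neighbor_ids):
    """Contrast saliency of one block versus its neighborhood blocks (sketch)."""
    def contrast_index(b):
        values = gray_image[labels == b].astype(float)
        return np.mean((values - values.mean()) ** 2)          # Con: mean squared deviation

    con = contrast_index(block)
    similarities = [np.exp(-abs(con - contrast_index(nb))) for nb in neighbor_ids]
    return 1.0 - float(np.mean(similarities))                   # D lies in [0, 1)
```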
And sixthly, determining the integral significance corresponding to the super pixel blocks to be processed according to the internal entropy, the gray significance and the contrast significance corresponding to each super pixel block to be processed in the super pixel block set to be processed.
For example, the formula for determining the overall saliency corresponding to the super-pixel block to be processed may be:
$$x = S_{pr} \cdot G^{\alpha} \cdot D^{\beta}$$

wherein, $x$ is the overall saliency corresponding to the superpixel block to be processed; $S_{pr}$ is the internal entropy corresponding to the superpixel block to be processed; $G$ is the gray saliency corresponding to the superpixel block to be processed; $D$ is the contrast saliency corresponding to the superpixel block to be processed; $\alpha$ and $\beta$ are exponential factors preset according to actual conditions.
In practical situations, when the internal entropy, the gray saliency, or the contrast saliency corresponding to the superpixel block to be processed is larger, the overall saliency corresponding to the superpixel block to be processed tends to be larger, and the possibility that a defect exists inside the superpixel block to be processed tends to be larger. Presetting the exponential factors $\alpha$ and $\beta$ according to actual conditions allows the determined overall saliency to better conform to the actual situation.
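As a purely illustrative combination consistent with the monotone behavior described above (the product form and the default exponents are assumptions of this sketch, not values disclosed by the patent), the overall saliency could be computed as:

```python
def overall_saliency(internal_entropy_value, gray_sal, contrast_sal, alpha=1.0, beta=1.0):
    """One possible monotone combination of the three quantities (assumed form)."""
    return internal_entropy_value * (gray_sal ** alpha) * (contrast_sal ** beta)
```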
And S4, dividing a defect candidate superpixel block set and a normal pixel block set from the superpixel block set to be processed according to the integral significance corresponding to each superpixel block to be processed in the superpixel block set to be processed.
In some embodiments, the defect candidate super pixel block set and the normal pixel block set may be partitioned from the to-be-processed super pixel block set according to the overall saliency corresponding to each to-be-processed super pixel block in the to-be-processed super pixel block set.
And the defect candidate superpixel blocks in the defect candidate superpixel block set are to-be-processed superpixel blocks which possibly have defects. The normal pixel blocks in the normal pixel block set may be pending superpixel blocks that have no defects.
As an example, this step may include the steps of:
firstly, normalizing the integral significance corresponding to the super-pixel blocks to be processed in the super-pixel block set to be processed to obtain the normalized significance corresponding to the super-pixel blocks to be processed.
The value range of the normalized significance may be (0,1).
And secondly, when the normalized significance corresponding to the superpixel blocks to be processed in the superpixel block set to be processed is greater than a preset defect threshold value, determining the superpixel blocks to be processed as defect candidate superpixel blocks.
The defect threshold may be a maximum normalized saliency allowed to correspond to the to-be-processed super pixel block when the preset to-be-processed super pixel block is a normal pixel block. For example, the defect threshold may be 0.5.
And thirdly, when the normalized significance corresponding to the super-pixel block to be processed in the super-pixel block set to be processed is smaller than or equal to the defect threshold, determining the super-pixel block to be processed as a normal pixel block.
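A sketch of dividing the superpixel blocks using the normalized saliency and the defect threshold is shown below; min-max normalization is an assumed choice, as the patent only requires the normalized value to lie in (0, 1):

```python
import numpy as np

def split_blocks(saliency_by_block, defect_threshold=0.5):
    """Normalize overall saliencies and split blocks into defect candidates and normals (sketch)."""
    ids = list(saliency_by_block)
    values = np.array([saliency_by_block[i] for i in ids], dtype=float)
    norm = (values - values.min()) / (values.max() - values.min() + 1e-9)  # normalized saliency
    defect_candidates = [i for i, v in zip(ids, norm) if v > defect_threshold]
    normal_blocks = [i for i, v in zip(ids, norm) if v <= defect_threshold]
    return defect_candidates, normal_blocks
```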
And S5, screening a paint spraying defect pixel point set from the defect candidate super pixel block set according to the defect candidate super pixel block set and the normal pixel block set.
In some embodiments, a painting defect pixel point set can be screened from the defect candidate super pixel block set according to the defect candidate super pixel block set and the normal pixel block set.
And the painting defect pixel points in the painting defect pixel point set can be pixel points in the region where the defect is located.
As an example, this step may include the steps of:
firstly, determining the mean value of the gray values corresponding to each pixel point in each normal pixel block in the normal pixel block set as a normal gray mean value.
And secondly, determining the absolute value of the difference value between the gray value corresponding to the pixel point in the defect candidate super-pixel block set and the normal gray average value as the difference index corresponding to the pixel point.
And thirdly, when the difference index corresponding to the pixel point in the defect candidate superpixel block set is larger than a preset difference threshold, determining the pixel point as a paint spraying defect pixel point.
The difference threshold may be a maximum difference index allowed by a preset pixel point when the preset pixel point is a normal pixel point. For example, the difference threshold may be 20. The normal pixel points may be pixel points in the normal region.
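A sketch of screening paint spraying defect pixel points from the defect candidate superpixel blocks is given below; the function name and the returned data layout are assumptions:

```python
import numpy as np

def defect_pixels(gray_image, labels, defect_candidates, normal_blocks, diff_threshold=20):
    """Screen painting-defect pixel points inside the candidate blocks (sketch)."""
    normal_mask = np.isin(labels, normal_blocks)
    normal_mean = gray_image[normal_mask].mean()                # normal gray mean value
    candidate_mask = np.isin(labels, defect_candidates)
    diff = np.abs(gray_image.astype(float) - normal_mean)       # difference index per pixel
    defect_mask = candidate_mask & (diff > diff_threshold)
    rows, cols = np.nonzero(defect_mask)
    return list(zip(rows.tolist(), cols.tolist())), diff        # defect pixel coordinates, diff map
```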
And S6, determining the target paint spraying defect degree corresponding to the furniture to be detected according to the paint spraying defect pixel point set and the furniture center pixel points acquired in advance.
In some embodiments, the target paint defect degree corresponding to the furniture to be detected may be determined according to the paint defect pixel point set and the furniture center pixel points acquired in advance.
The furniture center pixel point can be a pixel point corresponding to the center point of the furniture to be detected. The target paint spraying defect degree can represent the defect degree of the surface paint spraying of the furniture to be detected.
As an example, this step may include the steps of:
the method comprises the following steps of firstly, determining the Euclidean distance between each paint spraying defect pixel point in the paint spraying defect pixel point set and a furniture center pixel point as the target distance corresponding to the paint spraying defect pixel point.
And secondly, determining the target paint spraying defect degree corresponding to the furniture to be detected according to the number of the paint spraying defect pixel points in the paint spraying defect pixel point set, the difference index and the target distance corresponding to each paint spraying defect pixel point in the paint spraying defect pixel point set.
For example, the formula for determining the target paint defect degree corresponding to the furniture to be detected may be:
$$F = \sum_{t=1}^{T} e^{-L_t} \cdot C_t$$

wherein, $F$ is the target paint spraying defect degree corresponding to the furniture to be detected; $T$ is the number of paint spraying defect pixel points in the paint spraying defect pixel point set; $e$ is the natural constant; $L_t$ is the target distance corresponding to the $t$-th paint spraying defect pixel point in the paint spraying defect pixel point set; $C_t$ is the difference index corresponding to the $t$-th paint spraying defect pixel point in the paint spraying defect pixel point set.
In practical situations, a painting defect closer to the center of the surface of the furniture to be detected affects the attractiveness of the surface more, and the area closer to the center is used more, so a painting defect closer to the center of the surface of the furniture to be detected is more important. Therefore, taking $e^{-L_t}$, which depends on the target distance (the Euclidean distance between the paint spraying defect pixel point and the furniture center pixel point), as the weight of the difference index corresponding to the paint spraying defect pixel point makes the determined target paint spraying defect degree better conform to the actual situation and improves the accuracy of determining the target paint spraying defect degree. Moreover, when the target distance corresponding to a paint spraying defect pixel point is smaller, the paint spraying defect pixel point is closer to the furniture center pixel point, and the weight $e^{-L_t}$ tends to be larger. When $e^{-L_t}$ or $C_t$ is larger, the target paint spraying defect degree $F$ corresponding to the furniture to be detected is larger, that is, the degree of defects in the painting of the surface of the furniture to be detected is greater.
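A sketch of the target paint spraying defect degree following the formula reconstructed above is shown below; the function and variable names are assumptions:

```python
import numpy as np

def target_defect_degree(defect_points, diff, center):
    """Distance-weighted accumulation of difference indexes (sketch of the reconstructed formula)."""
    degree = 0.0
    for (r, c) in defect_points:
        distance = np.hypot(r - center[0], c - center[1])    # Euclidean target distance L_t
        degree += np.exp(-distance) * diff[r, c]             # e^{-L_t} * C_t
    return degree
```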
And S7, generating target defect information corresponding to the furniture to be detected according to the target paint spraying defect degree.
In some embodiments, the target defect information corresponding to the furniture to be detected may be generated according to the target painting defect degree.
The target defect information can represent the painting defect condition of the furniture to be detected.
As an example, this step may include the steps of:
firstly, normalizing the target paint spraying defect degree to obtain a normalized defect degree.
The value range of the normalized defect degree can be (0,1).
And secondly, generating target defect information representing that the surface of the furniture to be detected has defects when the normalized defect degree is larger than a preset defect degree threshold value.
The defect degree threshold may be a maximum normalized defect degree allowed to correspond to the furniture to be detected when the preset surface of the furniture to be detected is normal. For example, the defect level threshold may be 0.5.
And thirdly, generating target defect information representing the normal surface of the furniture to be detected when the normalized defect degree is less than or equal to a defect degree threshold value.
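A sketch of generating the target defect information is given below; the squashing function used for normalization is an assumption of this sketch, as the patent only states that the normalized defect degree lies in (0, 1):

```python
def defect_information(defect_degree, degree_threshold=0.5):
    """Turn the defect degree into target defect information (sketch)."""
    normalized = defect_degree / (1.0 + defect_degree)   # assumed normalization into (0, 1)
    if normalized > degree_threshold:
        return "surface of the furniture to be detected has painting defects"
    return "surface of the furniture to be detected is normal"
```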
According to the vision-based furniture surface painting defect detection method of the present invention, by performing image processing on the surface paint spraying image, the technical problem of low efficiency and accuracy in detecting painting defects on the furniture surface is solved, and the efficiency and accuracy of painting defect detection are improved. Firstly, a surface paint spraying image of the furniture to be detected is acquired, and the surface paint spraying image is preprocessed to obtain a target paint spraying image. In practice, when painting defect detection on the furniture surface is performed manually, it often relies on the subjective impression of the inspector, and the resulting judgments are often inaccurate; therefore, manual painting defect detection on the furniture surface often has low accuracy. By acquiring a surface paint spraying image that records the painting condition of the surface of the furniture to be detected, the painting condition can subsequently be analyzed in a quantified manner, which can improve the accuracy of painting defect detection on the furniture surface. Moreover, preprocessing the surface paint spraying image can eliminate irrelevant information in the image, recover useful real information, enhance the detectability of relevant information, and simplify the data to the maximum extent, which facilitates subsequent painting defect detection on the furniture surface through analysis of the target paint spraying image. Then, superpixel block division processing is performed on the target paint spraying image to obtain a superpixel block set to be processed. In practical situations, the texture, color, brightness and other characteristics of a painting defect often differ from those of a normal area. Therefore, through superpixel block division processing, pixel points with similar texture, color, brightness and other characteristics can be divided into the same superpixel block to be processed, which facilitates subsequent determination of the painting defect condition of the surface of the furniture to be detected. Next, the integral significance corresponding to each superpixel block to be processed in the superpixel block set to be processed is determined. The integral significance can characterize the possibility that the superpixel block to be processed is the region where a defect is located; therefore, determining the integral significance corresponding to the superpixel block to be processed quantifies that possibility, which facilitates subsequent analysis of whether the superpixel block to be processed is the region where a painting defect is located and improves the accuracy of that judgment.
Then, a defect candidate superpixel block set and a normal superpixel block set are divided from the superpixel block set to be processed according to the integral significance corresponding to each superpixel block to be processed. Since the integral significance corresponding to a superpixel block to be processed quantifies the possibility that the superpixel block is the region where a defect is located, the accuracy of dividing the defect candidate superpixel block set and the normal pixel block set is improved. Next, a paint spraying defect pixel point set is screened out from the defect candidate superpixel block set according to the defect candidate superpixel block set and the normal pixel block set. Comparing the defect candidate superpixel block set with the normal pixel block set can improve the accuracy of screening paint spraying defect pixel points. Then, the target paint spraying defect degree corresponding to the furniture to be detected is determined according to the paint spraying defect pixel point set and the furniture center pixel point acquired in advance. Comprehensively considering the paint spraying defect pixel point set and the furniture center pixel point can improve the accuracy of determining the target paint spraying defect degree corresponding to the furniture to be detected. Finally, the target defect information corresponding to the furniture to be detected is generated according to the target paint spraying defect degree. Therefore, by performing image processing on the surface paint spraying image, the present invention solves the technical problem of low efficiency and accuracy in detecting painting defects on the furniture surface, and improves the efficiency and accuracy of painting defect detection.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the technical solutions of the embodiments of the present application, and they fall within the protection scope of the present application.
Claims (9)
1. A vision-based furniture surface paint spraying defect detection method, characterized by comprising the following steps:
acquiring a surface paint spraying image of furniture to be detected, and preprocessing the surface paint spraying image to obtain a target paint spraying image;
performing superpixel block division processing on the target paint spraying image to obtain a superpixel block set to be processed;
determining the overall saliency corresponding to the superpixel blocks to be processed in the superpixel block set to be processed;
dividing a defect candidate superpixel block set and a normal superpixel block set from the superpixel block set to be processed according to the overall saliency corresponding to each superpixel block to be processed in the superpixel block set to be processed;
screening a paint spraying defect pixel point set from the defect candidate superpixel block set according to the defect candidate superpixel block set and the normal superpixel block set;
determining the target paint spraying defect degree corresponding to the furniture to be detected according to the paint spraying defect pixel point set and a pre-acquired furniture center pixel point;
and generating target defect information corresponding to the furniture to be detected according to the target paint spraying defect degree.
2. The vision-based furniture surface paint spraying defect detection method of claim 1, wherein the determining of the overall saliency corresponding to the superpixel blocks to be processed in the superpixel block set to be processed comprises:
determining the gradient amplitude and the gradient direction corresponding to each pixel point in the superpixel blocks to be processed in the superpixel block set to be processed;
determining the internal entropy corresponding to the superpixel block to be processed according to the gradient amplitude and the gradient direction corresponding to each pixel point in each superpixel block to be processed in the superpixel block set to be processed;
determining a neighborhood superpixel block set corresponding to each superpixel block to be processed in the superpixel block set to be processed;
determining the gray saliency corresponding to each superpixel block to be processed according to the superpixel block to be processed and the neighborhood superpixel block set corresponding to the superpixel block to be processed;
determining the contrast saliency corresponding to each superpixel block to be processed in the superpixel block set to be processed;
and determining the overall saliency corresponding to the superpixel blocks to be processed according to the internal entropy, the gray saliency and the contrast saliency corresponding to each superpixel block to be processed in the superpixel block set to be processed.
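Claim 2 fixes which three per-block quantities feed the overall saliency but not how they are combined. The sketch below assumes a simple product of the three terms; the variable names and toy values are illustrative only.

```python
import numpy as np

def overall_saliency(internal_entropy, gray_saliency, contrast_saliency):
    """Assumed combination: a simple product of the three per-block quantities.
    The claim only states that all three contribute, not the exact formula."""
    return internal_entropy * gray_saliency * contrast_saliency

# Toy per-block values (index = superpixel block label).
entropy      = np.array([0.2, 0.3, 1.5])
gray_sal     = np.array([2.0, 1.0, 30.0])
contrast_sal = np.array([0.5, 0.4, 3.0])
print(overall_saliency(entropy, gray_sal, contrast_sal))   # block 2 stands out
```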
3. The vision-based furniture surface paint spraying defect detection method of claim 2, wherein the determining of the internal entropy corresponding to the superpixel block to be processed according to the gradient amplitude and the gradient direction corresponding to each pixel point in each superpixel block to be processed in the superpixel block set to be processed comprises:
combining the gradient amplitude and the gradient direction corresponding to each pixel point in the superpixel block to be processed into a two-tuple corresponding to the pixel point, to obtain a two-tuple set corresponding to the superpixel block to be processed;
classifying the two-tuples in the two-tuple set corresponding to the superpixel block to be processed to obtain a two-tuple category set corresponding to the superpixel block to be processed;
and determining the internal entropy corresponding to the superpixel block to be processed according to the number of two-tuples in the two-tuple category set corresponding to the superpixel block to be processed, the number of pixel points in the superpixel block to be processed, and the number of two-tuples in each two-tuple category in the two-tuple category set corresponding to the superpixel block to be processed.
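A minimal sketch of the internal entropy of one superpixel block, assuming finite-difference gradients, quantization of magnitude and direction into bins to form the two-tuple categories, and the Shannon entropy of the category frequencies; the claim names the counts that enter the computation, while the binning scheme and the use of Shannon entropy here are assumptions.

```python
import numpy as np

def internal_entropy(gray, mask, n_mag_bins=8, n_dir_bins=8):
    """Entropy of (gradient magnitude, gradient direction) two-tuples inside one block.
    gray: 2-D float image; mask: boolean array selecting the block's pixel points."""
    gy, gx = np.gradient(gray)                       # finite-difference gradients
    mag = np.hypot(gx, gy)[mask]
    ang = np.arctan2(gy, gx)[mask]                   # direction in (-pi, pi]

    # Quantize each two-tuple into a category (assumed binning).
    mag_bin = np.digitize(mag, np.linspace(0, mag.max() + 1e-12, n_mag_bins + 1)[1:-1])
    dir_bin = np.digitize(ang, np.linspace(-np.pi, np.pi, n_dir_bins + 1)[1:-1])
    categories = mag_bin * n_dir_bins + dir_bin

    # Shannon entropy of the two-tuple category frequencies.
    _, counts = np.unique(categories, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy example: a flat patch (low entropy) vs. a noisy patch (high entropy).
rng = np.random.default_rng(0)
flat, noisy = np.ones((32, 32)), rng.random((32, 32))
mask = np.ones((32, 32), dtype=bool)
print(internal_entropy(flat, mask), internal_entropy(noisy, mask))
```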
4. The vision-based furniture surface paint spraying defect detection method of claim 2, wherein the determining of the gray saliency corresponding to the superpixel block to be processed according to each superpixel block to be processed in the superpixel block set to be processed and the neighborhood superpixel block set corresponding to the superpixel block to be processed comprises:
determining the mean value of the gray values corresponding to the pixel points in the superpixel block to be processed as a first gray mean value corresponding to the superpixel block to be processed;
determining the mean value of the gray values corresponding to the pixel points in each neighborhood superpixel block in the neighborhood superpixel block set corresponding to the superpixel block to be processed as a second gray mean value, to obtain a second gray mean value set corresponding to the superpixel block to be processed;
determining a first gray saliency according to the first gray mean value corresponding to the superpixel block to be processed and each second gray mean value in the second gray mean value set, to obtain a first gray saliency set corresponding to the superpixel block to be processed;
and screening out the maximum first gray saliency from the first gray saliency set corresponding to the superpixel block to be processed, and taking the maximum first gray saliency as the gray saliency corresponding to the superpixel block to be processed.
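A sketch of the gray saliency of one superpixel block, assuming that the first gray saliency between a block and a neighbor is the absolute difference of their gray means (the claim leaves the exact form open) and that the neighborhood superpixel block set is obtained from 4-connected adjacency in the label map.

```python
import numpy as np

def neighbor_labels(labels, lab):
    """Labels of superpixel blocks 4-adjacent to block `lab` (assumed notion of neighborhood)."""
    mask = labels == lab
    grown = np.zeros_like(mask)
    grown[:-1, :] |= mask[1:, :]; grown[1:, :] |= mask[:-1, :]
    grown[:, :-1] |= mask[:, 1:]; grown[:, 1:] |= mask[:, :-1]
    return np.setdiff1d(np.unique(labels[grown & ~mask]), [lab])

def gray_saliency(gray, labels, lab):
    first_mean = gray[labels == lab].mean()                        # first gray mean value
    second_means = [gray[labels == n].mean() for n in neighbor_labels(labels, lab)]
    # First gray saliency per neighbor (assumed: absolute mean difference), then the maximum.
    return max(abs(first_mean - m) for m in second_means)

# Toy label map: three vertical stripes, the middle one much darker.
labels = np.repeat(np.array([[0, 1, 2]]), 9, axis=0).repeat(3, axis=1)
gray = np.where(labels == 1, 0.1, 0.8)
print(gray_saliency(gray, labels, 1))   # ~0.7
```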
5. The vision-based furniture surface paint spraying defect detection method of claim 2, wherein the determining of the contrast saliency corresponding to each superpixel block to be processed in the superpixel block set to be processed comprises:
determining a gray contrast index corresponding to the superpixel block to be processed according to the gray value corresponding to each pixel point in the superpixel block to be processed, the mean value of the gray values corresponding to the pixel points in the superpixel block to be processed, and the number of pixel points in the superpixel block to be processed;
determining a gray contrast index corresponding to each neighborhood superpixel block in the neighborhood superpixel block set corresponding to the superpixel block to be processed;
and determining the contrast saliency corresponding to the superpixel block to be processed according to the gray contrast indexes corresponding to the superpixel block to be processed and to the neighborhood superpixel blocks in the neighborhood superpixel block set corresponding to the superpixel block to be processed.
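A sketch of the contrast saliency, assuming the gray contrast index of a block is the variance of its gray values (consistent with the quantities listed in claim 5) and that the contrast saliency is the maximum absolute difference between the block's index and its neighbors' indexes; both concrete formulas are assumptions.

```python
import numpy as np

def gray_contrast_index(gray, labels, lab):
    """Assumed index: variance of the gray values inside the block
    (built from the gray values, their mean and the pixel count)."""
    vals = gray[labels == lab]
    return float(((vals - vals.mean()) ** 2).sum() / vals.size)

def contrast_saliency(gray, labels, lab, neighbors):
    own = gray_contrast_index(gray, labels, lab)
    return max(abs(own - gray_contrast_index(gray, labels, n)) for n in neighbors)

# Toy example: block 1 is heavily textured, its neighbors 0 and 2 are smooth.
rng = np.random.default_rng(1)
labels = np.repeat(np.array([[0, 1, 2]]), 9, axis=0).repeat(3, axis=1)
gray = np.full(labels.shape, 0.5)
gray[labels == 1] = rng.random((labels == 1).sum())
print(contrast_saliency(gray, labels, 1, neighbors=[0, 2]))
```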
6. The vision-based furniture surface paint spraying defect detection method of claim 1, wherein the dividing of a defect candidate superpixel block set and a normal superpixel block set from the superpixel block set to be processed according to the overall saliency corresponding to each superpixel block to be processed in the superpixel block set to be processed comprises:
normalizing the overall saliency corresponding to the superpixel block to be processed in the superpixel block set to be processed to obtain a normalized saliency corresponding to the superpixel block to be processed;
when the normalized saliency corresponding to a superpixel block to be processed in the superpixel block set to be processed is larger than a preset defect threshold, determining the superpixel block to be processed as a defect candidate superpixel block;
and when the normalized saliency corresponding to a superpixel block to be processed in the superpixel block set to be processed is smaller than or equal to the defect threshold, determining the superpixel block to be processed as a normal superpixel block.
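A minimal sketch of this division step, assuming min-max normalization and an illustrative preset defect threshold of 0.5; claim 6 only requires that the overall saliency be normalized and compared with a preset threshold.

```python
import numpy as np

# Toy overall saliency values, one per superpixel block to be processed.
saliency = np.array([0.8, 1.2, 0.9, 6.5, 1.1])

# Min-max normalization (assumed form of the normalization).
norm = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)

defect_threshold = 0.5                       # preset defect threshold (illustrative value)
defect_candidates = np.where(norm > defect_threshold)[0]
normal_blocks = np.where(norm <= defect_threshold)[0]
print(defect_candidates, normal_blocks)      # block 3 becomes the defect candidate
```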
7. The vision-based furniture surface paint spraying defect detection method of claim 1, wherein the screening of a paint spraying defect pixel point set from the defect candidate superpixel block set according to the defect candidate superpixel block set and the normal superpixel block set comprises:
determining the mean value of the gray values corresponding to the pixel points in each normal superpixel block in the normal superpixel block set as a normal gray mean value;
determining the absolute value of the difference between the gray value corresponding to a pixel point in the defect candidate superpixel block set and the normal gray mean value as a difference index corresponding to the pixel point;
and when the difference index corresponding to a pixel point in the defect candidate superpixel block set is larger than a preset difference threshold, determining the pixel point as a paint spraying defect pixel point.
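A sketch of the defect pixel screening of claim 7, assuming the normal gray mean is taken over all pixels of the normal superpixel blocks (the claim could also be read per block) and using an illustrative preset difference threshold.

```python
import numpy as np

def screen_defect_pixels(gray, labels, candidate_labels, normal_labels, diff_threshold=0.15):
    """Return coordinates of paint spraying defect pixel points (illustrative sketch)."""
    normal_mean = gray[np.isin(labels, normal_labels)].mean()   # normal gray mean value
    diff_index = np.abs(gray - normal_mean)                     # difference index per pixel
    candidate_mask = np.isin(labels, candidate_labels)
    return np.argwhere(candidate_mask & (diff_index > diff_threshold))

# Toy data: block 3 is the defect candidate and is much darker than the normal blocks.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
gray = np.full((4, 4), 0.8)
gray[2:, 2:] = 0.2
print(screen_defect_pixels(gray, labels, candidate_labels=[3], normal_labels=[0, 1, 2]))
```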
8. The vision-based furniture surface paint spraying defect detection method of claim 7, wherein the determining of the target paint spraying defect degree corresponding to the furniture to be detected according to the paint spraying defect pixel point set and the pre-acquired furniture center pixel point comprises:
determining the Euclidean distance between each paint spraying defect pixel point in the paint spraying defect pixel point set and the furniture center pixel point as a target distance corresponding to the paint spraying defect pixel point;
and determining the target paint spraying defect degree corresponding to the furniture to be detected according to the number of the paint spraying defect pixel points in the paint spraying defect pixel point set, and the difference index and the target distance corresponding to each paint spraying defect pixel point in the paint spraying defect pixel point set.
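Claim 8 names the quantities that enter the target paint spraying defect degree but not the aggregation formula. The sketch below assumes the defect pixel count multiplied by the mean of distance-weighted difference indexes, with defects nearer the furniture center weighted more heavily; this weighting is purely an assumption.

```python
import numpy as np

def target_defect_degree(defect_points, diff_indices, center):
    """Sketch: combines defect pixel count, difference indexes and target distances.
    The exact aggregation below is assumed, not taken from the patent."""
    defect_points = np.asarray(defect_points, dtype=float)
    diff_indices = np.asarray(diff_indices, dtype=float)
    if defect_points.size == 0:
        return 0.0
    # Target distance: Euclidean distance to the furniture center pixel point.
    target_dist = np.linalg.norm(defect_points - np.asarray(center, dtype=float), axis=1)
    weights = 1.0 / (1.0 + target_dist)           # assumed: central defects weigh more
    return float(len(defect_points) * np.mean(diff_indices * weights))

print(target_defect_degree([(10, 12), (11, 12)], [0.6, 0.5], center=(16, 16)))
```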
9. The vision-based furniture surface paint spraying defect detection method of claim 1, wherein the generating of target defect information corresponding to the furniture to be detected according to the target paint spraying defect degree comprises:
normalizing the target paint spraying defect degree to obtain a normalized defect degree;
when the normalized defect degree is larger than a preset defect degree threshold value, generating target defect information representing that the surface of the furniture to be detected has defects;
and when the normalized defect degree is smaller than or equal to the defect degree threshold value, generating target defect information representing that the surface of the furniture to be detected is normal.
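A minimal sketch of this final decision step, assuming normalization by a reference maximum degree and an illustrative defect degree threshold of 0.5; the claim only requires some normalization and a preset defect degree threshold.

```python
def target_defect_info(defect_degree, max_degree, degree_threshold=0.5):
    """Normalize the target paint spraying defect degree and compare with a preset threshold.
    Normalizing by a reference maximum and the 0.5 threshold are assumptions."""
    normalized = defect_degree / max_degree if max_degree > 0 else 0.0
    if normalized > degree_threshold:
        return "target defect information: paint spraying defects present on the furniture surface"
    return "target defect information: furniture surface is normal"

print(target_defect_info(defect_degree=0.9, max_degree=1.2))   # -> defects present
```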
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211146708.9A CN115222743B (en) | 2022-09-21 | 2022-09-21 | Furniture surface paint spraying defect detection method based on vision |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211146708.9A CN115222743B (en) | 2022-09-21 | 2022-09-21 | Furniture surface paint spraying defect detection method based on vision |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115222743A (en) | 2022-10-21 |
| CN115222743B CN115222743B (en) | 2022-12-09 |
Family
ID=83617536
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211146708.9A Active CN115222743B (en) | 2022-09-21 | 2022-09-21 | Furniture surface paint spraying defect detection method based on vision |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115222743B (en) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9147255B1 (en) * | 2013-03-14 | 2015-09-29 | Hrl Laboratories, Llc | Rapid object detection by combining structural information from image segmentation with bio-inspired attentional mechanisms |
| CN112991305A (en) * | 2021-03-24 | 2021-06-18 | 苏州亚朴智能科技有限公司 | Visual inspection method for surface defects of paint spraying panel |
| CN113781402A (en) * | 2021-08-19 | 2021-12-10 | 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) | Chip surface scratch defect detection method, device and computer equipment |
| CN113888461A (en) * | 2021-08-26 | 2022-01-04 | 华能大理风力发电有限公司 | Method, system and equipment for detecting defects of hardware parts based on deep learning |
| CN115049835A (en) * | 2022-08-16 | 2022-09-13 | 众烁精密模架(南通)有限公司 | Data preprocessing method based on die-casting die defect identification |
Non-Patent Citations (2)
| Title |
|---|
| Huizhou Liu et al.: "An adaptive defect detection method for LNG storage tank insulation layer based on visual saliency", Process Safety and Environmental Protection * |
| 马逐曦: "Research on visual detection of surface defects of plane-milled workpieces based on superpixels", China Master's Theses Full-text Database, Engineering Science and Technology I * |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115375675A (en) * | 2022-10-24 | 2022-11-22 | 山东济矿鲁能煤电股份有限公司阳城煤矿 | Coal quality detection method based on image data |
| CN115797299A (en) * | 2022-12-05 | 2023-03-14 | 常宝新材料(苏州)有限公司 | Defect detection method of optical composite film |
| CN115797299B (en) * | 2022-12-05 | 2023-09-01 | 常宝新材料(苏州)有限公司 | Defect detection method of optical composite film |
| CN115880302A (en) * | 2023-03-08 | 2023-03-31 | 杭州智源电子有限公司 | Instrument panel welding quality detection method based on image analysis |
| CN115880302B (en) * | 2023-03-08 | 2023-05-23 | 杭州智源电子有限公司 | Method for detecting welding quality of instrument panel based on image analysis |
| CN116363136A (en) * | 2023-06-01 | 2023-06-30 | 山东创元智能设备制造有限责任公司 | On-line screening method and system for automatic production of motor vehicle parts |
| CN116363136B (en) * | 2023-06-01 | 2023-08-11 | 山东创元智能设备制造有限责任公司 | On-line screening method and system for automatic production of motor vehicle parts |
| CN117474910A (en) * | 2023-12-27 | 2024-01-30 | 陕西立拓科源科技有限公司 | Visual detection method for motor quality |
| CN117474910B (en) * | 2023-12-27 | 2024-03-12 | 陕西立拓科源科技有限公司 | Visual detection method for motor quality |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115222743B (en) | 2022-12-09 |
Similar Documents
| Publication | Title |
|---|---|
| CN115222743B (en) | Furniture surface paint spraying defect detection method based on vision | |
| CN116721106B (en) | Profile flaw visual detection method based on image processing | |
| CN114937055B (en) | Image self-adaptive segmentation method and system based on artificial intelligence | |
| US20230289979A1 (en) | A method for video moving object detection based on relative statistical characteristics of image pixels | |
| Mizushima et al. | An image segmentation method for apple sorting and grading using support vector machine and Otsu’s method | |
| CN115082683A (en) | Injection molding defect detection method based on image processing | |
| CN109682839B (en) | Online detection method for surface defects of metal arc-shaped workpiece | |
| CN116152242B (en) | Visual detection system of natural leather defect for basketball | |
| Gyimah et al. | A robust completed local binary pattern (RCLBP) for surface defect detection | |
| CN116883408B (en) | Integrating instrument shell defect detection method based on artificial intelligence | |
| Wang et al. | A non-reference evaluation method for edge detection of wear particles in ferrograph images | |
| CN114240888A (en) | A method and system for repairing paint defects of furniture components based on image processing | |
| CN115100174B (en) | Ship sheet metal part paint surface defect detection method | |
| CN114549441A (en) | Sucker defect detection method based on image processing | |
| CN113221881B (en) | A multi-level smartphone screen defect detection method | |
| CN115994904A (en) | Garment steamer panel production quality detection method based on computer vision | |
| CN117830312B (en) | Alloy crack nondestructive testing method based on machine vision | |
| CN118470015B (en) | Visual detection method and system for production quality of titanium alloy rod | |
| CN107993219A (en) | A kind of deck of boat detection method of surface flaw based on machine vision | |
| CN119810032A (en) | A method for detecting the surface spraying quality of a bicycle crankset | |
| CN119379609A (en) | A flange surface defect detection algorithm based on traditional image processing | |
| CN114494205B (en) | Door and window corrosion degree determination method based on self-adaptive color grading | |
| Chen et al. | Study of Sub-pixel Level Edge Extraction for Graphics Under Different Environmental Conditions | |
| CN119399567B (en) | Surface cleaning and identifying method for rolling titanium alloy bar | |
| CN119067980B (en) | Circuit breaker plastic clamp quality detection method and system based on visual assistance |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |