
CN118351422B - Training method and device for LED lamp strip defect detection model, computer readable storage medium and LED lamp strip defect detection method - Google Patents


Info

Publication number
CN118351422B
CN118351422B (application CN202410767574.5A; earlier publication CN118351422A)
Authority
CN
China
Prior art keywords
led lamp
lamp strip
sample
training
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410767574.5A
Other languages
Chinese (zh)
Other versions
CN118351422A (en)
Inventor
周文辉
刘志业
黎冬媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China Zhongshan Institute
Original Assignee
University of Electronic Science and Technology of China Zhongshan Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China Zhongshan Institute filed Critical University of Electronic Science and Technology of China Zhongshan Institute
Priority to CN202410767574.5A priority Critical patent/CN118351422B/en
Publication of CN118351422A publication Critical patent/CN118351422A/en
Application granted granted Critical
Publication of CN118351422B publication Critical patent/CN118351422B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)
  • Testing Of Optical Devices Or Fibers (AREA)

Abstract

The embodiment of the invention provides a training method and device for an LED lamp strip defect detection model, a computer-readable storage medium, and an LED lamp strip defect detection method. The training method comprises the following steps: providing an LED lamp strip training set; selecting K typical positive samples from the training set and storing them in a memory pool; obtaining a pseudo-abnormal sample for each positive sample; and using the remaining positive samples together with the pseudo-abnormal samples as input images for model training, in which a feature extraction network extracts high-level feature information of the input image at each preset dimension; difference information between the input image and the memory samples is obtained and the optimal difference information determined; concatenated information is obtained; a fusion feature map is obtained from the concatenated information; and a spatial attention map is acquired for each fusion feature map and output to a decoder through skip connections. The embodiment can improve the robustness of the model and makes it easy to extend to different data sets.

Description

Training method and device for LED lamp strip defect detection model, computer readable storage medium and LED lamp strip defect detection method
Technical Field
The embodiment of the invention relates to the technical field of computer vision detection, in particular to a training method and device of an LED lamp strip defect detection model, a computer readable storage medium and an LED lamp strip defect detection method.
Background
The production process of an LED lamp strip includes an inspection step that checks whether each LED lamp bead on the produced strip is defective.
Currently, computer vision technology is widely applied to LED lamp strip defect detection. In the existing approach, a pre-trained detection model analyses an image of the lit LED lamp strip, captured by a camera, to determine whether each LED lamp bead is defective. During training, images of the lit LED lamp beads serve as training samples; positive samples are separated from the training set and fed directly into a Deep SVDD network for deep learning, yielding the detection model.
However, the inventors found that the Deep SVDD deep-learning training paradigm is quite restrictive: the model is relatively difficult to train and extends poorly. Moreover, conventional model training learns only the image features of positive samples, which limits the model's detection performance.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is to provide a training method for an LED lamp strip defect detection model that improves the robustness of the model and extends easily to different data sets.
A further technical problem to be solved by the embodiments of the invention is to provide a training device for the LED lamp strip defect detection model that improves the robustness of the model and extends easily to different data sets.
A further technical problem to be solved by the embodiments of the invention is to provide a computer-readable storage medium storing a computer program that improves the robustness of the model and extends easily to different data sets.
A further technical problem to be solved by the embodiments of the invention is to provide an LED lamp strip defect detection method that is robust and can be extended to different data sets.
In order to solve the above technical problems, the embodiments of the invention provide the following technical solution: a training method for an LED lamp strip defect detection model, comprising the following steps:
providing an LED lamp strip training set comprising only positive samples;
selecting K typical positive samples from the LED lamp strip training set with a K-means++ cluster selection algorithm, extracting template image features of each typical positive sample at several different preset dimensions with a preset feature extraction network, and storing these features in a memory pool as memory samples;
obtaining, with a pseudo-abnormal sample generation method, a pseudo-abnormal sample corresponding to each positive sample in the LED lamp strip training set; and
using the positive samples remaining in the LED lamp strip training set (excluding the typical positive samples) together with the pseudo-abnormal samples as input images for the following model training:
extracting high-level feature information of the input image at each preset dimension with the feature extraction network;
computing the Euclidean distance between the high-level feature information of the input image and each memory sample vector at each preset dimension to obtain the difference information between the input image and the memory samples, and determining the optimal difference information;
concatenating the difference information of each dimension in the optimal difference information with the high-level feature information of the input image to obtain concatenated information;
performing multi-scale feature fusion on the concatenated information with a multi-scale feature fusion network model to obtain a fusion feature map at each preset dimension; and
obtaining a spatial attention map for each fusion feature map, passing each spatial attention map to a decoder through a skip connection, and outputting a predicted image after the decoder predicts from the features in the spatial attention maps.
Further, during model training, the difference between the true value S and the predicted value S* is also computed, and a minimized L1 loss function and a focal loss function are constructed to train the model, which specifically includes:
constructing the minimized L1 loss function L_l1 = |S − S*|, where S denotes the true pixel value, at the position corresponding to the input image, of an abnormal pixel in the predicted image, and S* denotes the predicted pixel value of that abnormal pixel;
constructing the focal loss function L_f = −α_t · (1 − p_t)^γ · log(p_t), where p_t denotes the prediction probability of an abnormal pixel in the predicted image, and α_t and γ are hyperparameters controlling the degree of weighting;
combining the minimized L1 loss function and the focal loss function into the total loss function L = λ_l1 · L_l1 + λ_f · L_f, where λ_l1 and λ_f are predetermined empirical constants; and
correcting the parameter values of each network layer in the feature extraction network, the multi-scale feature fusion network model, and the decoder so as to minimize the total loss function, and outputting the LED lamp strip defect detection model when the total loss function reaches its minimum.
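As an illustrative sketch rather than the patent's own code, the standard forms of the minimized L1 loss, the focal loss, and their weighted combination described above can be written as follows; the default values of α_t, γ, λ_l1, and λ_f are common choices assumed here, not values given by the patent:

```python
import numpy as np

def l1_loss(s_true, s_pred):
    """Minimized L1 loss between the true anomaly map S and the prediction S*."""
    return np.abs(s_true - s_pred).mean()

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """Focal loss on the predicted probability p_t of each abnormal pixel.
    alpha_t and gamma down-weight easy examples; values here are assumed defaults."""
    eps = 1e-7
    p_t = np.clip(p_t, eps, 1.0 - eps)
    return (-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)).mean()

def total_loss(s_true, s_pred, p_t, lambda_l1=1.0, lambda_f=1.0):
    """Total loss L = lambda_l1 * L_l1 + lambda_f * L_f."""
    return lambda_l1 * l1_loss(s_true, s_pred) + lambda_f * focal_loss(p_t)
```

In practice the two empirical constants λ_l1 and λ_f would be tuned on a validation split.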
Further, selecting K typical positive samples from the LED lamp strip training set with the K-means++ cluster selection algorithm specifically comprises:
randomly selecting one positive sample from the LED lamp strip training set as the current initial cluster center;
computing, for each positive sample in the LED lamp strip training set, a first actual distance to its nearest initial cluster center;
updating the current initial cluster center with the positive sample having the largest first actual distance, and repeating the previous step until K initial cluster centers have been selected;
constructing an empty cluster space for each initial cluster center, computing a second actual distance between each positive sample in the LED lamp strip training set and each initial cluster center, and assigning each positive sample to the cluster space of its nearest initial cluster center according to the second actual distance;
computing the vector mean of the positive samples in each cluster space and updating the corresponding initial cluster center with this mean; and
repeating the previous step until the initial cluster center of every cluster space no longer changes, and determining the positive sample corresponding to each initial cluster center as a typical positive sample.
Further, obtaining the pseudo-abnormal sample corresponding to each positive sample in the LED lamp strip training set with the pseudo-abnormal sample generation method specifically comprises:
binarizing each positive sample I in the LED lamp strip training set to generate a contour map M_I, binarizing randomly pre-generated two-dimensional Perlin noise P with a preset threshold to generate a random mask M_P, and multiplying each contour map M_I pixel by pixel with the random mask M_P to generate a target mask M;
multiplying each target mask M pixel by pixel with texture data I_n drawn from the DTD dataset to generate an abnormal region image I_n* = β(M ⊙ I_n) + (1 − β)(M ⊙ I), where the transparency factor β balances the fusion of the positive sample I with the abnormal region image I_n* and is determined by random sampling from the range [0.15, 1]; and
combining the abnormal region image I_n* with the corresponding positive sample I to generate a pseudo-abnormal sample I_A = (1 − M) ⊙ I + I_n*.
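The pseudo-abnormal sample generation above can be sketched as follows. This follows the DRAEM-style blending that the text describes (contour mask intersected with a thresholded Perlin-noise mask, texture blended in with transparency β); the helper names and the exact blending form are illustrative assumptions, since the patent's formula images are not reproduced:

```python
import numpy as np

def make_target_mask(m_contour, perlin_noise, threshold=0.5):
    """Binarize the Perlin noise into a random mask M_P and intersect it
    pixel by pixel with the lamp-strip contour mask M_I to get the target mask M."""
    m_p = (perlin_noise > threshold).astype(np.float64)
    return m_contour * m_p

def make_pseudo_anomaly(image, m, texture, beta):
    """Blend a DTD texture into the masked region with transparency beta,
    then paste the result onto the defect-free image outside the mask."""
    anomalous_region = beta * (m * texture) + (1.0 - beta) * (m * image)
    return (1.0 - m) * image + anomalous_region
```

Here `beta` would be drawn uniformly from [0.15, 1] per sample, as the text specifies.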
Further, the multi-scale feature fusion network model comprises:
a first convolution layer for applying an initial convolution to the concatenated information while keeping its number of channels unchanged;
a coordinate attention weighting layer for applying coordinate attention weighting to the concatenated information at the different dimensions to capture its channel information;
an upsampling layer for upsampling the coordinate-attention-weighted concatenated information to align spatial dimensions;
a second convolution layer for convolving the dimension-aligned concatenated information to align its number of channels; and
a pixel-wise addition layer for adding, pixel by pixel, the channel-aligned concatenated information from the different dimensions.
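A minimal sketch of this fusion path for two dimensions is given below. It keeps only the upsampling, channel-alignment, and pixel-wise addition steps, omits the coordinate attention weighting for brevity, and all function names are illustrative rather than taken from the patent:

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of a (C, H, W) map to align spatial dimensions."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def conv1x1(x, weight):
    """1x1 convolution as a channel-mixing matrix multiply; aligns channel counts.
    weight has shape (C_out, C_in)."""
    c_out, c_in = weight.shape
    c, h, w = x.shape
    return (weight @ x.reshape(c, h * w)).reshape(c_out, h, w)

def fuse(feature_small, feature_large, w_align):
    """Upsample the coarser map, align its channels, then add pixel by pixel."""
    factor = feature_large.shape[1] // feature_small.shape[1]
    aligned = conv1x1(upsample_nearest(feature_small, factor), w_align)
    return aligned + feature_large
```

A real implementation would use learned convolutions and the coordinate attention layer between these steps.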
Further, the method further comprises:
inputting a test set, which contains both positive and negative samples and is of the same kind as the LED lamp strip training set, into the LED lamp strip defect detection model to obtain predicted images, and computing an anomaly score for the predicted image of each test sample, the anomaly score being computed as score(x_test) = (G(x_test) − G(x_test)_min) / (G(x_test)_max − G(x_test)_min),
where the pixels of each predicted image are ranked from high to low by pixel value, G(x_test) denotes the sum of the pixel values of the top-ranked pixels of the predicted image up to a preset position, and G(x_test)_max and G(x_test)_min denote respectively the maximum and minimum pixel values among those top-ranked pixels;
determining a score threshold that separates positive samples from negative samples by considering together the anomaly scores of all test predicted images and the corresponding sample classes; and
when detecting an image under test of an LED lamp strip with the LED lamp strip defect detection model to judge whether the strip is defective: generating a predicted image from the image under test with the detection model, computing the anomaly score of that predicted image with the anomaly score formula above, comparing this score with the score threshold, and classifying images scoring below the threshold as defect-free and images scoring above it as defective.
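The anomaly-score computation and threshold comparison might be sketched as below; `top_k` stands in for the "preset position" cut-off, and the min-max form is one plausible reading of the patent's elided formula rather than its exact expression:

```python
import numpy as np

def anomaly_score(pred, top_k=100):
    """Score a predicted anomaly map: sum the top_k pixel values G, then
    normalize with the max and min pixel values among those top_k pixels."""
    top = np.sort(pred.ravel())[::-1][:top_k]
    g, g_max, g_min = top.sum(), top.max(), top.min()
    return (g - g_min) / (g_max - g_min + 1e-8)

def is_defective(pred, score_threshold, top_k=100):
    """Compare the anomaly score against the threshold chosen on the test set."""
    return anomaly_score(pred, top_k) > score_threshold
```

The score threshold itself would be picked by sweeping over the scores of the labelled test images.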
Further, the feature extraction network is a ResNet network pre-trained on ImageNet, and the parameters of its first three layers are kept frozen during training.
On the other hand, to solve the further technical problem above, the embodiments of the invention also provide the following technical solution: a training apparatus for an LED lamp strip defect detection model, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the training method of the LED lamp strip defect detection model according to any one of the above.
In order to solve the above further technical problems, the embodiments of the present invention further provide the following technical solutions: a computer readable storage medium comprising a stored computer program, wherein the computer program when run controls a device in which the computer readable storage medium is located to perform the training method of the LED strip defect detection model according to any one of the above.
In order to solve the above further technical problems, the embodiments of the invention also provide the following technical solution: an LED lamp strip defect detection method, comprising the following steps:
acquiring an image under test of the LED lamp strip to be tested; and
detecting the image under test with the LED lamp strip defect detection model obtained by the above training method, to judge whether the LED lamp strip is defective.
With the above technical solution, the embodiments of the invention have at least the following beneficial effects. K typical positive samples are selected, with a K-means++ cluster selection algorithm, from an LED lamp strip training set that contains only positive samples; because the K-means++ algorithm can cluster and select samples for different kinds of data, it is highly flexible and effectively improves the robustness of the trained model. Each positive sample in the training set is then processed with a pseudo-abnormal sample generation method to obtain a corresponding pseudo-abnormal sample, and the generated pseudo-abnormal samples are used to augment the training set: during training, the positive samples remaining after removal of the typical positive samples are combined with the pseudo-abnormal samples as input images for network training, which improves the final model's learning of abnormal samples and its detection performance. Next, the Euclidean distance between corresponding features is computed as the difference information between them, yielding the optimal difference information. The concatenated information formed by combining the difference information with the high-level feature information of the input image is fused by a multi-scale feature fusion network model, several spatial attention maps are computed with an attention guidance mechanism, and each spatial attention map is passed to the decoder through skip connections; the decoder finally predicts and outputs a predicted image, yielding the LED lamp strip defect detection model and completing its training.
Drawings
FIG. 1 is a flowchart showing the steps of an alternative embodiment of the training method of the LED lamp strip defect detection model of the present invention.
FIG. 2 is a schematic block diagram of an alternative embodiment of a multi-scale feature fusion model of the training method of the LED lamp strip defect detection model of the present invention.
FIG. 3 is a schematic block diagram of an alternative embodiment of the training device for the LED lamp strip defect detection model of the present invention.
FIG. 4 is a functional block diagram of an alternative embodiment of the training device for the LED lamp strip defect detection model of the present invention.
Detailed Description
The application will be described in further detail with reference to the drawings and the specific examples. It should be understood that the following exemplary embodiments and descriptions are only for the purpose of illustrating the application and are not to be construed as limiting the application, and that the embodiments and features of the embodiments of the application may be combined with one another without conflict.
Referring to fig. 1, an alternative embodiment of the present invention provides a training method for an LED lamp strip defect detection model, comprising the following steps:
S1: providing an LED lamp strip training set comprising only positive samples;
S2: selecting K typical positive samples from the LED lamp strip training set with a K-means++ cluster selection algorithm, extracting template image features of each typical positive sample at several different preset dimensions with a preset feature extraction network, and storing these features in a memory pool as memory samples;
S3: obtaining, with a pseudo-abnormal sample generation method, a pseudo-abnormal sample corresponding to each positive sample in the LED lamp strip training set;
S4: using the positive samples remaining in the LED lamp strip training set (excluding the typical positive samples) together with the pseudo-abnormal samples as input images for the following model training:
S41: extracting high-level feature information of the input image at each preset dimension with the feature extraction network;
S42: computing the Euclidean distance between the high-level feature information of the input image and each memory sample vector at each preset dimension to obtain the difference information between the input image and the memory samples, and determining the optimal difference information;
S43: concatenating the difference information of each dimension in the optimal difference information with the high-level feature information of the input image to obtain concatenated information;
S44: performing multi-scale feature fusion on the concatenated information with a multi-scale feature fusion network model to obtain a fusion feature map at each preset dimension; and
S45: obtaining a spatial attention map for each fusion feature map, passing each spatial attention map to a decoder through a skip connection, and outputting a predicted image after the decoder predicts from the features in the spatial attention maps.
According to this embodiment of the invention, K typical positive samples are selected, with the K-means++ cluster selection algorithm, from the LED lamp strip training set containing only positive samples; because the K-means++ algorithm can cluster and select samples for different kinds of data, it is highly flexible and effectively improves the robustness of the trained model. Each positive sample in the training set is then processed with the pseudo-abnormal sample generation method to obtain a corresponding pseudo-abnormal sample, and the generated pseudo-abnormal samples are used to augment the training set: during training, the positive samples remaining after removal of the typical positive samples are combined with the pseudo-abnormal samples as input images for network training, which improves the final model's learning of abnormal samples and its detection performance. Next, the Euclidean distance between corresponding features is computed as the difference information between them, yielding the optimal difference information. The concatenated information formed by combining the difference information with the high-level feature information of the input image is fused by the multi-scale feature fusion network model, several spatial attention maps are computed with the attention guidance mechanism, and each spatial attention map is passed to the decoder through skip connections; the decoder finally predicts and outputs a predicted image, yielding the LED lamp strip defect detection model and completing its training.
In an alternative embodiment of the present invention, when performing model training, the difference between the true value S and the predicted value S* is also computed, and a minimized L1 loss function and a focal loss function are constructed to train the model, which specifically includes:
constructing the minimized L1 loss function L_l1 = |S − S*|, where S denotes the true pixel value, at the position corresponding to the input image, of an abnormal pixel in the predicted image, and S* denotes the predicted pixel value of that abnormal pixel;
constructing the focal loss function L_f = −α_t · (1 − p_t)^γ · log(p_t), where p_t denotes the prediction probability of an abnormal pixel in the predicted image, and α_t and γ are hyperparameters controlling the degree of weighting;
combining the minimized L1 loss function and the focal loss function into the total loss function L = λ_l1 · L_l1 + λ_f · L_f, where λ_l1 and λ_f are predetermined empirical constants; and
correcting the parameter values of each network layer in the feature extraction network, the multi-scale feature fusion network model, and the decoder so as to minimize the total loss function, and outputting the LED lamp strip defect detection model when the total loss function reaches its minimum.
In this embodiment, the model is constrained with the minimized L1 loss function and the focal loss function, which are combined into the total loss function; by correcting the relevant parameter values of each network layer in the feature extraction network, the multi-scale feature fusion network model, and the decoder during training so as to minimize the total loss function, the LED lamp strip defect detection model is obtained and its training completed.
According to this embodiment of the invention, K typical positive samples are selected, with the improved K-means (K-means++) cluster selection algorithm, from the LED lamp strip training set containing only positive samples; because the algorithm can cluster and select samples for different kinds of data, it is highly flexible and effectively improves the robustness of the trained model. Each positive sample in the training set is then processed with the pseudo-abnormal sample generation method to obtain a corresponding pseudo-abnormal sample, and the generated pseudo-abnormal samples are used to augment the training set: during training, the positive samples other than the typical positive samples are combined with the pseudo-abnormal samples as model training samples, which improves the final model's learning of abnormal samples and its detection performance. Next, the Euclidean distance between corresponding features is computed as the difference information between them, yielding the optimal difference information. The concatenated information formed by combining the difference information with the original feature information of the input sample is fused by the multi-scale feature fusion network model, several spatial attention maps are computed with the attention guidance mechanism and passed to the decoder through skip connections, and the decoder finally predicts and outputs a predicted image. The model is constrained with the minimized L1 loss function and the focal loss function to construct the total loss function, and by correcting the relevant parameter values of each network layer in the feature extraction network, the multi-scale feature fusion network model, and the decoder during training so as to minimize the total loss function, the LED lamp strip defect detection model is obtained and training is completed.
In particular, it is understood that the positive sample is an LED strip image without defects and the negative sample is an LED strip image with defects.
In an optional embodiment of the present invention, the selecting K typical positive samples from the LED strip training set based on the K-means++ cluster selection algorithm specifically includes:
Randomly selecting a positive sample from the LED lamp strip training set as a current initial cluster center;
calculating a first actual distance between each positive sample in the LED lamp strip training set and its nearest current initial cluster center;
Updating the current initial cluster center by adopting the positive sample with the largest first actual distance and circularly executing the previous step until K initial cluster centers are selected;
Adopting each initial cluster center to correspondingly construct an empty cluster space, calculating a second actual distance between each positive type sample in the LED lamp strip training set and each initial cluster center, and dividing each positive type sample in the LED lamp strip training set into the cluster space closest to the initial cluster center according to the second actual distance;
Calculating a vector average value of positive class samples in each cluster space, and updating the initial cluster center corresponding to the cluster space by adopting the vector average value; and
And circularly executing the previous step until the initial cluster center of each cluster space no longer changes, so as to determine the positive sample corresponding to each initial cluster center as a typical positive sample.
In this embodiment, the improved K-means cluster selection algorithm refers specifically to the sample processing procedure above; K typical positive samples can be selected simply and quickly from the LED lamp strip training set, with high data processing efficiency.
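The selection procedure above can be sketched as follows (a minimal NumPy illustration, assuming each positive sample has already been flattened into a feature vector; the function and variable names are hypothetical, not from the patent):

```python
import numpy as np

def select_typical_samples(features, k, seed=0):
    """Sketch of the K-means++-style selection described above.

    features: (num_samples, dim) array of flattened positive-sample features.
    Returns indices of the K typical positive samples (each final cluster
    center mapped back to its nearest real sample).
    """
    rng = np.random.default_rng(seed)
    n = len(features)
    # Step 1: pick a random positive sample as the first initial cluster center.
    centers = [features[rng.integers(n)]]
    # Steps 2-3: repeatedly add the sample farthest from its nearest center.
    while len(centers) < k:
        dists = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[np.argmax(dists)])
    centers = np.stack(centers)
    # Steps 4-6: standard K-means iterations until the centers stop moving.
    while True:
        assign = np.argmin(
            np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2), axis=1)
        new_centers = np.stack([
            features[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Map each converged center to the nearest actual sample (the "typical" one).
    return [int(np.argmin(np.linalg.norm(features - c, axis=1))) for c in centers]
```

In practice the flattened features could come from any fixed embedding of the positive samples; the sketch only illustrates the center seeding and iteration logic.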
In an optional embodiment of the present invention, the obtaining, based on the method for generating a pseudo-abnormal sample, a pseudo-abnormal sample corresponding to each positive sample in the LED strip training set specifically includes:
Respectively carrying out binarization processing on each positive sample I in the LED lamp strip training set to generate a contour description map M_I, carrying out preset-threshold binarization processing on pre-generated random two-dimensional Perlin noise P to generate a random mask map M_p, and multiplying each contour description map M_I pixel by pixel with the random mask map M_p to generate a target mask map M;
Multiplying each target mask map M pixel by pixel with texture data I_n derived from the DTD dataset to generate an abnormal region image I_n*, where the specific calculation formula is: I_n* = β(M ⊙ I_n) + (1 − β)(M ⊙ I); wherein β is a transparency factor used to balance the fusion of the positive sample I and the abnormal region image I_n*, and is determined by random sampling in the range [0.15, 1]; and
Combining the abnormal region image I_n* with the corresponding positive sample I to generate a pseudo-abnormal sample I_A, where the combination formula is: I_A = (1 − M) ⊙ I + I_n*.
In this embodiment, the pseudo-abnormal sample generating method specifically performs the above processing procedure on each positive sample I in the LED strip training set, and can simply and quickly generate a pseudo-abnormal sample I A corresponding to each positive sample I.
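The generation procedure can be sketched as follows (a minimal NumPy illustration on single-channel images in [0, 1]; thresholded uniform random noise stands in for the two-dimensional Perlin noise P, which a real implementation would generate with a Perlin-noise library, and the blend formula follows the reconstruction above; all names are hypothetical):

```python
import numpy as np

def make_pseudo_anomaly(img, texture, beta=None, noise=None, seed=0):
    """Sketch of the pseudo-abnormal sample generation described above.

    img, texture: (H, W) float arrays in [0, 1]. Returns (I_A, M).
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Contour map M_I: binarize the positive sample (foreground = strip region).
    m_i = (img > img.mean()).astype(np.float64)
    # Random mask M_p: threshold the (stand-in) noise at a preset level.
    if noise is None:
        noise = rng.random((h, w))
    m_p = (noise > 0.5).astype(np.float64)
    # Target mask M: pixel-wise product restricts anomalies to the strip region.
    m = m_i * m_p
    # Transparency factor beta, randomly sampled from [0.15, 1].
    if beta is None:
        beta = rng.uniform(0.15, 1.0)
    # Abnormal region image: blend texture and original inside the mask.
    i_n_star = beta * (m * texture) + (1.0 - beta) * (m * img)
    # Pseudo-abnormal sample: keep the original image outside the mask.
    i_a = (1.0 - m) * img + i_n_star
    return i_a, m
```

The binarization threshold (the image mean) and the noise threshold 0.5 are illustrative stand-ins for the patent's preset thresholds.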
In an alternative embodiment of the present invention, as shown in fig. 2, the multi-scale feature fusion network model includes:
a first convolution layer for performing a primary convolution on the concatenated information to maintain a number of channels of the concatenated information;
A coordinate attention weighting layer, configured to perform coordinate attention weighting on the serial information in different dimensions to capture channel information of the serial information;
an up-sampling layer for up-sampling the serial information weighted by the coordinate attention to align the dimensions;
A second convolution layer, configured to deconvolute the serial information after the alignment dimension to align the number of channels of the serial information; and
And the pixel addition operation layer is used for carrying out pixel-by-pixel addition on the serial information of different dimensions after the number of the aligned channels.
In this embodiment, the first convolution layer is specifically a 3×3 convolution layer configured to maintain the number of channels of the serial information. Since the serial information of each dimension is a simple concatenation of the advanced feature information and the difference information of the model training sample, a coordinate attention module (CA-block, i.e. the coordinate attention weighting layer) is used to capture the channel information of the serial information. After coordinate attention weighting, the serial information of different dimensions is first upsampled by the upsampling layer to align the dimensions, then convolved by the second convolution layer to align the channel numbers, and finally added element by element by the pixel addition operation layer to realize multi-scale feature fusion.
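The alignment-and-addition part of the fusion can be sketched as follows (a minimal NumPy illustration that omits the initial 3×3 convolution and the coordinate attention weighting; nearest-neighbour upsampling aligns the spatial dimensions and a 1×1 convolution aligns the channel numbers; the weights and names are hypothetical):

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, weight):
    """1x1 convolution as a channel-mixing matmul: weight is (C_out, C_in)."""
    c, h, w = x.shape
    return (weight @ x.reshape(c, h * w)).reshape(weight.shape[0], h, w)

def fuse(ci1, ci2, ci3, w2, w3):
    """Align the two coarser maps to ci1's resolution and channel count,
    then add pixel by pixel.
    ci1: (64, 64, 64); ci2: (128, 32, 32); ci3: (256, 16, 16);
    w2: (64, 128); w3: (64, 256)."""
    ci2_up = conv1x1(upsample2x(ci2), w2)               # -> (64, 64, 64)
    ci3_up = conv1x1(upsample2x(upsample2x(ci3)), w3)   # -> (64, 64, 64)
    return ci1 + ci2_up + ci3_up
```

A trained implementation would learn w2 and w3 as convolution weights; here they are random placeholders to show the shape bookkeeping.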
In an alternative embodiment of the invention, the method further comprises:
Inputting a test set, which comprises both positive and negative samples and is of the same kind as the LED lamp strip training set, into the LED lamp strip defect detection model to obtain predicted images, and calculating the abnormal score of the predicted image corresponding to each sample in the test set, wherein the abnormal score calculation formula is: g(x_test) = (G(x_test) − G(x_test)_min) / (G(x_test)_max − G(x_test)_min);
Wherein the pixel points of each predicted image are ranked from high to low by pixel value, G(x_test) represents the sum of the pixel values of the top preset number of pixel points of the predicted image, and G(x_test)_max and G(x_test)_min represent the maximum and minimum values of G(x_test) over the test set;
Determining and outputting a score threshold value for distinguishing a positive sample from a negative sample by integrating the abnormal score of each test predicted image and the corresponding sample type;
And detecting an image to be detected of an LED lamp strip by adopting the LED lamp strip defect detection model to judge whether the LED lamp strip has defects. Specifically, the image to be detected is detected by the LED lamp strip defect detection model to generate a predicted image; the abnormal score of the predicted image corresponding to the image to be detected is calculated according to the abnormal score calculation formula and compared with the score threshold; an image to be detected whose score is below the score threshold is classified as having no defects, and one whose score is above the score threshold is classified as having defects.
When a traditional model is used for defect detection, it usually only outputs a predicted image, and a manually set threshold is needed to distinguish positive and negative samples. In this embodiment, the trained model is tested: for the predicted image output by the model, the first 1000 largest pixel values are taken and an abnormal score is calculated. Assuming the test sample image is x_test and the output is S*(x_test), the sum of the first 1000 maximum pixel values is G(x_test), and the abnormal score g(x_test) is obtained through normalization. Since positive samples correspond to lower abnormal scores and negative samples to higher ones, anomaly detection is performed by reasonably determining a dividing score threshold: images below the score threshold are classified as positive (i.e. no defect), and those above the threshold as negative (i.e. defective).
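The scoring procedure can be sketched as follows (a minimal NumPy illustration, assuming the top-1000 pixel sum described above and min-max normalization over the test set; the function names are hypothetical):

```python
import numpy as np

def abnormal_score_raw(pred, top_k=1000):
    """G(x_test): sum of the top_k largest pixel values of a predicted image."""
    flat = np.sort(pred.ravel())[::-1]
    return float(flat[:top_k].sum())

def normalize_scores(raw_scores):
    """Min-max normalize the raw scores over the whole test set."""
    raw = np.asarray(raw_scores, dtype=np.float64)
    return (raw - raw.min()) / (raw.max() - raw.min())
```

A score threshold separating positive from negative samples would then be chosen on the normalized scores.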
In an alternative embodiment of the present invention, the feature extraction network is a ResNet network pre-trained on ImageNet, and the parameters of the first three layers of the ResNet network are kept constant during training. In this embodiment, the ImageNet-pre-trained ResNet network can effectively extract the high-level features of a sample; meanwhile, to ensure consistency between the high-level template image features in the memory pool and the image features of the input image, the parameters of the first three layers of the ResNet network are fixed throughout the training process.
In specific implementation, the ImageNet-pre-trained ResNet network performs feature extraction on each typical positive sample in the LED lamp strip training set and outputs template image features in three dimensions: 64×64×64, 128×32×32 and 256×16×16; likewise, feature extraction on each input image outputs advanced feature information II in the same three dimensions of 64×64×64, 128×32×32 and 256×16×16;
In step S42, the Euclidean distance between the advanced feature information II and each template image feature in the memory pool is calculated per dimension; the Euclidean distances over the different dimensions are summed to obtain a Euclidean distance sum between the advanced feature information II and each template image feature; the minimum Euclidean distance sum then determines the best difference information between the advanced feature information II and the template image features in the memory pool, and the per-dimension Euclidean distances corresponding to the best difference information are combined with the advanced feature information to obtain the combined serial information. When calculating the Euclidean distance between the advanced feature information and each template image feature in the memory pool, the specific calculation formula is as follows:
D(II, MI_i) = ||II − MI_i||_2 , i = 1, 2, …, N   (Equation 1)
Wherein N represents the total number of template image features of the same dimension in the memory pool, and MI represents the template image features in the memory pool;
The minimum of the Euclidean distance sums is selected as the best difference information DI* between the advanced feature information II and the template image features in the memory pool; the specific calculation formula is as follows:
DI* = min_i Σ_{n=1}^{N} D(II_n, MI_{i,n})   (Equation 2)
Wherein N takes the value 3, representing the three dimensions;
The best difference information DI* indicates the difference between a model training sample and the typical positive sample most similar to it; therefore, the greater the difference value at a position, the higher the possibility that an abnormality occurs in that position area of the model training sample;
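The matching step can be sketched as follows (a minimal NumPy illustration of Equations 1 and 2: a channel-wise Euclidean distance map is computed per dimension, templates are ranked by their summed distances, and the distance maps of the closest template are kept as DI*; the names are hypothetical):

```python
import numpy as np

def best_difference_info(ii, memory_pool):
    """ii: list of 3 feature maps, e.g. [(64,64,64), (128,32,32), (256,16,16)].
    memory_pool: list of template entries, each a list of 3 maps with the
    same shapes. Returns the per-dimension distance maps of the closest
    template (the best difference information DI*)."""
    best_sum, best_maps = None, None
    for template in memory_pool:
        # Channel-wise Euclidean distance at each spatial position (Eq. 1).
        dist_maps = [np.linalg.norm(a - b, axis=0) for a, b in zip(ii, template)]
        # Sum over positions and dimensions to rank the templates (Eq. 2).
        total = sum(d.sum() for d in dist_maps)
        if best_sum is None or total < best_sum:
            best_sum, best_maps = total, dist_maps
    return best_maps
```

The per-position distance maps returned here are what gets concatenated with the advanced feature information to form the serial information.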
Further, in step S43, the difference information (i.e. the Euclidean distances) in the three dimensions constituting the best difference information DI* is combined with the advanced feature information of the corresponding model training sample to obtain the serial information CI_1, CI_2 and CI_3, respectively.
Next, in step S44, after feature fusion is performed on each piece of serial information by the multi-scale feature fusion module, fusion feature maps CI_n* (n = 1, 2, 3) of sizes 64×64×64, 128×32×32 and 256×16×16 are obtained respectively. The channel pixel mean image of the 256×16×16 feature map CI_3* is used directly as the spatial attention map M_3; the spatial attention map M_2 is obtained by upsampling M_3 and multiplying it pixel by pixel with the channel pixel mean image of the 128×32×32 feature map CI_2*; the spatial attention map M_1 is then obtained from the channel pixel mean image of CI_1* by the same operation. The specific calculation formulas are as follows:
M_3 = (1/C_3) Σ_{c=1}^{C_3} CI_3*(c)   (Equation 3)
M_2 = M_3^U ⊙ (1/C_2) Σ_{c=1}^{C_2} CI_2*(c)   (Equation 4)
M_1 = M_2^U ⊙ (1/C_1) Σ_{c=1}^{C_1} CI_1*(c)   (Equation 5)
Wherein C_1, C_2 and C_3 represent the channel numbers of CI_1*, CI_2* and CI_3*, respectively, and M_2^U and M_3^U represent the feature maps obtained by upsampling M_2 and M_3. Finally, M_1, M_2 and M_3 flow to the decoder through jump connections; in a specific implementation, the network structure of the decoder is a U-net structure, which is not described in detail herein.
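Equations 3 to 5 can be sketched as follows (a minimal NumPy illustration using nearest-neighbour upsampling between the scales; the names are hypothetical):

```python
import numpy as np

def channel_mean(ci):
    """Channel pixel mean image of a (C, H, W) fusion feature map."""
    return ci.mean(axis=0)

def upsample2x(m):
    """Nearest-neighbour 2x upsampling of a 2-D map."""
    return m.repeat(2, axis=0).repeat(2, axis=1)

def attention_maps(ci1, ci2, ci3):
    """Equations 3-5: M3 is the channel mean of the coarsest map; M2 and M1
    multiply an upsampled coarser attention map into the next channel mean."""
    m3 = channel_mean(ci3)                   # (16, 16), Eq. 3
    m2 = upsample2x(m3) * channel_mean(ci2)  # (32, 32), Eq. 4
    m1 = upsample2x(m2) * channel_mean(ci1)  # (64, 64), Eq. 5
    return m1, m2, m3
```

The three maps would then be passed to the decoder through the jump connections.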
In addition, the input images are composed of the remaining positive samples in the LED lamp strip training set (other than the typical positive samples) together with the pseudo-abnormal samples. For a positive sample, the ground truth S is an all-ones image, indicating that no abnormal region is contained; S is deliberately not set to all zeros, so as to prevent the gradient from vanishing. For a pseudo-abnormal sample, S is composed from I_n and represents the abnormal region of the pseudo-abnormal sample.
Finally, for the total loss function, the relative importance of the losses L_l1 and L_f is controlled during model training through the parameters λ_l1 and λ_f, so that the objective function is better optimized; empirically, λ_l1 is set to 0.6 and λ_f to 0.4.
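The total loss can be sketched as follows (a minimal NumPy illustration of the weighted combination of the L1 and focal losses; the α_t and γ defaults are illustrative assumptions rather than values from the patent, and a real implementation would use an autodiff framework):

```python
import numpy as np

def total_loss(s_true, s_pred, alpha_t=0.25, gamma=2.0,
               lam_l1=0.6, lam_f=0.4, eps=1e-7):
    """lam_l1 * L1 + lam_f * focal, with s_true in {0, 1} marking abnormal
    pixels and s_pred the predicted per-pixel probabilities."""
    # Minimized L1 loss between ground truth S and prediction S*.
    l1 = np.abs(s_true - s_pred).mean()
    # p_t: predicted probability of the true class at each pixel.
    p_t = np.where(s_true == 1, s_pred, 1.0 - s_pred)
    # Focal loss down-weights easy pixels via the (1 - p_t)^gamma factor.
    focal = (-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)).mean()
    return lam_l1 * l1 + lam_f * focal
```

The weights lam_l1 = 0.6 and lam_f = 0.4 follow the empirical setting stated above.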
On the other hand, as shown in fig. 3, an embodiment of the present invention further provides a training device 1 for an LED strip defect detection model, where the device includes a processor 10, a memory 12, and a computer program stored in the memory 12 and configured to be executed by the processor 10, where the processor 10 implements the training method for an LED strip defect detection model according to the above embodiment when executing the computer program.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 10 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program in the training device 1 of the LED strip defect detection model. For example, the computer program may be divided into functional modules in the training device 1 of the LED strip defect detection model shown in fig. 4, where the training set providing module 21, the memory pool sample selecting module 22, the pseudo-abnormal sample generating module 23, and the model training module 25 respectively perform the above steps S1 to S4.
The training device 1 of the LED lamp strip defect detection model can be a computing device such as a desktop computer, a notebook computer, a palm computer or a cloud server, and can include, but is not limited to, a processor 10 and a memory 12. It will be understood by those skilled in the art that the schematic diagram is merely an example and does not constitute a limitation of the training device 1 of the LED lamp strip defect detection model, which may include more or fewer components than those illustrated, combine some components, or use different components; for example, the training device 1 may further include input-output devices, network access devices, a bus, and the like.
The processor 10 may be a central processing unit (Central Processing Unit, CPU), but may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor; the processor 10 is the control center of the training device 1 of the LED lamp strip defect detection model and connects the respective parts of the entire training device 1 by using various interfaces and lines.
The memory 12 may be used to store the computer program and/or modules, and the processor 10 may implement various functions of the training device 1 of the LED lamp strip defect detection model by running or executing the computer program and/or modules stored in the memory 12 and invoking data stored in the memory 12. The memory 12 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required for at least one function (such as a pattern recognition function, a pattern layering function, etc.); the storage data area may store data (such as graphic data) created according to the use of the training device 1 of the LED lamp strip defect detection model. In addition, the memory 12 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The functionality of the embodiments of the present invention, if implemented in the form of software functional modules or units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method of the foregoing embodiments may also be implemented by a computer program instructing related hardware, where the computer program may be stored in a computer-readable storage medium, and the computer program, when executed by the processor 10, may implement the steps of each of the foregoing method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium can be appropriately adjusted according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
In still another aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium includes a stored computer program, where when the computer program runs, the device where the computer readable storage medium is located is controlled to execute the training method of the LED strip defect detection model according to any one of the foregoing embodiments of the present invention.
In still another aspect, an embodiment of the present invention further provides a method for detecting defects of an LED strip, including the following steps:
acquiring an image to be tested of an LED lamp strip to be tested; and
The LED lamp strip defect detection model obtained through training by the training method of the LED lamp strip defect detection model is used for detecting the image to be detected to judge whether the LED lamp strip has defects or not.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are all within the scope of the present invention.

Claims (10)

1. The training method of the LED lamp strip defect detection model is characterized by comprising the following steps of:
providing an LED lamp strip training set only comprising positive type samples;
K typical positive class samples are selected from the LED lamp strip training set based on a K-means++ cluster selection algorithm, and template image features of each typical positive class sample in a plurality of different preset dimensions are obtained by adopting a preset feature extraction network and are used as memory samples to be stored in a memory pool;
obtaining pseudo-abnormal samples corresponding to each positive type sample in the LED lamp strip training set based on a pseudo-abnormal sample generation method; and
And taking the remaining positive class samples except the typical positive class samples in the LED lamp strip training set and the pseudo-abnormal samples as input images to carry out the following model training:
Extracting advanced feature information of the input image in each preset dimension by adopting the feature extraction network;
Calculating Euclidean distance between the advanced feature information of the input image and the memory sample vector in each preset dimension, obtaining difference information between the input image and the memory sample, and determining optimal difference information;
Combining the difference information of each dimension in the optimal difference information with the advanced feature information of the input image to obtain serial information;
Performing multi-scale feature fusion on the serial information based on a multi-scale feature fusion network model to obtain a fusion feature map of each preset dimension; and
And obtaining the spatial attention map of each fusion characteristic map, and flowing each spatial attention map to a decoder through a jump connection, and outputting a predicted image after the decoder predicts according to the characteristics in each spatial attention map.
2. The training method of the LED strip defect detection model of claim 1, wherein, when performing model training, a difference between the true value S and the predicted value S * is further calculated, and a minimized L1 loss function and a focal loss function are constructed to train the model, specifically comprising:
Constructing a minimized L1 loss function: L_l1 = |S − S*|; wherein S represents the true pixel value, at the position corresponding to the input image, of an abnormal pixel point in the predicted image, and S* represents the predicted pixel value of the abnormal pixel point in the predicted image;
Constructing the focal loss function: L_f = −α_t (1 − p_t)^γ · log(p_t); wherein p_t represents the prediction probability of an abnormal pixel point in the predicted image, and α_t and γ are hyperparameters for controlling the degree of weighting;
Constructing a total loss function by combining the minimized L1 loss function and the focal loss function: L = λ_l1 · L_l1 + λ_f · L_f; wherein λ_l1 and λ_f are predetermined empirical constants; and
And correcting the parameter values of the feature extraction network, the multi-scale feature fusion network model and each layer of network in the decoder to solve the minimum value of the total loss function, and outputting the LED lamp strip defect detection model when the total loss function is the minimum value.
3. The training method of the LED lamp strip defect detection model according to claim 1, wherein the selecting K typical positive samples from the LED lamp strip training set based on the K-means++ cluster selection algorithm specifically comprises:
Randomly selecting a positive sample from the LED lamp strip training set as a current initial cluster center;
calculating a first actual distance between each positive sample in the LED lamp strip training set and its nearest current initial cluster center;
Updating the current initial cluster center by adopting the positive sample with the largest first actual distance and circularly executing the previous step until K initial cluster centers are selected;
Adopting each initial cluster center to correspondingly construct an empty cluster space, calculating a second actual distance between each positive type sample in the LED lamp strip training set and each initial cluster center, and dividing each positive type sample in the LED lamp strip training set into the cluster space closest to the initial cluster center according to the second actual distance;
Calculating a vector average value of positive class samples in each cluster space, and updating the initial cluster center corresponding to the cluster space by adopting the vector average value; and
And circularly executing the previous step until the initial cluster center of each cluster space no longer changes, so as to determine the positive sample corresponding to each initial cluster center as a typical positive sample.
4. The training method of the LED strip defect detection model according to claim 1, wherein the obtaining the pseudo-abnormal samples corresponding to each positive sample in the LED strip training set based on the pseudo-abnormal sample generation method specifically comprises:
Respectively carrying out binarization processing on each positive sample I in the LED lamp strip training set to generate a contour description map M_I, carrying out preset-threshold binarization processing on pre-generated random two-dimensional Perlin noise P to generate a random mask map M_p, and multiplying each contour description map M_I pixel by pixel with the random mask map M_p to generate a target mask map M;
Multiplying each target mask map M pixel by pixel with texture data I_n derived from the DTD dataset to generate an abnormal region image I_n*, where the specific calculation formula is: I_n* = β(M ⊙ I_n) + (1 − β)(M ⊙ I); wherein β is a transparency factor used to balance the fusion of the positive sample I and the abnormal region image I_n*, and is determined by random sampling in the range [0.15, 1]; and
Combining the abnormal region image I_n* with the corresponding positive sample I to generate a pseudo-abnormal sample I_A, where the combination formula is: I_A = (1 − M) ⊙ I + I_n*.
5. The training method of the LED strip defect detection model of claim 1, wherein the multi-scale feature fusion network model comprises:
a first convolution layer for performing a primary convolution on the concatenated information to maintain a number of channels of the concatenated information;
A coordinate attention weighting layer, configured to perform coordinate attention weighting on the serial information in different dimensions to capture channel information of the serial information;
an up-sampling layer for up-sampling the serial information weighted by the coordinate attention to align the dimensions;
A second convolution layer, configured to deconvolute the serial information after the alignment dimension to align the number of channels of the serial information; and
And the pixel addition operation layer is used for carrying out pixel-by-pixel addition on the serial information of different dimensions after the number of the aligned channels.
6. The method of training a model for detecting defects in an LED strip of claim 1, further comprising:
Inputting a test set, which comprises both positive and negative samples and is of the same kind as the LED lamp strip training set, into the LED lamp strip defect detection model to obtain predicted images, and calculating the abnormal score of the predicted image corresponding to each sample in the test set, wherein the abnormal score calculation formula is: g(x_test) = (G(x_test) − G(x_test)_min) / (G(x_test)_max − G(x_test)_min);
Wherein the pixel points of each predicted image are ranked from high to low by pixel value, G(x_test) represents the sum of the pixel values of the top preset number of pixel points of the predicted image, and G(x_test)_max and G(x_test)_min represent the maximum and minimum values of G(x_test) over the test set;
Determining and outputting a score threshold value for distinguishing a positive sample from a negative sample by integrating the abnormal score of each test predicted image and the corresponding sample type;
And detecting an image to be detected of an LED lamp strip by adopting the LED lamp strip defect detection model to judge whether the LED lamp strip has defects, which comprises: detecting the image to be detected by adopting the LED lamp strip defect detection model to generate a predicted image; calculating the abnormal score of the predicted image corresponding to the image to be detected according to the abnormal score calculation formula; comparing the abnormal score corresponding to the image to be detected with the score threshold; and classifying an image to be detected whose score is below the score threshold as having no defects, and one whose score is above the score threshold as having defects.
7. The training method of the LED strip defect detection model of claim 1, wherein the feature extraction network is a ResNet network pre-trained on ImageNet, and during the training process, the parameters of the first three layers of the ResNet network are kept constant.
8. A training device for a LED strip defect detection model, characterized in that the device comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the training method for a LED strip defect detection model according to any one of claims 1 to 7 when executing the computer program.
9. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program when run controls a device in which the computer readable storage medium is located to perform the training method of the LED strip defect detection model according to any one of claims 1 to 7.
10. The method for detecting the defects of the LED lamp strip is characterized by comprising the following steps of:
acquiring an image to be tested of an LED lamp strip to be tested; and
The LED lamp strip defect detection model obtained through training by the training method of the LED lamp strip defect detection model according to any one of claims 1-7 is used for detecting the image to be detected to judge whether the LED lamp strip has defects.
CN202410767574.5A 2024-06-14 2024-06-14 Training method and device for LED lamp strip defect detection model, computer readable storage medium and LED lamp strip defect detection method Active CN118351422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410767574.5A CN118351422B (en) 2024-06-14 2024-06-14 Training method and device for LED lamp strip defect detection model, computer readable storage medium and LED lamp strip defect detection method

Publications (2)

Publication Number Publication Date
CN118351422A CN118351422A (en) 2024-07-16
CN118351422B 2024-08-16

Family

ID=91815876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410767574.5A Active CN118351422B (en) 2024-06-14 2024-06-14 Training method and device for LED lamp strip defect detection model, computer readable storage medium and LED lamp strip defect detection method

Country Status (1)

Country Link
CN (1) CN118351422B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113645098A (en) * 2021-08-11 2021-11-12 安徽大学 An Unsupervised Incremental Learning-Based Dynamic IoT Anomaly Detection Method
CN114677346A (en) * 2022-03-21 2022-06-28 西安电子科技大学广州研究院 End-to-end semi-supervised image surface defect detection method based on memory information

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090210935A1 (en) * 2008-02-20 2009-08-20 Jamie Alan Miley Scanning Apparatus and System for Tracking Computer Hardware
CN108540752B (en) * 2017-03-01 2021-04-30 中国电信股份有限公司 Method, device and system for identifying target object in video monitoring
KR101946317B1 (en) * 2018-09-13 2019-02-11 주식회사 에버비젼 User Identification and Tracking Method Using CCTV System
CN111601254A (en) * 2020-04-16 2020-08-28 深圳市优必选科技股份有限公司 Target tracking method, device, storage medium and smart device
CN112634209A (en) * 2020-12-09 2021-04-09 歌尔股份有限公司 Product defect detection method and device
CN114743119B (en) * 2022-04-28 2024-04-09 石家庄铁道大学 High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle
CN117115663A (en) * 2023-09-28 2023-11-24 中国矿业大学 Remote sensing image change detection system and method based on deep supervision network

Similar Documents

Publication Publication Date Title
CN110837836A (en) Semi-supervised semantic segmentation method based on maximized confidence
WO2014174932A1 (en) Image processing device, program, and image processing method
CN108961180B (en) Infrared image enhancement method and system
CN115311550B (en) Remote sensing image semantic change detection method and device, electronic equipment and storage medium
KR102476022B1 (en) Face detection method and apparatus thereof
CN110852349A (en) Image processing method, detection method, related equipment and storage medium
CN111797829A (en) License plate detection method and device, electronic equipment and storage medium
KR20180109658A (en) Apparatus and method for image processing
CN113807354B (en) Image semantic segmentation method, device, equipment and storage medium
KR20120066462A (en) Method and system for providing face recognition, feature vector extraction apparatus for face recognition
CN116030237A (en) Industrial defect detection method and device, electronic equipment and storage medium
CN118470714B (en) Camouflage object semantic segmentation method, system, medium and electronic equipment based on decision-level feature fusion modeling
CN113269752A Image detection method, device, terminal equipment and storage medium
CN120375207B (en) Remote sensing image change detection method, device, equipment and medium
CN109671055B (en) Pulmonary nodule detection method and device
CN116630918B (en) A lane detection method based on rectangular attention mechanism
EP3686841A1 (en) Image segmentation method and device
WO2022098307A1 (en) Context-aware pruning for semantic segmentation
CN110135428B (en) Image segmentation processing method and device
CN113920099B (en) Polyp segmentation method based on non-local information extraction and related components
CN120047894A (en) Chinese herbal medicine and foreign matter detection method, system, storage medium and product thereof
KR102469120B1 (en) Apparatus, method, computer-readable storage medium and computer program for identifying target from satellite image
CN118351422B (en) Training method and device for LED lamp strip defect detection model, computer readable storage medium and LED lamp strip defect detection method
CN118941776A (en) A three-channel target detection method and device based on active random model
US12254657B2 (en) Image processing apparatus, image processing method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant