
CN115294115A - PCB defect identification method based on neural network - Google Patents


Info

Publication number
CN115294115A
Authority
CN
China
Prior art keywords
obtaining
network
loss function
sequence
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211219124.XA
Other languages
Chinese (zh)
Inventor
雷海虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Rudong Yihang Electronics R & D Co ltd
Original Assignee
Nantong Rudong Yihang Electronics R & D Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Rudong Yihang Electronics R & D Co ltd filed Critical Nantong Rudong Yihang Electronics R & D Co ltd
Priority to CN202211219124.XA priority Critical patent/CN115294115A/en
Publication of CN115294115A publication Critical patent/CN115294115A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30141 Printed circuit board [PCB]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing, and in particular to a PCB defect identification method based on a neural network. The method comprises the following steps: acquiring a surface image and a corresponding target image of each PCB; inputting the surface image and the target image into the convolutional layers to obtain a plurality of description vectors, a segmentation image and a plurality of target description vectors; constructing a loss function for each convolutional layer and combining them into a comprehensive loss function; acquiring the reference attention of each network parameter; collecting the values of each network parameter over training to form a first sequence and the loss function value of each training iteration to form a second sequence, and obtaining an adjustment coefficient from the two sequences; obtaining the attention weight of each network parameter from the adjustment coefficient and the reference attention, deriving an update gradient from the attention weight, training the neural network with the update gradient, and using the trained neural network to detect the actual defect area. The trained network is more reliable, so the defect identification accuracy is higher.

Description

PCB defect identification method based on neural network
Technical Field
The invention relates to the technical field of data processing, in particular to a PCB defect identification method based on a neural network.
Background
With the development of the social economy, the demand for automatic control has grown, and the PCB, as one of the key components for realizing automatic control, directly affects the operation of an automated system; the quality of the PCB therefore needs to be strictly controlled during production. Because the defect information on a PCB is far less abundant than the non-defect information, the defect information is easily buried in the non-defect information, and the network cannot learn it well. Meanwhile, some defect information contributes greatly to defect identification while some contributes little, so the attention paid to each piece of defect information needs to be controlled according to its contribution to defect identification.
In the network, each network parameter has a different learning capability for each feature: some network parameters learn defect features more strongly, while others learn certain non-defect features more strongly. When all network parameters are learned with the same attention, the accuracy of the detected defects is therefore low.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a PCB defect recognition method based on a neural network, the method comprising the steps of:
acquiring a surface image of each PCB, obtaining a defect area in each surface image through manual marking, and setting pixel values of all pixel points in the defect area to zero to obtain a target image; inputting the surface image into a convolution layer to obtain a plurality of description vectors and a segmentation image, and inputting the target image into the convolution layer to obtain a plurality of target description vectors;
constructing a loss function of each convolutional layer based on the corresponding description vector and the target description vector of each convolutional layer; obtaining a comprehensive loss function according to the loss functions of all the convolution layers;
in any training iteration, obtaining the feature description value of any convolutional layer when the surface image is input and the target feature description value of the same convolutional layer when the target image is input, and taking the difference between the feature description value and the target feature description value as the feature description value variation; obtaining the reference attention of the corresponding network parameters according to the feature description value variations of all the convolutional layers;
obtaining the values of each network parameter in training to form a first sequence; the corresponding loss function value of each training iteration forms a second sequence, and an adjustment coefficient is obtained from the first sequence and the second sequence; obtaining the attention weight of the network parameter based on the adjustment coefficient and the reference attention;
and obtaining an updating gradient of the network parameters according to the attention weight, training the neural network based on the updating gradient, and inputting the surface image of the PCB into the trained neural network to obtain an actual defect area.
Preferably, the step of constructing a loss function for each convolutional layer based on the description vector corresponding to each convolutional layer and the target description vector includes:
calculating the difference value of the data corresponding to each description vector and the target description vector, obtaining the occurrence probability of each difference value, obtaining the logarithm of the probability, and obtaining the sum of the products of the probability corresponding to all the data and the logarithm of the probability as the loss function.
Preferably, the step of obtaining a comprehensive loss function according to the loss functions of all convolutional layers includes:
and acquiring a cross entropy loss function between the segmentation image and the label image and the sum of the corresponding loss functions of all the convolution layers, wherein the sum of all the loss functions and the sum of the cross entropy loss function are the comprehensive loss function.
Preferably, the step of obtaining the reference attention of the corresponding network parameter according to the variation of the feature description values of all the convolutional layers includes:
and acquiring the mean value of the variation of the corresponding feature description of each convolutional layer, calculating the sum of the mean values of the variation corresponding to the network parameters, and obtaining the reference attention of the corresponding network parameters according to the ratio of the mean value of the variation corresponding to each network parameter to the sum of all the mean values of the variation.
Preferably, the step of obtaining the adjustment coefficient according to the first sequence and the second sequence includes:
and acquiring a partial correlation coefficient between the first sequence and the second sequence, and adding 1 to an absolute value of the partial correlation coefficient to obtain the adjustment coefficient.
Preferably, the step of obtaining the attention weight of the network parameter based on the adjustment coefficient and the reference attention includes:
the product of the adjustment coefficient and the reference attention is the attention weight.
Preferably, the step of obtaining an update gradient of the network parameter according to the attention weight includes:
and acquiring the gradient of the network parameter, wherein the product of the gradient and the attention weight is the updating gradient.
The invention has the following beneficial effects: network parameters with a strong ability to describe defect information are given higher attention so that they can play a greater role; network parameters that contribute more to the loss function are also given higher attention, and the update gradient is adjusted according to the attention of each network parameter, so that parameters with low attention learn more slowly while the update gradient of parameters with high attention weight is preserved and the network learns quickly; by designing a loss function that supervises how the feature maps change when a defect-free (target) image is input, feature isolation is achieved, the trained network is more reliable, and the defect identification accuracy is therefore higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a method for identifying a PCB defect based on a neural network according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a network structure according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve the intended objects and their effects, a detailed description of the structure, features and effects of the neural network-based PCB defect identification method according to the present invention is given below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the neural network-based PCB defect identification method in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a PCB defect identification method based on a neural network according to an embodiment of the present invention is shown, where the method includes the following steps:
step S100, acquiring a surface image of each PCB, obtaining a defect area in each surface image through manual marking, and setting pixel values of all pixel points in the defect area to zero to obtain a target image; the surface image is input into the convolutional layer to obtain a plurality of description vectors and a segmentation image, and the target image is input into the convolutional layer to obtain a plurality of target description vectors.
Specifically, the produced PCBs are conveyed to the quality inspection platform by a conveyor belt, and a camera mounted on the platform collects a surface image of each PCB; the defect area in each surface image is judged and marked manually, and the pixel values of the marked defect area are set to 0, thereby obtaining the target image.
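As a concrete illustration of this step, the following minimal Python sketch zeros out the manually marked defect region to produce the target image; the file names, the use of OpenCV/NumPy and the binary-mask representation of the manual marking are assumptions for illustration, not part of the patent.

```python
import cv2
import numpy as np

def make_target_image(surface_img: np.ndarray, defect_mask: np.ndarray) -> np.ndarray:
    """Zero out the manually marked defect region to obtain the target image.

    surface_img : HxWx3 (or HxW) PCB surface image.
    defect_mask : HxW binary mask, nonzero inside the marked defect area.
    """
    target = surface_img.copy()
    target[defect_mask.astype(bool)] = 0  # pixel values in the defect area are set to 0
    return target

# Illustrative usage: one surface image and its hand-drawn mask (hypothetical file names)
surface = cv2.imread("pcb_0001.png")
mask = cv2.imread("pcb_0001_mask.png", cv2.IMREAD_GRAYSCALE) > 0
target = make_target_image(surface, mask)
```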
The network structure used in the embodiment of the present invention is shown in fig. 2. A labeled image X is input into the network, where the label assigns each pixel a defect-region semantic label or a non-defect-region semantic label, for example 1 for defect-region pixels and 0 for non-defect-region pixels; the size of the image X is assumed to be 1024 × 1024 in this embodiment. The image X is first processed by convolutional layer 1 of the network to obtain a feature map H1. Every convolutional layer includes a grouped convolution operation, a pooling operation, an activation function and so on, following a VGG16-style structure; VGG16 is a common neural network and is not described again in this embodiment. The feature map H1 is obtained by grouped convolution and has a size of 512 × 20 in this embodiment, i.e. the feature map H1 is divided into 20 channel groups; global max pooling is applied to each of the 20 channel feature maps to obtain a 20-dimensional description vector.
It should be noted that the feature map H1 is then input into the subsequent convolutional layers to obtain the corresponding feature maps; the convolution process of the subsequent layers is the same as that of convolutional layer 1 and is not repeated. After passing through all convolutional layers, the network produces the output result Y, a semantic segmentation image of size 1024 × 1024 in which each pixel value represents the probability that the corresponding pixel is a defect pixel.
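The following PyTorch sketch shows one convolutional stage of the kind described above: a (grouped) convolution, pooling and activation, with a 20-dimensional description vector taken by global max pooling over the 20 channel feature maps. The exact backbone, channel counts and group sizes of the patented network are not fully specified, so the values below are illustrative assumptions rather than the patent's implementation.

```python
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """One convolutional stage: convolution (grouped when possible), activation and
    pooling; a 20-dim description vector is taken by global max pooling over
    each of the 20 channel feature maps."""
    def __init__(self, in_ch: int, out_ch: int = 20, groups: int = 1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        fmap = self.block(x)                 # feature map H_j with 20 channels
        desc = torch.amax(fmap, dim=(2, 3))  # (B, 20) description vector T_j
        return fmap, desc

# Illustrative: stage 1 lifts the RGB image to 20 channels; later stages use grouped conv.
stage1 = ConvStage(in_ch=3, out_ch=20, groups=1)
stage2 = ConvStage(in_ch=20, out_ch=20, groups=20)
x = torch.randn(1, 3, 1024, 1024)
h1, t1 = stage1(x)   # h1: (1, 20, 512, 512), t1: (1, 20)
h2, t2 = stage2(h1)  # h2: (1, 20, 256, 256), t2: (1, 20)
```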
Correspondingly, the surface image of the PCB is input into the network to obtain the description vector $T_1$ of convolutional layer 1, the description vector $T_2$ of convolutional layer 2, the description vector $T_3$ of convolutional layer 3, and finally the segmentation image $Y$; the target image is input into the network to obtain the target description vector $\hat{T}_1$ of convolutional layer 1, the target description vector $\hat{T}_2$ of convolutional layer 2, and the target description vector $\hat{T}_3$ of convolutional layer 3. By analogy, inputting the surface image into the network yields one description vector per convolutional layer together with the segmentation image, and inputting the target image yields the same number of target description vectors, i.e. one per convolutional layer in the network.
Step S200, constructing a loss function of each convolution layer based on the corresponding description vector and the target description vector of each convolution layer; and obtaining a comprehensive loss function according to the loss functions of all the convolution layers.
For features that mainly describe defect information, the feature value changes greatly between a defective image and a non-defective image; for features that describe defects only weakly, the feature value changes little between the two. To separate defect feature information from non-defect feature information, i.e. to distinguish features that mainly describe defects from features that mainly describe non-defect content, the feature values of the defect-describing features should change greatly when the defect information is removed while the feature values of the non-defect features change only slightly, so that the entropy of the feature-value changes is as small as possible. Therefore, the following loss function is designed using the description vector data of convolutional layer 1:
$$L_1 = -\sum_{i=1}^{20} p\left(\Delta_1^i\right)\,\ln p\left(\Delta_1^i\right), \qquad \Delta_1^i = T_1^i - \hat{T}_1^i$$

where $L_1$ is the loss function corresponding to convolutional layer 1; $\hat{T}_1^i$ is the i-th dimension of the target description vector $\hat{T}_1$, i.e. the description value of the i-th channel of the feature map obtained when the target image (with the defect information removed) is input into convolutional layer 1; $T_1^i$ is the i-th dimension of the description vector $T_1$, i.e. the description value of the i-th channel of the feature map H1 obtained when the surface image is input into convolutional layer 1; $\Delta_1^i$ is the change of the description value of the i-th channel of convolutional layer 1 when the defect information is absent; and $p(\Delta_1^i)$ is the probability that the feature description value change $\Delta_1^i$ occurs.
Accordingly, the loss function $L_2$ of convolutional layer 2, the loss function $L_3$ of convolutional layer 3, ..., and the loss function $L_N$ of convolutional layer $N$ are obtained in the same way.
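A rough sketch of such a per-layer loss is given below. The patent does not specify how the occurrence probability p(Δ) of each change value is estimated; the histogram-based frequency estimate (and the bin count) used here is an assumption, and since histogram counting is not differentiable it would be replaced by a differentiable density estimate in an actual training pipeline.

```python
import torch

def layer_change_entropy(desc: torch.Tensor, target_desc: torch.Tensor,
                         n_bins: int = 16) -> torch.Tensor:
    """Entropy-style loss L_j for one convolutional layer.

    desc, target_desc : (B, C) description vectors T_j and T̂_j for a batch.
    The changes Δ = T_j - T̂_j are binned into a histogram; the loss is the
    entropy -Σ p·ln p of the binned changes (the way p(Δ) is estimated is an
    assumption of this sketch, not taken from the patent text).
    """
    delta = (desc - target_desc).flatten()
    hist = torch.histc(delta, bins=n_bins,
                       min=float(delta.min()), max=float(delta.max()))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins so the logarithm is defined
    return -(p * torch.log(p)).sum()
```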
Further, a cross-entropy loss function $L_{ce}$ is constructed between the output segmentation image $Y$ and the semantic label image, and the loss functions corresponding to all convolutional layers are combined to obtain the comprehensive loss function:

$$L = L_{ce} + \sum_{j=1}^{N} L_j$$

where $L$ is the comprehensive loss function, $L_{ce}$ is the cross-entropy loss function, $L_j$ is the loss function of the $j$-th convolutional layer, and $N$ is the number of convolutional layers.
Step S300, in any training, acquiring a feature description value of a surface image input any convolution layer, acquiring a target feature description value of a target image input same convolution layer, and acquiring a difference value between the feature description value and the target feature description value as a feature description value variable quantity; and obtaining the reference attention of the corresponding network parameters according to the feature description value variation of all the convolution layers.
In the actual network training, the comprehensive loss function obtained in step S200 should be made as small as possible. The network parameters are updated and trained by gradient descent; one training iteration corresponds to one update of the network parameters, and 10000 training iterations are performed in total. As can be seen from the comprehensive loss function, one parameter update is performed each time one surface image and the corresponding target image with the defect information removed are input, which completes stage 1 of the network training.
In the $k$-th training iteration, when the surface image is input, the feature description value of the $i$-th channel of the feature map of the $j$-th convolutional layer is denoted $T_{j,i}^{(k)}$; correspondingly, when the target image is input, the target feature description value of the $i$-th channel of the feature map of the $j$-th convolutional layer is denoted $\hat{T}_{j,i}^{(k)}$. The difference between the feature description value and the target feature description value is the feature description value variation:

$$\Delta_{j,i}^{(k)} = T_{j,i}^{(k)} - \hat{T}_{j,i}^{(k)}$$

where $\Delta_{j,i}^{(k)}$ is the variation of the description value of the $i$-th channel of the $j$-th convolutional layer's feature map.
Further, the feature description value variations of the $i$-th channel of the $j$-th convolutional layer over all training iterations, i.e. 10000 variations, are acquired, and their mean $\overline{\Delta}_{j,i}$ is calculated. By analogy, the variation mean corresponding to each channel of each convolutional layer is obtained. Each channel feature map of each convolutional layer corresponds to a group of network parameters, and the reference attention of each network parameter is:

$$\alpha_s = \frac{\overline{\Delta}_s}{\sum_{r=1}^{S} \overline{\Delta}_r}$$

where $\alpha_s$ is the reference attention of the $s$-th network parameter, $\overline{\Delta}_s$ is the variation mean of the feature map obtained by the $s$-th network parameter, and $S$ is the total number of network parameters contained in the network.
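The normalization into reference attentions can be sketched as follows; how the channel feature maps are grouped into "network parameters" (here, one scalar variation mean per parameter group) is an assumption of this illustration.

```python
import numpy as np

def reference_attention(variation_means: np.ndarray) -> np.ndarray:
    """variation_means: one mean variation per network-parameter group, collected
    over all training iterations. The reference attention of the s-th group is
    its variation mean divided by the sum over all groups."""
    return variation_means / variation_means.sum()

# Illustrative: e.g. 3 layers x 20 channels = 60 parameter groups (assumed numbers)
means = np.abs(np.random.randn(60))
alpha = reference_attention(means)   # alpha_s, sums to 1
```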
Step S400, obtaining the values of each network parameter in training to form a first sequence; the corresponding cross-entropy loss function value of each training iteration forms a second sequence, and an adjustment coefficient is obtained from the first sequence and the second sequence; the attention weight of the network parameter is then obtained based on the adjustment coefficient and the reference attention.
Specifically, the value of each network parameter in each training iteration is acquired; the value of the $s$-th network parameter in the $k$-th training iteration is $w_s^{(k)}$, and the values of this network parameter over all training iterations form the first sequence $W_s = \{w_s^{(1)}, w_s^{(2)}, \dots, w_s^{(K)}\}$. Likewise, the value of the cross-entropy loss function $L_{ce}$ of the network in each training iteration is acquired; its value in the $k$-th training iteration is $L_{ce}^{(k)}$, and the $L_{ce}$ values over all training iterations form the second sequence $V = \{L_{ce}^{(1)}, L_{ce}^{(2)}, \dots, L_{ce}^{(K)}\}$.
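One way to record the two sequences during training is sketched below; logging the mean of each parameter tensor as "the value of the network parameter" is an assumption of this sketch, since the patent does not state which scalar is tracked per parameter group.

```python
import torch

def record_sequences(model: torch.nn.Module, loss_ce: float,
                     param_history: dict, loss_history: list) -> None:
    """Append, after one training iteration, the current cross-entropy loss value
    (second sequence V) and one scalar per parameter tensor (first sequences W_s)."""
    loss_history.append(float(loss_ce))
    with torch.no_grad():
        for name, p in model.named_parameters():
            param_history.setdefault(name, []).append(float(p.mean()))

# Illustrative usage inside a training loop (model and loss assumed defined):
# record_sequences(model, loss_ce_value, param_history, loss_history)
```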
When the correlation between a network parameter's values and the loss function values is larger, that network parameter has a larger influence on the network loss. Since the loss value is the combined effect of many network parameters, the partial correlation between each network parameter and the loss function needs to be analyzed. The partial correlation coefficient is therefore calculated from the first sequence of each network parameter and the second sequence of loss function values; the partial correlation coefficient between the first sequence of the $s$-th network parameter and the second sequence of $L_{ce}$ values is denoted $\rho_s$, and the adjustment coefficient obtained from it is:

$$\beta_s = 1 + \left|\rho_s\right|$$

where $\beta_s$ is the adjustment coefficient of the $s$-th network parameter and $\rho_s$ is the partial correlation coefficient between the first sequence of the $s$-th network parameter and the second sequence of $L_{ce}$ values.
Further, the attention weight of a network parameter is obtained from its adjustment coefficient and reference attention:

$$\gamma_s = \beta_s \cdot \alpha_s$$

where $\alpha_s$ is the reference attention of the $s$-th network parameter: the larger its value, the stronger the ability of that network parameter to describe defect information, and the more attention it must be given so that the defect information is not buried in the non-defect information; $\beta_s$ is the adjustment coefficient of the $s$-th network parameter: the larger its value, the larger the influence of that network parameter on the network loss value, and more attention is paid to the network parameters that mainly influence the network loss; and $\gamma_s$ is the attention weight of the $s$-th network parameter.

In the same way, the attention weight of every network parameter is obtained.
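A sketch of the adjustment coefficient and attention weight follows. The patent calls for a partial correlation coefficient; because its control variables are not enumerated in the text, the sketch falls back to an ordinary Pearson correlation as a stand-in, which is an approximation rather than the patented computation.

```python
import numpy as np

def adjustment_coefficient(param_values: np.ndarray, loss_values: np.ndarray) -> float:
    """Adjustment coefficient beta_s = 1 + |rho_s| for one network parameter.

    param_values : first sequence (parameter value at each training iteration)
    loss_values  : second sequence (cross-entropy loss at each iteration)
    rho_s is approximated here by the plain Pearson correlation coefficient.
    """
    rho = np.corrcoef(param_values, loss_values)[0, 1]
    return 1.0 + abs(rho)

def attention_weight(alpha_s: float, beta_s: float) -> float:
    """Attention weight gamma_s = beta_s * alpha_s."""
    return beta_s * alpha_s
```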
And S500, obtaining an updating gradient of the network parameters according to the attention weight, training the neural network based on the updating gradient, and inputting the surface image of the PCB into the trained neural network to obtain an actual defect area.
The attention weight of each network parameter is obtained in step S400, and the update gradient of the network parameters is adjusted according to the attention weight, where the update gradient is:
$$\tilde{g}_s^{(k)} = \gamma_s \cdot g_s^{(k)}$$

where $\tilde{g}_s^{(k)}$ is the update gradient of the $s$-th network parameter in the $k$-th training iteration, $g_s^{(k)}$ is the gradient of the $s$-th network parameter in the $k$-th training iteration, and $\gamma_s$ is the attention weight of the $s$-th network parameter.

The attention weight $\gamma_s$ adjusts the update gradient of each network parameter in every training iteration, ensuring the update speed of network parameters with high attention while restraining the update speed of network parameters with low attention.
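In PyTorch, this per-parameter gradient scaling can be sketched as below; the grouping of tensors and the placement of the call between loss.backward() and optimizer.step() are illustrative assumptions, not the patent's stated procedure.

```python
import torch

def apply_attention_to_gradients(param_groups, attention_weights):
    """Scale each parameter group's gradient by its attention weight gamma_s
    before the optimizer step (g_tilde = gamma_s * g)."""
    for params, gamma in zip(param_groups, attention_weights):
        for p in params:
            if p.grad is not None:
                p.grad.mul_(gamma)

# Illustrative training step (model, loss_fn, optimizer, groups, gammas assumed defined):
# loss = loss_fn(model(surface), label)
# optimizer.zero_grad(); loss.backward()
# apply_attention_to_gradients(groups, gammas)
# optimizer.step()
```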
Further, the loss function of the neural network is set to the comprehensive loss function $L$; each surface image is input into the neural network in turn, and the network parameters are updated by gradient descent until the loss function converges.
It should be noted that, with this loss function, the neural network performs one training update each time a sample image is input, where the sample image is a surface image.
The surface image of the PCB to be identified is input into the trained neural network, which outputs a semantic segmentation image; the actual defect area in the surface image is obtained from the semantic segmentation image.
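Extracting the actual defect area from the semantic segmentation output can be sketched as follows; the 0.5 probability threshold and the use of connected components are assumptions for illustration, not specified in the patent.

```python
import numpy as np
import cv2

def extract_defect_regions(prob_map: np.ndarray, thresh: float = 0.5):
    """prob_map: HxW map of per-pixel defect probabilities output by the network.
    Returns a binary defect mask and the bounding boxes of connected defect regions."""
    mask = (prob_map > thresh).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = [tuple(stats[i, :4]) for i in range(1, num)]  # (x, y, w, h) per region
    return mask, boxes
```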
In summary, in the embodiments of the present invention, the surface image of each PCB is acquired, and the corresponding target image is obtained through manual marking; the surface image and the target image are input into the convolutional layers to obtain a plurality of description vectors, a segmentation image and a plurality of target description vectors; a loss function is constructed for each convolutional layer based on its description vector and target description vector, and a comprehensive loss function is obtained from the loss functions of all convolutional layers; in any training iteration, the feature description value of a convolutional layer when the surface image is input and the target feature description value of the same convolutional layer when the target image is input are acquired, and their difference is taken as the feature description value variation; the reference attention of the corresponding network parameters is obtained from the feature description value variations of all convolutional layers; the values of each network parameter in training form a first sequence and the corresponding cross-entropy loss value of each training iteration forms a second sequence, from which an adjustment coefficient is obtained; the attention weight of each network parameter is obtained from the adjustment coefficient and the reference attention; finally, the update gradient of the network parameters is obtained from the attention weight, the neural network is trained with this update gradient, and the surface image of the PCB is input into the trained neural network to obtain the actual defect area. The network learns faster and its identification result is more reliable, so the defect identification on the PCB surface is more accurate.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit of the present invention.

Claims (6)

1. A PCB defect identification method based on a neural network is characterized by comprising the following steps:
acquiring a surface image of each PCB, obtaining a defect area in each surface image through manual marking, and setting pixel values of all pixel points in the defect area to zero to obtain a target image; inputting the surface image into a convolutional layer to obtain a plurality of description vectors and a segmentation image, and inputting the target image into the convolutional layer to obtain a plurality of target description vectors;
constructing a loss function of each convolutional layer based on the corresponding description vector and the target description vector of each convolutional layer; obtaining a comprehensive loss function according to the loss functions of all convolution layers;
in any training, acquiring a feature description value of any convolutional layer input by the surface image, acquiring a target feature description value of the same convolutional layer input by the target image, and acquiring a difference value between the feature description value and the target feature description value as a feature description value variation; obtaining the reference attention of the corresponding network parameters according to the feature description value variable quantities of all the convolution layers;
obtaining values of all network parameters in training to form a first sequence; training a corresponding loss function to form a second sequence every time, and obtaining an adjustment coefficient according to the first sequence and the second sequence; obtaining attention weight of the network parameter based on the adjustment coefficient and the reference attention;
obtaining an updating gradient of the network parameters according to the attention weight, training a neural network based on the updating gradient, and inputting the surface image of the PCB into the trained neural network to obtain an actual defect area;
wherein the step of constructing a loss function for each convolutional layer based on the description vector corresponding to each convolutional layer and the target description vector comprises:
calculating the difference value of the data corresponding to each description vector and the target description vector, obtaining the occurrence probability of each difference value, obtaining the logarithm of the probability, and obtaining the sum of the products of the probability corresponding to all the data and the logarithm of the probability as the loss function.
2. The method of claim 1, wherein the step of obtaining a composite loss function according to the loss functions of all convolutional layers comprises:
and acquiring a cross entropy loss function between the segmentation image and the label image and the sum of corresponding loss functions of all convolution layers, wherein the sum of all the loss functions and the cross entropy loss function is the comprehensive loss function.
3. The PCB defect identification method based on the neural network as claimed in claim 1, wherein the step of obtaining the reference attention of the corresponding network parameter according to the variation of the feature description value of all the convolutional layers comprises:
obtaining the mean value of the variation of the corresponding feature description of each convolution layer, calculating the summation of the mean values of the variation corresponding to the network parameters, and obtaining the reference attention of the corresponding network parameters according to the ratio of the mean value of the variation corresponding to each network parameter to the summation of all the mean values of the variation.
4. The PCB defect identification method based on the neural network as claimed in claim 1, wherein the step of obtaining the adjustment coefficient according to the first sequence and the second sequence comprises:
and acquiring a partial correlation coefficient between the first sequence and the second sequence, and adding 1 to an absolute value of the partial correlation coefficient to obtain the adjustment coefficient.
5. The PCB defect identification method based on the neural network as claimed in claim 1, wherein the step of obtaining the attention weight of the network parameter based on the adjustment coefficient and the reference attention comprises:
the product of the adjustment coefficient and the reference attention is the attention weight.
6. The PCB defect identification method based on the neural network as claimed in claim 1, wherein the step of obtaining the updated gradient of the network parameters according to the attention weight comprises:
and acquiring the gradient of the network parameter, wherein the product of the gradient and the attention degree weight is the updating gradient.
CN202211219124.XA 2022-10-08 2022-10-08 PCB defect identification method based on neural network Pending CN115294115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211219124.XA CN115294115A (en) 2022-10-08 2022-10-08 PCB defect identification method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211219124.XA CN115294115A (en) 2022-10-08 2022-10-08 PCB defect identification method based on neural network

Publications (1)

Publication Number Publication Date
CN115294115A true CN115294115A (en) 2022-11-04

Family

ID=83834103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211219124.XA Pending CN115294115A (en) 2022-10-08 2022-10-08 PCB defect identification method based on neural network

Country Status (1)

Country Link
CN (1) CN115294115A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893513A (en) * 2024-01-19 2024-04-16 桂林诗宇电子科技有限公司 PCB detection method and system based on visual neural network algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657688A (en) * 2017-10-04 2019-04-19 斯特拉德视觉公司 Learning method and device for improving CNN performance using a feature up-sampling network, and testing method and device using the feature up-sampling network
CN114882039A (en) * 2022-07-12 2022-08-09 南通透灵信息科技有限公司 PCB defect identification method applied to automatic PCB sorting process

Similar Documents

Publication Publication Date Title
CN112561910B (en) Industrial surface defect detection method based on multi-scale feature fusion
CN113642574B (en) Small sample target detection method based on feature weighting and network fine tuning
CN107016413B (en) A kind of online stage division of tobacco leaf based on deep learning algorithm
CN116258707A (en) PCB surface defect detection method based on improved YOLOv5 algorithm
CN112200045A (en) A method for establishing a remote sensing image target detection model based on context enhancement and its application
CN113239930A (en) Method, system and device for identifying defects of cellophane and storage medium
CN113379686B (en) PCB defect detection method and device
CN115147418A (en) Compression training method and device for defect detection model
CN115564983B (en) Target detection method, device, electronic device, storage medium and application thereof
CN116205876B (en) Unsupervised notebook appearance defect detection method based on multi-scale standardized flow
CN117152746B (en) Method for acquiring cervical cell classification parameters based on YOLOV5 network
CN111539456B (en) Target identification method and device
CN118864453B (en) Steel surface defect detection method and system based on local-global context perception
CN112288700A (en) Rail defect detection method
CN111415353A (en) Detection structure and detection method for fastener burr defects based on ResNet58 network
CN112070727A (en) Metal surface defect detection method based on machine learning
CN119229106B (en) Industrial product appearance defect semantic segmentation method and system
CN107742132A (en) Potato Surface Defect Detection Method Based on Convolutional Neural Network
CN114429461A (en) A cross-scenario strip surface defect detection method based on domain adaptation
CN109978014A (en) A kind of flexible base board defect inspection method merging intensive connection structure
CN111161244A (en) Surface defect detection method of industrial products based on FCN+FC-WXGBoost
CN115631186B (en) Industrial element surface defect detection method based on double-branch neural network
CN117523394A (en) SAR vessel detection method based on aggregation characteristic enhancement network
CN116883393A (en) Metal surface defect detection method based on anchor frame-free target detection algorithm
CN115294115A (en) PCB defect identification method based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination