CN119478521A - A collaborative assisted label correction method for medical image analysis - Google Patents
A collaborative assisted label correction method for medical image analysis
- Publication number
- CN119478521A
- Authority
- CN
- China
- Prior art keywords
- noise
- samples
- sample
- label
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0895—Weakly supervised learning, e.g. semi-supervised or self-supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
Labels in medical image datasets often contain noise or incorrect annotations, which greatly degrade the performance of deep neural networks (DNNs) in medical image analysis. To address this problem, a novel framework, Co-Assistant Networks for Label Correction (CNLC), is presented herein for simultaneously detecting and correcting corrupted labels. The framework consists of two core modules, namely a noise detection module and a noise cleaning module. The noise detection module predicts sample labels with a convolutional neural network (CNN) and divides samples into three categories, clean, uncertain, and corrupted, using an anti-noise loss function. The noise cleaning module corrects the labels of uncertain and corrupted samples with a class-based graph convolutional network (GCN) model while preserving the local topological relationships between samples. To optimize the cooperation of the two modules, a bilevel optimization algorithm is designed so that the label detection and correction processes alternate, finally improving both the robustness of the model and the accuracy of label correction. Experimental results on three widely used medical image datasets show that the CNLC framework significantly outperforms current state-of-the-art methods on the label noise problem.
Description
Technical Field
The present invention relates to the field of application of deep neural networks, and more particularly to a label correction method for processing medical image datasets containing erroneous labels.
Background
In medical image analysis, the performance of deep neural networks depends largely on the quality of the dataset. However, the labeling process for medical image datasets is often complex and expensive, and erroneous labels are easily introduced. Erroneous labels can severely degrade the performance of a deep learning model and can also lead to wrong diagnostic results. It is therefore important to develop an effective method capable of automatically detecting and correcting erroneous labels.
Existing label correction methods can be broadly divided into two categories: robustness enhancement methods and label correction methods. Robustness enhancement methods reduce the model's sensitivity to erroneous labels through various techniques (such as data augmentation and loss regularization) and output a more robust model, while label correction methods try to detect and correct the erroneous labels in the dataset. However, existing methods generally suffer from three problems: 1) they cannot detect and correct all erroneous labels at the same time; 2) they usually ignore the relationships between samples, leading to poor correction results; and 3) they cannot improve the robustness of the model itself.
Disclosure of Invention
The present invention provides a collaborative auxiliary label correction framework that aims to solve the above-mentioned problems in the prior art. The framework includes two main modules, a noise detector and a noise cleaner. The noise detector calculates the classification probability of each sample with a convolutional neural network (CNN) and classifies all training samples into three categories, clean, uncertain, and erroneous, using a loss function that resists overfitting. The noise cleaner corrects the detected erroneous labels with a graph convolutional network (GCN) that exploits the local topology between samples.
In addition, the invention designs a bilevel optimization algorithm that alternately optimizes the noise detector and the noise cleaner in each iteration, thereby enhancing the label correction effect. Experimental results indicate that the proposed method exhibits superior performance on multiple medical image datasets. Compared with prior methods, the invention has the following beneficial effects:
1) Two-module design: the CNLC framework combines a Noise Detector, responsible for identifying erroneous labels in the dataset, with a Noise Cleaner, responsible for label correction based on a class-based graph convolutional network (GCN), to improve label correction accuracy. The two work cooperatively, making the label correction process more accurate and efficient;
2) Broad applicability: CNLC can make full use of both labeled and unlabeled samples, enhancing the applicability of the model on datasets with incomplete labels. CNLC is therefore suitable not only for fully labeled datasets but also for datasets with a large number of uncertain or unlabeled samples;
3) Suitability for high-noise datasets: experimental results show that CNLC performs particularly well on high-noise datasets. Compared with existing label correction methods (such as Co-teaching and Self-Ensemble Label Correction), CNLC handles data at different noise levels effectively and maintains high classification accuracy and robustness even when the noise rate is as high as 40%.
Drawings
FIG. 1 is a workflow diagram of the collaborative assisted label correction network framework of the present invention;
FIG. 2 is a block diagram of a module in the present invention;
Detailed Description
The following description of the embodiments of the invention is presented in conjunction with the accompanying drawings so that those skilled in the art can better understand the invention. It should be expressly noted that in the description below, detailed descriptions of known functions and designs are omitted where they might obscure the present invention.
Examples
FIG. 1 is a flowchart of the operation of the collaborative assisted label correction network framework of the present invention. As shown in FIG. 1, the workflow specifically includes the following steps:
S101, initializing a model:
A standard convolutional neural network (CNN) model is used to initialize the noise detection module, and the medical image dataset is loaded for training.
S102, forward propagation and loss calculation:
Each input sample x_i is propagated forward to output a predictive probability p_i.
Based on the predictive probability p_i and the real label y_i, the loss value of the sample is calculated. To prevent the model from overfitting to noisy labels, an anti-noise loss function L_r is used instead of the conventional cross-entropy loss function. Its first term is the cross-entropy loss, representing the model's classification loss for each sample, and its second term is a time-dependent smoothing term that prevents overfitting.
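The description above can be sketched numerically. Since the exact form of the smoothing term is not reproduced here, the version below (a uniform term averaging -log p_i over all C classes, weighted by λ(t)) is an assumption for illustration only; the function name and toy batch are likewise hypothetical.

```python
import numpy as np

def anti_noise_loss(probs, labels, lam_t):
    """Hedged sketch of an anti-noise loss L_r: cross entropy plus a
    time-dependent smoothing term over all C classes. The smoothing
    term's exact form is an assumption; the text only names its role."""
    b, C = probs.shape
    idx = np.arange(b)
    ce = -np.log(probs[idx, labels])            # per-sample cross entropy
    smooth = -np.log(probs).sum(axis=1) / C     # uniform smoothing term
    return float((ce + lam_t * smooth).mean())

# toy batch: 2 samples, 3 classes
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = anti_noise_loss(probs, labels, lam_t=0.1)
```

With λ(t) = 0 the loss reduces to the plain mean cross entropy, so the smoothing weight controls how strongly overconfident fits to suspect labels are penalized.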
S103, judging whether the loss L_r is small enough (less than ε_r).
S104, sample division:
All samples are sorted according to the calculated loss value, and all training samples are classified into three categories. Clean samples are samples with smaller loss values, whose labels are presumed correct. Uncertain samples are samples with intermediate loss values, for which the model cannot reliably judge whether the labels are correct. Corrupted samples are samples with large loss values, whose labels are presumed corrupted or noisy.
Division criterion: the top n_1 samples in the loss-value ranking are considered clean samples; a threshold (e.g., the first 5% or 10% of samples) is typically set to select them. The bottom n_1 samples in the ranking are considered corrupted samples (e.g., the 5% or 10% of samples with the largest loss values). Samples located between the clean samples and the corrupted samples (the middle 90% or 80%) are considered uncertain samples.
In practical applications, the specific threshold (e.g., 5% or 10%) used to divide the samples may be adjusted according to the characteristics of different datasets. The thresholds can also be adjusted dynamically, gradually optimizing the proportions of clean and corrupted samples as model training proceeds, to ensure more accurate label classification.
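The ranking-based division of S104 can be sketched as follows; `partition_by_loss` and the toy loss values are illustrative, and `frac` stands in for the 5%/10% threshold mentioned above.

```python
import numpy as np

def partition_by_loss(losses, frac=0.10):
    """Split sample indices into clean / uncertain / corrupted by ranking
    per-sample losses. `frac` mirrors the 5% or 10% example threshold;
    dynamic adjustment during training is also possible."""
    n = len(losses)
    k = max(1, int(n * frac))
    order = np.argsort(losses)      # ascending: smallest loss first
    clean = order[:k]               # smallest losses -> labels trusted
    corrupted = order[-k:]          # largest losses -> labels suspect
    uncertain = order[k:-k]         # middle band -> undecided
    return clean, uncertain, corrupted

losses = np.array([0.05, 2.3, 0.7, 1.1, 0.9, 3.0, 0.4, 0.6, 1.8, 0.2])
clean, uncertain, corrupted = partition_by_loss(losses, frac=0.10)
```

On this toy batch of ten samples, sample 0 (loss 0.05) is flagged clean and sample 5 (loss 3.0) corrupted, with the remaining eight left uncertain.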
S105, output of embedded features: the noise detection module also generates embedded features for each sample; these features are used in the noise cleaning module to further analyze and correct the labels.
The task of the noise detection module is to identify the label-noise samples present in the training set and to divide the samples into clean, uncertain, and corrupted samples. This is done by a convolutional neural network (CNN).
S106, GCN model construction:
A class-based graph convolutional network (GCN) model is built for each class c. The GCN model adjusts the predicted labels through the local topological relationships (sample similarity) between samples, improving label correction accuracy.
S107, noise rate estimation:
The loss distribution of samples in the training set is modeled with a Gaussian mixture model (GMM) to estimate the proportion of noisy labels (i.e., the noise rate) r:
r = (1/n) Σ_{i=1}^{n} v_i
where v_i indicates whether sample i is judged to be a noise sample, and n is the total number of training-set samples.
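A minimal sketch of the GMM-based estimate, assuming the standard two-component fit on per-sample losses (here via scikit-learn's `GaussianMixture`); the synthetic loss values and the 20% injected noise are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_noise_rate(losses, seed=0):
    """Sketch of the GMM-based noise-rate estimate: fit a 2-component
    Gaussian mixture to per-sample losses, flag samples assigned to the
    high-mean component as noisy (v_i = 1), and return r = mean(v_i)."""
    x = np.asarray(losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=seed).fit(x)
    noisy_comp = int(np.argmax(gmm.means_.ravel()))   # high-loss component
    v = (gmm.predict(x) == noisy_comp).astype(int)
    return float(v.mean())

# synthetic losses: 80 low-loss (clean) and 20 high-loss (noisy) samples
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 80),
                         rng.normal(2.0, 0.30, 20)])
r = estimate_noise_rate(losses)
```

With well-separated loss modes the estimate lands near the true 20% injection rate, which is what makes the loss distribution a usable proxy for label quality.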
S108, sample selection:
Based on the noise rate r, a portion of the corrupted samples and uncertain samples is selected for further correction. The clean samples serve as positive samples and participate in the semi-supervised training of the GCN.
S109, semi-supervised learning:
The noise cleaning module jointly optimizes the GCN model through a binary cross-entropy loss L_bce and a mean-squared-error loss L_mse:
L_ssl = L_bce + L_mse
where L_bce is the binary cross-entropy loss over the labeled samples:
L_bce = -(1/n_l) Σ_i [ z_i log q_i + (1 - z_i) log(1 - q_i) ]
where n_l is the number of labeled samples, z_i is the true label of sample i, and q_i is the predicted probability of that sample in the current round of training.
L_mse is the mean-squared-error loss over the unlabeled samples, measuring the difference between the current prediction and the previous round's prediction:
L_mse = (1/n_u) Σ_i (q_i - q̄_i)²
where n_u is the number of unlabeled samples, q_i is the prediction for sample i in the current round, and q̄_i is the prediction for that sample in the previous round.
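The two loss terms can be checked on a toy batch; the helper names and the per-set averaging convention (over n_l labeled and n_u unlabeled samples) are assumptions consistent with the definitions above.

```python
import numpy as np

def l_bce(z, q, eps=1e-12):
    """Binary cross entropy over labeled samples
    (z: true labels in {0, 1}; q: predicted probabilities)."""
    q = np.clip(q, eps, 1 - eps)
    return float(-np.mean(z * np.log(q) + (1 - z) * np.log(1 - q)))

def l_mse(q_now, q_prev):
    """Mean squared error between current and previous-round predictions
    for unlabeled samples (a temporal consistency term)."""
    return float(np.mean((np.asarray(q_now) - np.asarray(q_prev)) ** 2))

# L_ssl = L_bce + L_mse on a toy batch
z = np.array([1, 0, 1])
q = np.array([0.9, 0.2, 0.8])
q_prev = np.array([0.85, 0.25, 0.7])
l_ssl = l_bce(z, q) + l_mse(q, q_prev)
```

The BCE term anchors the GCN to the trusted labels, while the MSE term discourages the predictions on unlabeled samples from drifting abruptly between rounds.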
S110, judging whether L_ssl is small enough (less than ε_ssl);
S111, label correction:
The labels of the corrupted and uncertain samples are predicted with the GCN model and corrected according to the prediction probability q_ic. The specific label correction formula is:
ỹ_i = argmax_c q_ic
where q_ic is the probability that sample i is predicted to be of class c, and the class with the highest probability is selected as the corrected label.
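The correction rule is a per-sample argmax over the GCN's class probabilities; the toy probability matrix below is illustrative.

```python
import numpy as np

def correct_labels(q):
    """Label correction step: for each suspect sample, take the class with
    the highest predicted probability q_ic as the corrected label."""
    return np.argmax(q, axis=1)

q = np.array([[0.1, 0.7, 0.2],    # sample 0 -> class 1
              [0.6, 0.3, 0.1]])   # sample 1 -> class 0
corrected = correct_labels(q)
```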
The CNLC framework alternately optimizes the noise detection module and the noise cleaning module through a bilevel optimization algorithm, continuously improving model performance and label correction accuracy. The optimization process is as follows:
Upper-level optimization (noise detection module): the noise detection module is optimized by minimizing the anti-noise loss function L_r, improving its ability to separate clean samples from noise samples. The embedded features of the noise detection module and the preliminary label classification results are passed to the noise cleaning module.
Lower-level optimization (noise cleaning module): using the embedded features generated by the noise detection module, the noise cleaning module optimizes its GCN model through semi-supervised learning, minimizing the semi-supervised loss L_ssl and correcting the noise labels. The corrected labels are fed back to the noise detection module, which updates the label division of the samples.
Alternating iteration: the optimization processes of the two modules alternate continuously, forming a closed-loop optimization system. In each iteration, the noise detection module produces a more accurate label division, and the noise cleaning module corrects corrupted labels better. The process continues until the loss function of the noise cleaning module converges, finally yielding an optimized model and accurate labels.
The bilevel optimization objective of the CNLC framework minimizes the anti-noise loss L_r of the noise detection module at the upper level, subject to the parameters of each class-wise GCN in the noise cleaning module minimizing the semi-supervised loss L_ssl at the lower level. Here θ is a parameter of the noise detection module, ω_c is a parameter of the graph convolutional network (GCN) of class c in the noise cleaning module, f_t(x; θ) represents the output of the noise detection module at the t-th round, and A_c and E_c are respectively the adjacency matrix and the feature matrix of class c.
Optimization of the noise detection module: the noise detection module is optimized first to generate embedded features and an initial label division (clean, corrupted, uncertain) for all samples. Its optimization objective is to minimize the anti-noise loss function L_r, improving the model's ability to recognize noise samples.
Optimization of the noise cleaning module: the corrupted labels are corrected using the embedded features generated by the noise detection module. The noise cleaning module minimizes L_ssl through semi-supervised learning to improve the accuracy of label correction.
Alternating optimization: in each iteration round, the noise detection module is optimized first, then the noise cleaning module. The labels produced by the noise detection module guide the correction process of the noise cleaning module, and the labels corrected by the noise cleaning module are fed back to the noise detection module, forming a closed iterative optimization loop. This process is repeated until the correction results of the noise cleaning module converge, finally yielding an optimized model and accurate labels.
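The closed-loop alternation can be sketched structurally; the numeric updates below are placeholders standing in for actual CNN/GCN training and are purely illustrative, as is the convergence threshold eps_ssl.

```python
def alternate_optimization(steps, eps_ssl=1e-3):
    """Toy skeleton of the bilevel loop: a detector step and a cleaner step
    alternate until the cleaner's loss converges. The numeric updates are
    placeholders for CNN / GCN training, not a real implementation."""
    history = []
    for t in range(steps):
        # upper level: detector update (placeholder quality measure)
        detector_quality = 1.0 - 0.5 ** (t + 1)
        # lower level: cleaner update driven by the detector's output
        l_ssl = 1.0 - detector_quality    # better split -> lower loss
        history.append(l_ssl)
        if l_ssl < eps_ssl:               # convergence check on L_ssl
            break
    return history

history = alternate_optimization(steps=20)
```

The point of the skeleton is the control flow: each round the detector's output feeds the cleaner, the cleaner's loss is checked against the threshold, and the loop halts once it converges rather than running a fixed schedule.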
In order to better illustrate the technical effects of the invention, the invention was experimentally verified with a specific example on three widely used medical image datasets. BreakHis contains 7,909 breast cancer histopathology images, of which 2,480 are benign samples and 5,429 are malignant samples; 5,537 images were randomly selected for training and 2,372 for testing. ISIC contains 12,000 digital skin disease images, of which 6,000 are normal samples and 6,000 are melanoma samples; 9,600 images were randomly selected for training and the rest were used for testing. NIHCC contains 10,280 chest X-ray images, of which 5,110 are normal samples and 5,170 are lung disease samples; 8,574 images were randomly selected for training and the remaining samples were used for testing.
To evaluate model performance in the presence of noisy labels, the experiments artificially introduced different degrees of label noise (noise rate ε = {0.2, 0.4}) into these datasets. This forces the model to deal with inaccurate labels and thereby tests its robustness under label noise conditions.
To verify the validity of the CNLC framework, the experiments compared it with the following six popular label correction or robust training methods:
CE (Cross-Entropy): conventional cross-entropy loss, used as the baseline method. Co-teaching (CT): co-trains two models to cope with label noise. Nested Co-teaching (NCT): an improved Co-teaching method that uses nested structures to enhance robustness. Self-Paced Resistance Learning (SPRL): a learning method that resists overfitting. Co-Correcting (CC): a method in which two models mutually correct each other's labels. Self-Ensemble Label Correction (SELC): a self-ensemble-based label correction method.
The experiment adopts the following four evaluation indexes to measure the performances of different methods:
ACC (Accuracy): the proportion of samples the model predicts correctly. SEN (Sensitivity): the proportion of positive samples correctly identified. SPE (Specificity): the proportion of negative samples correctly identified. AUC (Area Under the ROC Curve): a comprehensive index reflecting the classification performance of the model.
Table 1 shows the classification results (mean ± standard deviation) of the three data sets in this example.
TABLE 1
As shown in Table 1, CNLC achieves the best results on all three datasets under both noise settings and is significantly better than the existing methods under all noise conditions, especially in high-noise environments. Compared with the baseline method CE, the CNLC framework improves accuracy under noisy-label conditions by 10-20%, which verifies the validity of the invention.
While the foregoing describes illustrative embodiments of the present invention to facilitate an understanding of the invention by those skilled in the art, it should be understood that the invention is not limited to the scope of these embodiments; various changes that fall within the spirit and scope of the invention as defined by the appended claims are to be construed as protected.
Claims (4)
1. A collaborative auxiliary framework for label correction in deep neural networks, in particular for medical image datasets, characterized in that the framework comprises:
S1, a noise detection module, which processes training samples using a convolutional neural network (CNN), predicts the class probability of each sample, and divides the training samples into clean samples, uncertain samples, and corrupted samples based on the calculated loss values;
S2, providing an anti-noise loss function for optimizing the noise detection module to prevent the model from overfitting to noisy labels, wherein the loss function combines a cross-entropy loss with a time-dependent smoothing term, where b is the number of samples in the batch, p_i[y_i] represents the predicted probability of the i-th sample for its label y_i, C is the number of classes, and λ(t) is a time-dependent hyper-parameter;
S3, wherein the noise detection module divides the training samples as follows: clean samples are samples with smaller loss values whose labels are considered correct; uncertain samples are samples with intermediate loss values for which the model cannot accurately judge whether the labels are correct; and corrupted samples are samples with larger loss values whose labels are presumed corrupted or noisy;
S4, a noise cleaning module, which uses a class-based graph convolutional network (GCN) model to correct the labels of corrupted and uncertain samples, specifically comprising the following steps:
S4.1, constructing a graph convolutional network (GCN) model for each class, ensuring that similar samples within the same class obtain similar labels by preserving the local topological structure between samples;
S4.2, wherein the noise cleaning module corrects the labels of the uncertain and corrupted samples with a semi-supervised learning method, based on the embedded features generated by the noise detection module;
S4.2.1, wherein the noise cleaning module estimates the noise ratio in the training samples by means of a Gaussian mixture model (GMM), specifically calculated as r = (1/n) Σ_{i=1}^{n} v_i, where v_i indicates whether sample i is judged to be a noise sample, and n is the total number of training-set samples;
S4.2.2, wherein the noise cleaning module is optimized with the following semi-supervised loss function:
L_ssl = L_bce + L_mse
where L_bce is the binary cross-entropy loss over the labeled samples, L_bce = -(1/n_l) Σ_i [ z_i log q_i + (1 - z_i) log(1 - q_i) ], n_l is the number of labeled samples, z_i is the true label of sample i, and q_i is the predicted probability of that sample in the current round of training; and L_mse is the mean-squared-error loss over the unlabeled samples, L_mse = (1/n_u) Σ_i (q_i - q̄_i)², measuring the difference between the current prediction q_i and the previous-round prediction q̄_i, where n_u is the number of unlabeled samples.
2. The framework of claim 1, wherein the noise cleaning module corrects the labels of the uncertain samples and corrupted samples using the graph convolutional network (GCN) model and determines the corrected label according to ỹ_i = argmax_c q_ic, where q_ic is the probability that sample i is predicted to be of class c, and the class with the highest probability is selected as the corrected label.
3. The framework of claim 1 or 2, further comprising a bilevel optimization algorithm that improves the robustness of the model and the accuracy of label correction by alternately optimizing the noise detection module and the noise cleaning module, specifically comprising an upper-level optimization of the noise detection module, which improves the model's ability to recognize noise labels by minimizing the anti-noise loss function L_r, and a lower-level optimization of the noise cleaning module, which achieves efficient label correction on labeled and unlabeled samples by minimizing the semi-supervised learning loss function L_ssl.
4. The framework of claim 3, wherein the objective function of the bilevel optimization algorithm minimizes the anti-noise loss L_r at the upper level subject to the lower-level minimization of the semi-supervised loss L_ssl, where θ is a parameter of the noise detection module, ω_c is a parameter of the graph convolutional network (GCN) of class c in the noise cleaning module, f_t(x; θ) represents the output of the noise detection module at the t-th round, and A_c and E_c are respectively the adjacency matrix and the feature matrix of class c.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411587160.0A CN119478521A (en) | 2024-11-08 | 2024-11-08 | A collaborative assisted label correction method for medical image analysis |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411587160.0A CN119478521A (en) | 2024-11-08 | 2024-11-08 | A collaborative assisted label correction method for medical image analysis |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119478521A true CN119478521A (en) | 2025-02-18 |
Family
ID=94576700
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411587160.0A Pending CN119478521A (en) | 2024-11-08 | 2024-11-08 | A collaborative assisted label correction method for medical image analysis |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119478521A (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090129684A1 (en) * | 2007-11-15 | 2009-05-21 | Seung Soo Lee | Method and apparatus for compressing text and image |
| KR20210093068A (en) * | 2020-01-17 | 2021-07-27 | 정인호 | Gamma Correction Method and Auto Encoder-based Image Correction Method |
| CN117456306A (en) * | 2023-11-17 | 2024-01-26 | 电子科技大学 | Label self-correction method based on meta-learning |
Non-Patent Citations (2)
| Title |
|---|
| WANG Da et al.: "Dual-Parallel Cross Denoising Convolutional Neural Network for Corrupted Image Classification", Computer Engineering and Applications, no. 18, 15 September 2018 (2018-09-15) * |
| CHEN Xuan et al.: "Co-Assistant Networks for Label Correction", Medical Image Computing and Computer Assisted Intervention - MICCAI 2023, 1 October 2023 (2023-10-01), pages 159-168, XP047671256, DOI: 10.1007/978-3-031-43898-1_16 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||