
CN110458829A - Image quality control method, device, equipment and storage medium based on artificial intelligence - Google Patents


Info

Publication number
CN110458829A
CN110458829A
Authority
CN
China
Prior art keywords
image
fundus image
module
quality
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910745023.8A
Other languages
Chinese (zh)
Other versions
CN110458829B (en)
Inventor
边成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd filed Critical Tencent Healthcare Shenzhen Co Ltd
Priority to CN201910745023.8A
Publication of CN110458829A
Application granted
Publication of CN110458829B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The embodiment of the present application discloses an image quality control method based on artificial intelligence. The method comprises: acquiring a target fundus image to be quality-controlled; and obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, the prediction result comprising the mutually exclusive probabilities that the target fundus image belongs to different quality types. The fundus image quality control system comprises a discrimination module, an attention mechanism module and a limiting and constraining module. The discrimination module extracts image features through at least two discrimination models and outputs them to the attention mechanism module; the attention mechanism module extracts attention features from the image features through an attention mechanism network and outputs them to the limiting and constraining module; the limiting and constraining module fuses the attention features through a limiting and constraining model and outputs the mutually exclusive probabilities that the image belongs to different quality types. Whether the target fundus image is qualified is then determined according to the quality type prediction result. In this way, quality control of fundus images is realized at the front end.

Description

Image quality control method, device, equipment and storage medium based on artificial intelligence
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an image quality control method, apparatus, device, and storage medium based on artificial intelligence.
Background
With the development of image deep learning technology, the application requirements of image-based screening systems in various fields are increasingly significant, such as image-based disease screening in the medical field, image-based quality control in the product production field, and the like.
The screening accuracy of an image-based screening system depends not only on the performance of the screening system itself but, more importantly, on the image quality of the front-end input; the image quality of the front-end input therefore needs to be controlled in application scenarios of image-based screening systems.
At present, in order to meet this practical application requirement, there is an urgent need for a solution that can perform quality control on images while ensuring the accuracy of that quality control.
Disclosure of Invention
The embodiment of the application provides an image quality control method, device, equipment and storage medium based on artificial intelligence, which can be used for carrying out quality control on images and ensuring the accuracy of the quality control.
In view of the above, a first aspect of the present application provides an image quality control method based on artificial intelligence, including:
acquiring a target fundus image to be quality controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module which are obtained based on training of a fundus image training sample set; the discrimination module is used for extracting image characteristics through at least two discrimination models and outputting the image characteristics to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics through an attention mechanism network aiming at the image characteristics and outputting the attention characteristics to the limiting constraint module; the limiting and constraining module is used for fusing the attention features through a limiting and constraining model and outputting mutual exclusion probabilities that the images belong to different quality types;
and determining whether the target fundus image is qualified according to the quality type prediction result of the target fundus image.
The second aspect of the present application provides an image quality control method based on artificial intelligence, including:
acquiring a target image to be quality-controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutual exclusion probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, wherein the discrimination module, the attention mechanism module and the limit constraint module are obtained by training based on an image training sample set; the discrimination module is used for extracting image characteristics through at least two discrimination models and outputting the image characteristics to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics through an attention mechanism network aiming at the image characteristics and outputting the attention characteristics to the limiting constraint module; the limiting and constraining module is used for fusing the attention features through a limiting and constraining model and outputting mutual exclusion probabilities that the images belong to different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
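The final determination step in both method aspects can be pictured as a simple decision over the mutually exclusive probabilities. The sketch below is an illustration only: the six quality-type names are taken from the discrimination models described later in this document, and the "image is qualified when the clear type wins" rule is an assumption, not the patent's exact criterion.

```python
# Quality types drawn from the six discrimination models described later;
# their order here is an illustrative assumption.
QUALITY_TYPES = [
    "clear",
    "refractive interstitial turbidity",
    "global exposure",
    "local exposure",
    "large-area contamination",
    "other",
]

def is_qualified(mutual_exclusion_probs):
    """Return True when the predicted quality type is 'clear'.

    `mutual_exclusion_probs` stands in for the quality type prediction
    result: one probability per quality type, summing to 1.
    """
    # Assumed rule: the image is qualified iff "clear" has the highest
    # mutually exclusive probability.
    winner = QUALITY_TYPES[mutual_exclusion_probs.index(max(mutual_exclusion_probs))]
    return winner == "clear"

print(is_qualified([0.85, 0.05, 0.03, 0.03, 0.02, 0.02]))  # → True
print(is_qualified([0.10, 0.60, 0.10, 0.10, 0.05, 0.05]))  # → False
```

A real deployment might instead apply a confidence threshold on the winning probability; the argmax rule above is simply the most direct reading of "determine whether the target image is qualified according to the quality type prediction result".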
The third aspect of the present application provides an image quality control device based on artificial intelligence, including:
the acquisition module is used for acquiring a target fundus image to be quality controlled;
the processing module is used for obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module which are obtained based on training of a fundus image training sample set; the discrimination module is used for extracting image characteristics through at least two discrimination models and outputting the image characteristics to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics through an attention mechanism network aiming at the image characteristics and outputting the attention characteristics to the limiting constraint module; the limiting and constraining module is used for fusing the attention features through a limiting and constraining model and outputting mutual exclusion probabilities that the images belong to different quality types;
and the determining module is used for determining whether the target fundus image is qualified or not according to the quality type prediction result of the target fundus image.
The present application in a fourth aspect provides an image quality control device based on artificial intelligence, comprising:
the acquisition module is used for acquiring a target image to be quality controlled;
the processing module is used for obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutual exclusion probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, wherein the discrimination module, the attention mechanism module and the limit constraint module are obtained by training based on an image training sample set; the discrimination module is used for extracting image characteristics through at least two discrimination models and outputting the image characteristics to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics through an attention mechanism network aiming at the image characteristics and outputting the attention characteristics to the limiting constraint module; the limiting and constraining module is used for fusing the attention features through a limiting and constraining model and outputting mutual exclusion probabilities that the images belong to different quality types;
and the determining module is used for determining whether the target image is qualified or not according to the quality type prediction result of the target image.
A fifth aspect of the present application provides an apparatus comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to execute, according to the computer program, the steps of the artificial intelligence based image quality control method of the first or second aspect.
A sixth aspect of the present application provides a computer-readable storage medium for storing a computer program for executing the artificial intelligence based image quality control method according to the first or second aspect.
A seventh aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the artificial intelligence based image quality control method of the first or second aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
the embodiment of the application provides an image quality control method based on artificial intelligence, which uses a fundus image quality control system trained with a machine learning algorithm to judge the quality of fundus images at the front end, thereby providing qualified fundus images to the back-end fundus artificial intelligence (AI) screening system and improving the diagnostic accuracy of that system. Specifically, in the image quality control method provided in the embodiment of the present application, after a target fundus image to be quality-controlled is acquired, the quality type prediction result of the target fundus image is determined through a fundus image quality control system; the prediction result comprises the mutually exclusive probabilities that the target fundus image belongs to different quality types. The fundus image quality control system comprises a discrimination module, an attention mechanism module and a limiting and constraining module, all obtained by training on a fundus image training sample set: the discrimination module extracts image features through at least two discrimination models and outputs them to the attention mechanism module; the attention mechanism module extracts attention features from the image features through an attention mechanism network and outputs them to the limiting and constraining module; and the limiting and constraining module fuses the attention features through a limiting and constraining model and outputs the mutually exclusive probabilities that the image belongs to different quality types. Further, whether the target fundus image is qualified is determined according to its quality type prediction result. In this way, the fundus image quality control system, comprising a plurality of discrimination models, an attention mechanism network and a limiting and constraining model, intelligently and accurately judges whether the target fundus image to be quality-controlled is a qualified image, realizing quality control of fundus images at the front end.
Drawings
Fig. 1 is a schematic view of an application scenario of an image quality control method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image quality control method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a fundus image quality control system according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a mobile-Net network structure according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a training method for a fundus image quality control system according to an embodiment of the present disclosure;
fig. 6 is a schematic view of a working structure of a fundus image quality control system according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating an experimental result of an image quality control method according to an embodiment of the present application;
fig. 8 is a schematic flowchart of another image quality control method according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an image quality control apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of another image quality control apparatus according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of another image quality control apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of another image quality control apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the medical field, fundus AI screening systems have come into increasingly wide use. However, many doctors report low confidence in them: these systems usually screen directly on the basis of manually captured fundus images, many of which suffer from problems such as inaccurate exposure and contamination, and screening directly on fundus images with quality problems seriously undermines the reliability of the screening results, leading to many invalid screenings.
In order to improve the confidence of a fundus AI screening system, the embodiment of the application provides an image quality control method based on artificial intelligence. The method judges the quality of a fundus image at the front end, thereby guaranteeing that qualified fundus images are provided to the fundus AI screening system at the back end and improving the confidence of that system. Specifically, the image quality control method provided by the embodiment of the application discriminates a target fundus image to be quality-controlled by using a fundus image quality control system trained with a machine learning algorithm, determines the quality type prediction result of the target fundus image, namely the mutually exclusive probabilities that the target fundus image belongs to different quality types, and accordingly determines whether the target fundus image is qualified based on that prediction result. The fundus image quality control system, comprising a plurality of discrimination models, an attention mechanism network and a limiting and constraining model, can intelligently and accurately identify whether each fundus image to be quality-controlled is a qualified image, realizing accurate quality control of fundus images at the front end.
It should be noted that the image quality control method based on artificial intelligence provided in the embodiment of the present application can be applied to not only a scene in which the quality of fundus images is controlled, but also other scenes in which the quality of images is controlled, such as a scene in which product quality monitoring is performed based on images in the field of product production, a scene in which quality control is performed on images of other organs in the field of medical care, and the like; the application scenario to which the image quality control method provided in the embodiment of the present application is applied is not limited at all.
It should be understood that the image quality control method based on artificial intelligence provided by the embodiment of the present application can be applied to devices with data processing capability, such as terminal devices, servers, and the like; the terminal device may be a computer, a Personal Digital Assistant (PDA), or the like; the server may specifically be an application server or a Web server, and in actual deployment, the server may be an independent server or a cluster server.
In order to facilitate understanding of the technical solutions provided in the embodiments of the present application, taking an example that the image quality control method provided in the embodiments of the present application is applied to a server, an application scenario in which the image quality control method for quality control of fundus images provided in the embodiments of the present application is applied is exemplarily described below.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of an artificial intelligence-based image quality control method according to an embodiment of the present application. As shown in fig. 1, the application scenario includes: a fundus image capturing apparatus 110, a quality control server 120, and a screening server 130; among them, the fundus image capturing apparatus 110 can capture a fundus image for a patient under the correct operation of the operator and upload the fundus image to the quality control server 120; the quality control server 120 is configured to execute the image quality control method provided in the embodiment of the present application, and determine whether the fundus image uploaded by the fundus image capturing device 110 is a qualified image, and a fundus image quality control system is running in the quality control server 120; the screening server 130 is used for acquiring the fundus images judged to be qualified images by the quality control server 120, screening the fundus images based on the acquired qualified fundus images, and accordingly generating a diagnosis report and providing diagnosis reference opinions for doctors.
In a specific application, the fundus image capturing apparatus 110 uploads the fundus images it captures to the quality control server 120. After receiving a fundus image, the quality control server 120 inputs it, as a target fundus image to be quality-controlled, into the fundus image quality control system running on the server, and uses the system to determine the quality type prediction result of the target fundus image, which comprises the mutually exclusive probabilities that the target fundus image belongs to different quality types. The quality control server 120 may then determine, based on the quality type prediction result, whether the fundus image uploaded by the fundus image capturing apparatus 110 is acceptable. If the fundus image is judged qualified, the quality control server 120 may further transmit it to the screening server 130, so that the screening server 130 performs screening on it and generates a relevant diagnosis report.
It should be noted that the fundus image quality control system running in the quality control server 120 is obtained based on the training of a fundus image training sample set, and includes a discrimination module, an attention mechanism module and a limitation constraint module; the discrimination module is used for extracting image characteristics of a target fundus image through at least two discrimination models and outputting the image characteristics extracted by each discrimination model to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the input image characteristics through an attention mechanism network and outputting the extracted attention characteristics to the limiting and constraining module; the limiting and constraining module is used for fusing the input attention characteristics through a limiting and constraining model, so that mutual exclusion probabilities that the target fundus images belong to different quality types are generated, namely quality type prediction results of the target fundus images are generated.
Therefore, the fundus image to be quality-controlled is accurately identified by the fundus image quality control system, comprising a plurality of discrimination models, the attention mechanism network and the limiting and constraining model, to determine whether it is a qualified image. Quality control of fundus images is thus realized at the front end (i.e., the quality control server 120), qualified fundus images are provided to the back end (i.e., the screening server 130), and the screening confidence of the back end is improved.
It should be understood that the application scenario shown in fig. 1 is only an example, and in practical applications, the image quality control method based on artificial intelligence provided in the embodiment of the present application may be applied to other application scenarios that require quality control of an image besides quality control of a fundus image, and the application scenario to which the image quality control method based on artificial intelligence provided in the embodiment of the present application is applied is not limited at all.
The following describes an image quality control method based on artificial intelligence provided by the present application by way of example.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image quality control method based on artificial intelligence according to an embodiment of the present application, where the image quality control method is suitable for quality control of fundus images. For convenience of description, the following embodiments take a server as an example of an execution subject, and describe the image quality control method. As shown in fig. 2, the image quality control method includes the following steps:
step 201: and acquiring a target fundus image to be quality controlled.
In a scene of quality control of fundus images, the fundus image capturing apparatus captures fundus images of a patient under the control of an operator. To ensure the screening confidence of the back-end fundus AI screening system, the server can intercept a fundus image captured by the apparatus before it is transmitted to the fundus AI screening system, take it as the target fundus image to be quality-controlled, perform quality control on it, and judge whether it is a qualified image.
In practical applications, the fundus image capturing apparatus may transmit only one fundus image to the server at a time, or may transmit a plurality of fundus images to the server at a time, without any limitation on the number of target fundus images acquired by the server at a time.
It is to be understood that, in some cases, due to irregular operation by the operator, poor cooperation from the patient, and the like, an image captured by the fundus image capturing apparatus may not actually be a fundus image; after such an image is uploaded to the server, the server will still perform quality control on it as a target fundus image.
Step 202: and obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutual exclusion probabilities that the target fundus image belongs to different quality types.
After the server acquires a target fundus image, it inputs the image into the fundus image quality control system running on it, analyzes and processes the target fundus image with that system, and obtains the system's output as the quality type prediction result of the target fundus image. The prediction result comprises the mutually exclusive probabilities that the target fundus image belongs to different quality types, i.e., the probability of each quality type, and these probabilities sum to 1.
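Because the quality types are mutually exclusive, the output probabilities must be non-negative and sum to 1. A softmax over per-type scores is the standard way to obtain such a distribution; the patent does not specify its output layer, so the sketch below, with made-up scores, is only an illustration of the "mutual exclusion probability" property.

```python
import math

def softmax(logits):
    """Turn raw per-type scores into mutually exclusive probabilities."""
    m = max(logits)  # subtracting the max keeps exp() numerically stable
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up raw scores for six quality types (clear, refractive interstitial
# turbidity, global exposure, local exposure, contamination, other).
scores = [2.1, 0.3, -0.5, -1.2, -0.7, -2.0]
probs = softmax(scores)
print(round(sum(probs), 6))  # → 1.0  (probabilities sum to 1)
```

Each probability is non-negative and the sum is 1, matching the requirement stated above that the quality types be mutually exclusive.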
The fundus image quality control system is trained end-to-end on a fundus image training sample set and comprises a discrimination module, an attention mechanism module and a limiting and constraining module. The discrimination module comprises at least two discrimination models, each of which extracts corresponding image features from the input target fundus image; the discrimination module then outputs the image features extracted by each discrimination model to the attention mechanism module. The attention mechanism module is composed of an attention mechanism network, which extracts attention features based on the image features output by the discrimination module; specifically, the attention mechanism network can adaptively enhance the weight of image features that have a large influence on the prediction result to obtain the attention features, which the attention mechanism module then outputs to the limiting and constraining module. The limiting and constraining module is composed of a limiting and constraining model, through which the attention features output by the attention mechanism module are fused, the distance between the image features is enlarged, and image features from different distributions are approximated to similar distributions, thereby obtaining the mutually exclusive probabilities that the target fundus image belongs to different quality types.
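The data flow through the three modules can be sketched as follows. This is a toy stand-in, not the patented system: the "discrimination models" are plain functions, the attention scores are fixed invented numbers rather than learned weights, and the fusion is a bare sum; only the module-to-module wiring mirrors the description above.

```python
import math

def softmax(xs):
    # Mutually exclusive probabilities from raw scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def discrimination_module(image, models):
    # Each discrimination model extracts its own feature vector
    # from the same input image.
    return [model(image) for model in models]

def attention_module(features, attention_scores):
    # Adaptively enhance the weight of features that influence the
    # prediction more (here: fixed, invented attention scores).
    weights = softmax(attention_scores)
    return [[w * x for x in f] for w, f in zip(weights, features)]

def constraint_module(attended_features):
    # Fuse the attended features and output mutually exclusive
    # probabilities over the quality types.
    logits = [sum(f) for f in attended_features]
    return softmax(logits)

# Toy "discrimination models": each just shifts the pixel values.
models = [lambda img, b=b: [px + b for px in img] for b in (0.5, -0.2, 0.1)]
image = [0.2, 0.4, 0.6]

features = discrimination_module(image, models)
attended = attention_module(features, attention_scores=[1.0, 0.2, -0.3])
probs = constraint_module(attended)
```

Note the stated purpose of the real constraint model (enlarging inter-feature distances and pulling different distributions toward similar ones) is a learned behavior that this sum-then-softmax sketch does not reproduce; the sketch only shows where in the pipeline that fusion happens.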
In order to further understand the fundus image quality control system in the embodiment of the present application, the discrimination module, the attention mechanism module, and the restriction constraint module will be described in detail below.
In practical applications, the discrimination module in the fundus image quality control system may specifically include the following six discrimination models: a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model. The clear discrimination model is used to extract, for an input image, image features for discriminating that the image belongs to the clear type of fundus image; the refractive interstitial turbidity discrimination model is used to extract image features for discriminating that the image belongs to the refractive interstitial turbidity type; the global exposure discrimination model is used to extract image features for discriminating that the image belongs to the global exposure type; the local exposure discrimination model is used to extract image features for discriminating that the image belongs to the local exposure type; the large-area contamination discrimination model is used to extract image features for discriminating that the image belongs to the large-area contamination type; and the other-category discrimination model is used to extract image features for discriminating that the image does not belong to the fundus image type.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an exemplary fundus image quality control system according to an embodiment of the present application. As shown in fig. 3, the determination module includes: a clear discrimination model, a refraction interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and other classification discrimination models. After a target fundus image is input into a fundus image quality control system, each discrimination model in the discrimination module correspondingly extracts image characteristics for discriminating that the target fundus image belongs to a quality type corresponding to the discrimination model aiming at the target fundus image; specifically, the clear discrimination model extracts image features for discriminating that the target fundus image belongs to a clear type of fundus images, the refractive-interstitial-turbid discrimination model extracts image features for discriminating that the target fundus image belongs to a refractive-interstitial-turbid type, the global exposure discrimination model extracts image features for discriminating that the target fundus image belongs to a global exposure type, the local exposure discrimination model extracts image features for discriminating that the target fundus image belongs to a local exposure type, the large-area contamination discrimination model extracts image features for discriminating that the target fundus image belongs to a large-area contamination type, and the other-category discrimination model extracts image features for discriminating that the target fundus image does not belong to a fundus image type. 
And then the discrimination module outputs the image characteristics extracted by the clear discrimination model, the refraction interstitial turbidity discrimination model, the global exposure discrimination model, the local exposure discrimination model, the large-area contamination discrimination model and other classification discrimination models to a characteristic constraint framework based on an attention mechanism.
It should be noted that, in practical applications, the discrimination module may include, in addition to the six discrimination models above (the clear discrimination model, the refractive interstitial turbidity discrimination model, the global exposure discrimination model, the local exposure discrimination model, the large-area contamination discrimination model and the other-category discrimination model), further discrimination models according to actual requirements; that is, more or fewer discrimination models may be set in the discrimination module as needed. No limitation is placed here on the types or the number of discrimination models included in the discrimination module.
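The configurable per-type layout described above can be sketched as follows; this is a toy illustration only — the stand-in extractor functions and names are assumptions, and a real discrimination model would be a convolutional network rather than a simple function.

```python
# Illustrative sketch: the discrimination module holds one feature
# extractor per quality type and returns one feature vector per model,
# which is then handed to the attention mechanism module.

def make_extractor(quality_type):
    # Stand-in for one discrimination model; returns a toy 3-dim feature.
    def extract(image):
        s = sum(image) / len(image)
        return [s, s * 0.5, len(quality_type) * 0.1]
    return extract

class DiscriminationModule:
    def __init__(self, quality_types):
        # More or fewer models can be configured, as the text notes.
        self.extractors = {q: make_extractor(q) for q in quality_types}

    def forward(self, image):
        # One feature vector per discrimination model.
        return {q: f(image) for q, f in self.extractors.items()}

module = DiscriminationModule(["clear", "global_exposure", "not_fundus_image"])
features = module.forward([0.2, 0.4, 0.6])
```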
In practical applications, each of the at least two discrimination models in the discrimination module may adopt a lightweight mobile-Net network structure, so as to accelerate the network and meet the online application requirements of the fundus image quality control system. Referring to fig. 4, fig. 4 is a schematic diagram of an exemplary mobile-Net network structure provided in the embodiment of the present application; this mobile-Net network structure can be applied to the six discrimination models in the discrimination module shown in fig. 3.
It should be noted that, in practical applications, the discrimination models in the discrimination module may adopt network structures other than mobile-Net, such as a deep residual network (ResNet), DenseNet or VGGNet (Visual Geometry Group Network); no limitation is placed on the network structure adopted by a discrimination model. In addition, the network structures adopted by the discrimination models in the discrimination module may be the same or different.
It should be noted that each discrimination model in the discrimination module can output, in addition to the image features extracted from the input target fundus image, a confidence that the target fundus image belongs to the quality type corresponding to that discrimination model. For example, the clear discrimination model can output, in addition to the image features for discriminating that the target fundus image belongs to the clear type of fundus image, a confidence that the target fundus image belongs to the clear type of fundus image. The confidence value ranges from 0% to 100%, and when the fundus image quality control system is trained, the model parameters of each discrimination model can be adjusted through a binary cross-entropy loss based on the confidence output by that discrimination model.
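The binary cross-entropy loss mentioned above can be sketched as follows; this is a standard formulation, not code from the embodiment, and the example confidence values are assumptions.

```python
# Sketch: binary cross-entropy loss on a discrimination model's confidence
# that the image belongs to its quality type (label 1 = belongs, 0 = not).
import math

def binary_cross_entropy(confidence, label, eps=1e-7):
    """Penalize confident wrong predictions heavily, correct ones lightly."""
    p = min(max(confidence, eps), 1.0 - eps)   # clamp away from log(0)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

loss_good = binary_cross_entropy(0.95, 1)   # confident and correct -> small
loss_bad = binary_cross_entropy(0.95, 0)    # confident and wrong -> large
```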
In order to effectively utilize the image features extracted by each discrimination model in the discrimination module, enlarge the differences between the image features, and reduce the intra-group error between the models, the fundus image quality control system in the embodiment of the present application further adopts a feature constraint architecture based on an attention mechanism to perform subsequent processing on the image features extracted by each discrimination model; the feature constraint architecture comprises the attention mechanism module and the restriction constraint module.
In one possible implementation, the attention mechanism network employed by the attention mechanism module includes a first network branch and a second network branch; the first network branch comprises a convolutional layer, a global pooling layer and a full-connection layer, and is used for extracting attention weight aiming at input image features; the second network branch comprises a channel multiplier which is used for carrying out channel multiplication on the attention weight extracted by the first network branch and the input image characteristic to obtain the attention characteristic.
In a specific application, the discrimination module outputs the image features extracted by each discrimination model to the attention mechanism module. The first network branch processes the stacked image features with its convolutional layer to obtain features to be processed, further processes the features to be processed with a compression channel consisting of the global pooling layer to obtain compressed features, and then processes the compressed features with an extraction layer consisting of the two fully-connected layers to obtain the attention weights. The second network branch then performs channel multiplication between the attention weights extracted by the first network branch and the input image features to obtain the attention features; that is, the attention weights extracted by the first network branch are used to weight the image features extracted by the different discrimination models, adaptively enhancing the weights of the image features that have a large influence on the prediction result, thereby obtaining the attention features.
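The weighting path above can be sketched with a stdlib-only toy example: global pooling compresses each model's features to one value, two small "fully connected" steps produce a weight, and the weights are multiplied channel-wise back onto the features. The scalar weights `w1` and `w2` stand in for learned weight matrices and are pure assumptions.

```python
# Illustrative sketch of attention weighting (squeeze -> excite -> rescale).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_weights(channel_means, w1=1.5, w2=2.0):
    # Stand-in for the two fully-connected layers; w1, w2 are hypothetical.
    hidden = [max(0.0, w1 * m) for m in channel_means]   # FC + ReLU
    return [sigmoid(w2 * h) for h in hidden]             # FC + sigmoid -> (0,1)

def apply_attention(features):
    # features: one vector per discrimination model ("one channel" each)
    means = [sum(f) / len(f) for f in features]          # global pooling
    weights = attention_weights(means)
    # channel multiplication: scale each model's features by its weight
    return [[w * x for x in f] for w, f in zip(weights, features)]

feats = [[0.9, 0.8], [0.1, 0.2], [0.5, 0.4]]   # toy features from 3 models
attended = apply_attention(feats)
```

Because each weight lies in (0, 1), channels with larger pooled responses are suppressed less, which is one simple way to realize the adaptive enhancement described in the text.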
It should be understood that the structure of the attention mechanism network is merely an example, and in practical applications, a network model with other structures may also be adopted as the attention mechanism network according to actual needs, and no limitation is made to the structure of the attention mechanism network here.
In a possible implementation manner, the restriction constraint model in the restriction constraint module may adopt four densely connected BottleNeck network structures, thereby strengthening gradient back-propagation during training and improving the accuracy of the restriction constraint model.
In a specific application, after the attention mechanism module generates the attention features, it outputs them to the restriction constraint module. The restriction constraint model in the restriction constraint module is composed of four densely connected BottleNeck network structures. The BottleNeck structure is a technique widely used in model compression to improve operation speed: using the channel-compression property of 1x1 convolution, an original 3x3 convolution can be replaced by a 1x1 -> 3x3 -> 1x1 convolution sequence, greatly reducing the computation of the network. In addition, the densely connected structure helps to strengthen the back-propagation of gradients during training and to improve the accuracy of the restriction constraint model. Finally, a global pooling layer is added to obtain the probability corresponding to each quality type, and the model is further constrained with an N-class cross-entropy loss (where N equals the number of discrimination models), so that the distance between the image features is enlarged, image features from different distributions are drawn toward similar distributions, and the model's ability to classify ambiguous images is improved.
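The computational saving of the 1x1 -> 3x3 -> 1x1 replacement can be checked with a short calculation; the channel widths (256 in, 64 bottleneck) are assumed example values, not figures from the embodiment.

```python
# Arithmetic sketch: multiplications per output spatial position for a
# direct 3x3 convolution versus a 1x1 -> 3x3 -> 1x1 BottleNeck sequence.

def conv_mults(k, c_in, c_out):
    """Multiplications per spatial position for a k x k convolution."""
    return k * k * c_in * c_out

c, bottleneck_c = 256, 64                          # hypothetical channel widths
direct = conv_mults(3, c, c)                       # plain 3x3 convolution
bottleneck = (conv_mults(1, c, bottleneck_c)       # 1x1 channel compression
              + conv_mults(3, bottleneck_c, bottleneck_c)
              + conv_mults(1, bottleneck_c, c))    # 1x1 channel expansion
ratio = direct / bottleneck                        # roughly 8.5x fewer here
```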
It should be noted that the above model structure of the restriction constraint model is only an example. In practical applications, the number of convolutional layers in a BottleNeck network structure may be increased or decreased, the restriction constraint module may include more or fewer BottleNeck network structures, and the final global pooling layer may be replaced by a fully-connected layer; the model structure of the restriction constraint model is not specifically limited herein.
Step 203: And determining whether the target fundus image is qualified or not according to the quality type prediction result of the target fundus image.
And after the server acquires the quality type prediction result determined by the fundus image quality control system, whether the target fundus image is qualified or not can be further determined according to the quality type prediction result. Specifically, the quality type prediction result includes mutual exclusion probabilities that the target fundus image belongs to different quality types, so the server may determine the quality type corresponding to the maximum probability in the quality type prediction result, where the quality type is the quality type corresponding to the target fundus image, and further determine whether the target fundus image is qualified according to the quality type corresponding to the target fundus image.
In a specific implementation, the server selects the quality type corresponding to the maximum probability from the quality type prediction result of the target fundus image. When that quality type belongs to the preset qualified quality types, the server determines that the target fundus image is qualified; when it belongs to the preset unqualified quality types, the server determines that the target fundus image is unqualified.
Taking as an example a discrimination module that includes a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model, the quality type prediction result finally output by the fundus image quality control system will include a first probability that the target fundus image belongs to the clear type of fundus image, a second probability that it belongs to the refractive interstitial turbidity type, a third probability that it belongs to the global exposure type, a fourth probability that it belongs to the local exposure type, a fifth probability that it belongs to the large-area contamination type, and a sixth probability that it does not belong to the fundus image type. These six probabilities are mutually exclusive, and their sum is 1.
The quality type corresponding to the maximum probability value in the quality type prediction result is then determined. When the quality type corresponding to the maximum probability value is any one of the clear type, the local exposure type and the large-area contamination type, the target fundus image is determined to be qualified; when it is any one of the refractive interstitial turbidity type, the global exposure type and the type not belonging to a fundus image, the target fundus image is determined to be unqualified.
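The qualification decision above reduces to an argmax followed by a set-membership test; the type names and probability values below are illustrative assumptions.

```python
# Sketch: pick the quality type with the maximum predicted probability
# and check it against the preset set of qualified quality types.

QUALIFIED_TYPES = {"clear", "local_exposure", "large_area_contamination"}

def judge(prediction):
    """prediction: mapping of quality type -> mutually exclusive probability."""
    best_type = max(prediction, key=prediction.get)
    return best_type, best_type in QUALIFIED_TYPES

pred = {"clear": 0.62, "refractive_interstitial_turbidity": 0.08,
        "global_exposure": 0.05, "local_exposure": 0.15,
        "large_area_contamination": 0.06, "not_fundus_image": 0.04}
best, qualified = judge(pred)
```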
It should be understood that, when the quality type that can be determined by the fundus image quality control system is another type, the server may set a qualified quality type and an unqualified quality type for these types in advance, and then determine whether the target fundus image is a qualified image based on the quality type prediction result of the target fundus image; the quality type that can be discriminated by the fundus image quality control system in the present application is not limited at all.
Optionally, the image quality control method provided in the embodiment of the present application may further prompt an operator of the fundus image capturing device whether the target fundus image is qualified or not according to a quality control result of the target fundus image, and correspondingly give a reason why the target fundus image is unqualified when the target fundus image is determined to be unqualified, so that the operator can conveniently capture the qualified fundus image again.
Specifically, when the target fundus image is unqualified, the server can acquire the reason why the target fundus image is unqualified and issue an information prompt accordingly, so as to inform the user that the target fundus image is unqualified and of the reason why.
Take as an example a fundus image quality control system whose determinable quality types comprise the clear type of fundus image, the refractive interstitial turbidity type, the global exposure type, the local exposure type, the large-area contamination type and the type not belonging to a fundus image, where the preset qualified quality types comprise the clear type, the local exposure type and the large-area contamination type, and the preset unqualified quality types comprise the refractive interstitial turbidity type, the global exposure type and the type not belonging to a fundus image. When the server determines that the target fundus image belongs to the refractive interstitial turbidity type, it can prompt that the target fundus image is unqualified because it belongs to the refractive interstitial turbidity type.
The artificial-intelligence-based image quality control method can judge the quality of fundus images at the front end, so that qualified fundus images can be provided to the back-end fundus AI screening system, improving the confidence of the fundus AI screening system. Specifically, the image quality control method provided by the embodiment of the application discriminates a target fundus image to be quality-controlled by using a fundus image quality control system trained with a machine learning algorithm, determines the quality type prediction result of the target fundus image, namely the mutually exclusive probabilities that the target fundus image belongs to different quality types, and accordingly determines whether the target fundus image is qualified based on the quality type prediction result. The fundus image quality control system, comprising a plurality of discrimination models, an attention mechanism network and a restriction constraint model, can intelligently and accurately identify whether each fundus image to be quality-controlled is a qualified image, realizing accurate quality control of fundus images at the front end.
It should be understood that, in practical applications, whether the image quality control method provided by the embodiment of the present application can accurately perform quality control on a target fundus image to be quality controlled mainly depends on the working performance of the fundus image quality control system, and the working performance of the fundus image quality control system is closely related to the training process of the fundus image quality control system. The following describes a training method of a fundus image quality control system provided in an embodiment of the present application by way of an embodiment.
Referring to fig. 5, fig. 5 is a schematic flowchart of a training method of a fundus image quality control system according to an embodiment of the present application. For convenience of description, the following embodiments describe a training method of the fundus image quality control system, taking a server as an example of an execution subject. As shown in fig. 5, the training method of the fundus image quality control system includes the steps of:
Step 501: And acquiring the fundus image training sample set, wherein the fundus image training sample set comprises a plurality of fundus image samples and the labeling quality types corresponding to the fundus image samples.
Before training a fundus image quality control system, a large number of fundus image training samples generally need to be obtained to form a fundus image training sample set; each fundus image training sample comprises a fundus image sample and the labeling quality type corresponding to that fundus image sample.
In practical application, after approval of relevant institutions, the server can collect fundus image samples from databases corresponding to hospitals or community hospitals, and then, the quality types of the collected fundus image samples are marked in a manual marking mode, so that fundus image training samples are obtained.
It should be noted that, in order to ensure that a better training effect is obtained, the server may pre-process each fundus image sample acquired by the server, for example, adjust each fundus image sample to a preset size (e.g., 512 × 512), perform normalization processing on each fundus image sample (e.g., subtracting an image mean and dividing by an image variance), and so on.
In order to increase the data volume of the fundus image training samples, the server can also perform operations such as random horizontal flipping, random elastic deformation and random addition of color spots on the acquired fundus image samples, and then label the quality type of each fundus image sample obtained through these operations, thereby increasing the data volume of the fundus image training samples and enlarging the scale of the fundus image training sample set.
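The preprocessing and augmentation steps described above can be sketched as follows. The normalization divides by the image variance as the text states (many pipelines divide by the standard deviation instead), and the pixel values are illustrative assumptions.

```python
# Sketch: normalization (subtract mean, divide by variance, per the text)
# and random horizontal flipping as one of the augmentation operations.
import random

def normalize(pixels):
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [(p - mean) / var for p in pixels]

def random_hflip(rows, rng):
    # rows: image as a list of pixel rows; flip the whole image w.p. 0.5
    return [row[::-1] for row in rows] if rng.random() < 0.5 else rows

img = [0.1, 0.5, 0.9, 0.5]
norm = normalize(img)                               # zero-mean after shift
flipped = random_hflip([[1, 2, 3], [4, 5, 6]], random.Random(0))
```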
Step 502: and initializing parameters of the pre-constructed fundus image quality control system.
Each module in the fundus image quality control system is constructed with a specific network model according to actual requirements; that is, the discrimination module, the attention mechanism module and the restriction constraint module in the fundus image quality control system are constructed. Then, the parameters of each module in the fundus image quality control system are initialized; that is, parameter initialization is performed on each discrimination model in the discrimination module, the attention mechanism network in the attention mechanism module, and the restriction constraint model in the restriction constraint module.
It should be noted that, in an actual application, the server may execute step 501 first and then step 502, may execute step 502 first and then step 501, and may also execute step 501 and step 502 simultaneously, where the execution order of step 501 and step 502 is not limited at all.
Step 503: and training parameters on each model in the fundus image quality control system with initialized parameters according to the fundus image training sample set until the fundus image quality control system meeting the training end conditions is obtained by training.
After acquiring the fundus image training sample set and completing the parameter initialization of the fundus image quality control system, the server can further train the parameters of each model in the parameter-initialized fundus image quality control system by using the acquired fundus image training sample set; that is, the parameters of each discrimination model, the attention mechanism network and the restriction constraint model are trained until a fundus image quality control system meeting the training end condition is obtained.
During specific training, the server can input the fundus image samples in the fundus image training sample set into the parameter-initialized fundus image quality control system, obtain the probability output by each discrimination model that a fundus image sample belongs to its corresponding quality type, and obtain the mutually exclusive probabilities output by the restriction constraint model that the fundus image sample belongs to the different quality types. Then, based on the probability output by each discrimination model, the parameters of that discrimination model are adjusted through a binary cross-entropy loss; based on the mutually exclusive probabilities, the parameters of each model in the fundus image quality control system are adjusted through an N-class cross-entropy loss (where N equals the number of discrimination models in the discrimination module). This iterative training is repeated until a fundus image quality control system meeting the training end condition is obtained.
Because each discrimination model is only used to discriminate whether the input fundus image sample belongs to the quality type corresponding to that discrimination model, the probability output by each discrimination model only represents the probability that the fundus image sample belongs to that quality type; therefore, when a discrimination model is trained based on its output probability, its parameters can be adjusted directly based on the binary cross-entropy loss. The restriction constraint model, by contrast, is used to judge the possibility that the input fundus image sample belongs to each of the quality types: its output comprises the probability that the fundus image sample belongs to each quality type, and the sum of these probabilities is 1. Therefore, when the fundus image quality control system is trained based on the mutually exclusive probabilities output by the restriction constraint model, the parameters of each model in the fundus image quality control system need to be adjusted through an N-class cross-entropy loss, where N equals the number of quality types that the fundus image quality control system can determine, i.e., the number of discrimination models in the discrimination module.
Take as an example a discrimination module comprising a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model. When the server trains a fundus image quality control system comprising this discrimination module, it obtains the output probability of each of these six discrimination models and adjusts each model separately: the parameters of the clear discrimination model are adjusted with a binary cross-entropy loss based on the probability output by the clear discrimination model; the parameters of the refractive interstitial turbidity discrimination model are adjusted with a binary cross-entropy loss based on the probability output by the refractive interstitial turbidity discrimination model; and likewise, the parameters of the global exposure discrimination model, the local exposure discrimination model, the large-area contamination discrimination model and the other-category discrimination model are each adjusted with a binary cross-entropy loss based on that model's own output probability.
In addition, the server needs to acquire the mutually exclusive probabilities output by the restriction constraint model in the fundus image quality control system, which include the probability that a fundus image sample belongs to the clear type of fundus image, the probability that it belongs to the refractive interstitial turbidity type, the probability that it belongs to the global exposure type, the probability that it belongs to the local exposure type, the probability that it belongs to the large-area contamination type and the probability that it does not belong to the fundus image type, where the sum of these probabilities is 1. The server can then adjust the parameters of each model in the fundus image quality control system with a six-class cross-entropy loss based on these mutually exclusive probabilities.
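The N-class cross-entropy loss applied to these mutually exclusive probabilities can be sketched as follows (with N = 6 as in the example above); this is a standard formulation, and the probability values are assumptions for illustration.

```python
# Sketch: N-class cross-entropy loss on the restriction constraint model's
# mutually exclusive probabilities over the N quality types.
import math

def n_class_cross_entropy(probs, true_index, eps=1e-7):
    """Negative log-probability assigned to the labeled quality type."""
    return -math.log(max(probs[true_index], eps))

probs = [0.70, 0.06, 0.05, 0.09, 0.06, 0.04]   # hypothetical six-class output
loss = n_class_cross_entropy(probs, true_index=0)
```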
Specifically, when judging whether the fundus image quality control system meets the training end condition, a first system can be verified with test samples, where the first system is the model obtained by performing a first round of training on the fundus image quality control system with the fundus image training samples in the fundus image training sample set. The server inputs the fundus image samples of the test samples into the first system and processes them accordingly with the first system to obtain the mutually exclusive probabilities that each fundus image sample belongs to each quality type. The prediction accuracy of the first system is then determined according to the labeling quality types of the fundus image samples in the test samples and the output results of the first system; when the prediction accuracy is greater than a preset threshold value, the working performance of the first system is considered satisfactory, and the first system is determined to be a fundus image quality control system meeting the training end condition.
In addition, when judging whether the fundus image quality control system meets the training end condition, whether to continue training can be decided based on the several systems obtained from several rounds of training, so as to obtain a fundus image quality control system with optimal working performance. Specifically, the fundus image quality control systems obtained through multiple rounds of training can each be verified with the test samples. If the differences between the prediction accuracies of the systems obtained in successive rounds are small, the performance of the fundus image quality control system is considered to have no further room for improvement, and the system with the highest prediction accuracy can be selected as the fundus image quality control system meeting the training end condition. If the differences between the prediction accuracies are still large, the performance is considered to have room for improvement, and training can continue until a fundus image quality control system with the most stable and optimal performance is obtained.
The test samples may be obtained from a fundus image training sample set, for example, a plurality of fundus image training samples may be extracted from the fundus image training sample set as test samples according to a preset ratio.
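The two training-end checks described above (accuracy exceeding a preset threshold, and accuracy plateauing across rounds) can be sketched as follows; the threshold and plateau margin are assumed values, not figures from the embodiment.

```python
# Sketch: test-set prediction accuracy, plus a training-end check that
# requires the latest round to beat a threshold and recent rounds to plateau.

def accuracy(predicted_types, labeled_types):
    hits = sum(p == t for p, t in zip(predicted_types, labeled_types))
    return hits / len(labeled_types)

def training_finished(round_accuracies, threshold=0.9, plateau=0.005):
    if round_accuracies[-1] < threshold:
        return False
    # stop once the last few rounds differ by less than the plateau margin
    recent = round_accuracies[-3:]
    return max(recent) - min(recent) < plateau

acc = accuracy(["clear", "global_exposure", "clear"],
               ["clear", "global_exposure", "local_exposure"])
done = training_finished([0.88, 0.912, 0.913, 0.914])
```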
The training method of the fundus image quality control system adopts a pre-acquired fundus image training sample set to repeatedly and iteratively train the parameters of each model in the fundus image quality control system after the parameters are initialized until the fundus image quality control system meeting the training end conditions is obtained through training. Therefore, the trained fundus image quality control system has better working performance, and accurate quality control of the fundus image to be quality controlled based on the fundus image quality control system is guaranteed in practical application.
In order to further understand the image quality control method provided by the embodiment of the present application, the following gives an overall description of the method, taking as an example a fundus image quality control system whose determinable quality types include the clear type of fundus image, the refractive interstitial turbidity type, the global exposure type, the local exposure type, the large-area contamination type and the type not belonging to a fundus image.
Referring to fig. 6, fig. 6 is a schematic diagram of an operating architecture of an exemplary fundus image quality control system according to an embodiment of the present application. The fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module; the discrimination module comprises a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model, and all six discrimination models may adopt a mobile-Net network structure; the attention mechanism module comprises an attention mechanism network including a compression extraction branch (corresponding to the first network branch above) and a residual scaling branch (corresponding to the second network branch above); the limit constraint module includes a limit constraint model that employs four densely connected BottleNeck network structures.
In a specific application, the server may acquire a fundus image uploaded by the fundus image capturing apparatus and input the fundus image, as the target fundus image to be quality-controlled, into the fundus image quality control system shown in fig. 6. The fundus image quality control system can effectively utilize information among the six discrimination models and expand the difference among the image features output by each discrimination model, thereby reducing the intra-group error among the models and enlarging the inter-group distinction among the models.
After the target fundus image is input into the fundus image quality control system, each discrimination model in the discrimination module correspondingly analyzes and processes the input target fundus image to obtain image characteristics for discriminating the target fundus image to belong to the quality type corresponding to the target fundus image; specifically, the clear discrimination model outputs image features for discriminating that the target fundus image belongs to a clear type of fundus image, the refractive interstitial turbidity discrimination model outputs image features for discriminating that the target fundus image belongs to a refractive interstitial turbidity type, the global exposure discrimination model outputs image features for discriminating that the target fundus image belongs to a global exposure type, the local exposure discrimination model outputs image features for discriminating that the target fundus image belongs to a local exposure type, the large-area contamination discrimination model outputs image features for discriminating that the target fundus image belongs to a large-area contamination type, and the other-type discrimination model outputs image features for discriminating that the target fundus image does not belong to a fundus image type. Furthermore, the discrimination module further outputs the image features output by each discrimination model to the attention mechanism module.
After the attention mechanism network in the attention mechanism module superimposes all the image features, the features to be processed are obtained through a convolution layer with a convolution kernel size of 1x1 in the compression extraction branch; compressed features (1x1xc) are then obtained through a compression channel (formed by a global pooling layer) in the compression extraction branch, and attention weights are generated through an extraction layer (formed by two fully connected layers); channel multiplication is then performed on the attention weights and the features to be processed in the residual scaling branch to obtain the attention features. The attention features are then output to the limit constraint module.
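Assuming the compression extraction and residual scaling branches behave like a squeeze-and-excitation style block, the data flow can be sketched in NumPy as follows; all weight shapes and values are illustrative, not parameters from this application:

```python
import numpy as np

def squeeze_excite_attention(features, w1, b1, w2, b2):
    # features: (h, w, c) feature map to be processed.
    # Compression channel: global average pooling over spatial dims -> (c,)
    squeezed = features.mean(axis=(0, 1))
    # Extraction layer: two fully connected layers produce per-channel
    # attention weights in (0, 1) via ReLU then sigmoid.
    hidden = np.maximum(0.0, squeezed @ w1 + b1)
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))
    # Residual scaling branch: channel-wise multiplication with the
    # features to be processed yields the attention features.
    return features * weights  # broadcasts over (h, w, c)
```

With zero weights and biases the sigmoid outputs 0.5 for every channel, so each channel of the input is simply halved — a quick sanity check of the channel multiplication.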
The limit constraint model in the limit constraint module is composed of four densely connected BottleNeck network structures; by utilizing the channel compression property of the 1x1 convolution, the original 3x3 convolution is changed into a 1x1->3x3->1x1 convolution, which greatly reduces the calculation amount of the network. A global pooling module is added at the end to obtain the prediction probability corresponding to each quality type, and the model is secondarily constrained by a six-class cross entropy loss. In this way, the distance between the features can be enlarged and the image features in different distributions are approximated to similar distributions, thereby improving the classification capability of the fundus image quality control system on ambiguous fundus images.
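The claimed reduction in calculation amount from the 1x1->3x3->1x1 bottleneck can be checked with a rough multiplication count; the channel and feature-map sizes below are illustrative assumptions, not values given in this application:

```python
def conv_mults(c_in, c_out, k, h, w):
    # Multiplications for a k x k convolution on an h x w feature map
    # (stride 1, same padding, bias ignored) - a rough FLOP proxy.
    return h * w * c_in * c_out * k * k

# Illustrative sizes: 256 channels on a 32x32 map, with the 1x1
# convolutions compressing to 64 channels inside the bottleneck.
plain_3x3 = conv_mults(256, 256, 3, 32, 32)
bottleneck = (conv_mults(256, 64, 1, 32, 32)    # 1x1 compress
              + conv_mults(64, 64, 3, 32, 32)   # 3x3 on compressed channels
              + conv_mults(64, 256, 1, 32, 32)) # 1x1 expand
# The bottleneck needs several times fewer multiplications than the
# plain 3x3 convolution at these sizes.
```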
After the server obtains the quality type prediction result output by the fundus image quality control system, the quality type corresponding to the maximum probability in the quality type prediction result can be determined, the quality type is the quality type of the target fundus image, if the quality type belongs to the preset qualified quality type, the target fundus image can be determined to be qualified, and if the quality type belongs to the preset unqualified quality type, the target fundus image can be determined to be unqualified. When the target fundus image is determined to be unqualified, the server can correspondingly prompt relevant workers of the reason why the target fundus image is unqualified.
Experimental research proves that the image quality control method provided by the embodiment of the present application can effectively help fundus image capturing technicians complete high-quality fundus image acquisition. Before the image quality control method is used, a fundus image capturing technician needs, on average, to upload fundus images multiple times before a doctor obtains a fundus image that can be used as a diagnostic reference; after the image quality control method is used, the doctor can obtain a fundus image that can be used as a diagnostic reference with far fewer uploads per capture, so that the working efficiency of the doctor is greatly improved.
The experimental results obtained by using the image quality control method provided by the embodiment of the present application are shown in fig. 7. By adopting the image quality control method provided by the embodiment of the present application, the classification quality scores (F1) of all categories can be effectively improved; in particular, in the clear and refractive interstitial turbidity categories, the classification quality scores can be improved to more than 0.8 while a recall rate of about 90% is obtained, and in the local exposure and non-fundus-image categories, which have smaller sample sizes, an accuracy rate of more than 86% is obtained. In addition, different from a traditional classification network, the image quality control method provided by the embodiment of the present application achieves an improvement of at least 20% in the classification accuracy of the other category.
It should be noted that the image quality control method provided in the embodiment of the present application can be applied to quality control of fundus images, and can also be applied to other scenes in which quality control of images is required, such as a scene in which product quality monitoring is performed based on images in the field of product production, and a scene in which quality control of images of other organs is performed in the field of medical care, and the like. The following describes image quality control methods applied to other scenes by embodiments.
Referring to fig. 8, fig. 8 is a schematic flowchart of an artificial intelligence-based image quality control method according to an embodiment of the present disclosure. For convenience of description, the following embodiments are described taking a server as an execution subject. As shown in fig. 8, the method comprises the steps of:
step 801: and acquiring a target image to be quality controlled.
In a scene of quality control of other images, a corresponding image capturing device captures an image and sends the captured image to the server; after obtaining the image sent by the image capturing device, the server takes the image as the target image to be quality-controlled, performs quality control on the target image, and judges whether the target image is a qualified image.
Step 802: and obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutual exclusion probabilities that the target image belongs to different quality types.
After the server acquires a target image, inputting the target image into an image quality control system which operates per se, analyzing and processing the target image by using the image quality control system, and further acquiring an output result of the image quality control system as a quality type prediction result of the target image, wherein the quality type prediction result comprises mutually exclusive probabilities that the target image belongs to different quality types, namely the quality type prediction result comprises probabilities that the target image belongs to different quality types, and the sum of the probabilities is 1.
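The mutual exclusion constraint described above — probabilities over all quality types that sum to 1 — is exactly what a softmax output produces. A minimal sketch, with illustrative logit values:

```python
import numpy as np

def quality_type_probabilities(logits):
    # Softmax: turns per-type scores into mutually exclusive probabilities,
    # so the probabilities over all quality types sum to 1.
    z = np.asarray(logits, dtype=float)
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative logits for three quality types: the probabilities sum to 1
# and the largest logit yields the largest probability.
probs = quality_type_probabilities([2.0, 0.5, -1.0])
```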
It should be noted that the image quality control system is obtained by training in an end-to-end training mode based on an image training sample set, and includes a discrimination module, an attention mechanism module, and a restriction constraint module. The discrimination module comprises at least two discrimination models, each discrimination model is used for extracting corresponding image characteristics aiming at an input target image, and the discrimination module further outputs the image characteristics extracted by each discrimination model to the attention mechanism module; the attention mechanism module is composed of an attention mechanism network, the attention mechanism network can extract attention characteristics based on the image characteristics output by the discrimination module, specifically, the attention mechanism network can adaptively enhance the weight of the image characteristics which have a large influence on the prediction result so as to obtain the attention characteristics, and then the attention mechanism module outputs the attention characteristics extracted by the attention mechanism network to the limitation constraint module; the limiting and constraining module is composed of a limiting and constraining model, attention features output by the attention mechanism module can be fused through the limiting and constraining model, distances among the image features are enlarged, the image features in different distributions are approximated to similar distributions, and therefore the mutual exclusion probability that the target image belongs to different quality types is obtained.
In a possible implementation manner, the determination module may include a first determination model and a second determination model, where the first determination model is configured to extract, for a target image, an image feature for determining that the target image belongs to a qualified quality type; and the second judging model is used for extracting image features for judging whether the target image belongs to the unqualified quality type aiming at the target image.
It should be noted that, in different application scenarios, the measurement standards specifically corresponding to the qualified quality type and the unqualified quality type are different, and accordingly, the image features to be actually extracted by the first discrimination model and the second discrimination model are different, and no limitation is made on the image features to be actually extracted by the first discrimination model and the second discrimination model.
It should be understood that, in practical applications, the first and second discrimination models may adopt the same network structure, or may adopt different network structures, and no limitation is made to the network structures specifically adopted by the first and second discrimination models. In addition, the determination module may include more determination models besides the first determination model and the second determination model, and the number of the determination models included in the determination module is not limited at all.
It should be noted that the structure of the image quality control system in the embodiment of the present application is similar to that of the fundus image quality control system in the embodiment shown in fig. 2, and for details of the specific structure of the image quality control system, reference may be made to the description related to the fundus image quality control system in the embodiment shown in fig. 2, and details thereof are not repeated here. The training process of the image quality control system in the embodiment of the present application is similar to the training process of the fundus image quality control system in the embodiment shown in fig. 5, except that the sample images based on the training process are different, and for the training process of the image quality control system, reference may be made to the flow shown in fig. 5, which is not described herein again.
Step 803: and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
And after obtaining the quality type prediction result determined by the image quality control system, the server can further determine whether the target image is qualified according to the quality type prediction result. Specifically, the quality type prediction result includes mutual exclusion probabilities that the target image belongs to different quality types, so the server may first determine the quality type corresponding to the maximum probability in the quality type prediction result, where the quality type is the quality type corresponding to the target image, and further determine whether the target image is qualified according to the quality type corresponding to the target image.
During specific implementation, the server can select a quality type corresponding to the maximum probability from the quality type prediction result of the target image, and when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, the server can determine that the target image is qualified; when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, the server can determine that the target image is unqualified.
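The qualification decision described above can be sketched as follows; the quality type names and the qualified set are illustrative assumptions, not values prescribed by this application:

```python
def judge_image(prediction, qualified_types):
    # Select the quality type with the maximum mutually exclusive
    # probability, then check it against the preset set of qualified types.
    quality_type = max(prediction, key=prediction.get)
    return quality_type, quality_type in qualified_types

# Illustrative quality type prediction result for a target image:
pred = {"clear": 0.7, "global_exposure": 0.2, "not_fundus": 0.1}
judge_image(pred, qualified_types={"clear"})  # -> ("clear", True)
```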
In the case that the target image is determined to be unqualified, the server can further make a relevant prompt, namely prompt the photographer of the target image that the target image is unqualified and the reason of the unqualified target image, so that the photographer of the target image can conveniently shoot qualified images again based on the reason.
The image quality control method based on artificial intelligence discriminates the target image to be quality-controlled by utilizing an image quality control system obtained by training based on a machine learning algorithm, determines the quality type prediction result of the target image, namely determines the mutual exclusion probabilities that the target image belongs to different quality types, and accordingly determines whether the target image is qualified based on the quality type prediction result; the image quality control system comprising the plurality of discrimination models, the attention mechanism network and the limit constraint model can intelligently and accurately identify whether each image to be quality-controlled is a qualified image, thereby realizing accurate quality control on the image at the front end.
For the above-described image quality control method based on artificial intelligence, the present application also provides a corresponding image quality control device based on artificial intelligence, so as to make the above-described image quality control method based on artificial intelligence applied and implemented in practice.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an artificial intelligence-based image quality control apparatus 900 corresponding to the artificial intelligence-based image quality control method shown in fig. 2, the apparatus including:
an acquiring module 901, configured to acquire a target fundus image to be quality controlled;
a processing module 902, configured to obtain, by a fundus image quality control system, a quality type prediction result of the target fundus image, where the quality type prediction result of the target fundus image includes mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module which are obtained based on training of a fundus image training sample set; the discrimination module is used for extracting image characteristics through at least two discrimination models and outputting the image characteristics to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics through an attention mechanism network aiming at the image characteristics and outputting the attention characteristics to the limiting constraint module; the limiting and constraining module is used for fusing the attention features through a limiting and constraining model and outputting mutual exclusion probabilities that the images belong to different quality types;
a determining module 903, configured to determine whether the target fundus image is qualified according to a quality type prediction result of the target fundus image.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the determining module 903 is specifically configured to:
selecting a quality type corresponding to the maximum probability from the quality type prediction result of the target fundus image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target fundus image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target fundus image is unqualified.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, referring to fig. 10, fig. 10 is a schematic structural diagram of another image quality control apparatus 1000 provided in the embodiment of the present application, and the apparatus further includes:
and the prompting module 1001 is used for acquiring the reason why the target fundus image is unqualified when the target fundus image is determined to be unqualified, and displaying information according to the reason to prompt a user that the target fundus image is unqualified and of the reason why it is unqualified.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, referring to fig. 11, fig. 11 is a schematic structural diagram of another image quality control apparatus 1100 provided in the embodiment of the present application, and the apparatus further includes:
a sample acquisition module 1101, configured to acquire the fundus image training sample set, where the fundus image training sample set includes a plurality of fundus image samples and a labeling quality type corresponding to each fundus image sample;
an initialization module 1102, configured to perform parameter initialization on a pre-constructed fundus image quality control system;
a training module 1103, configured to train parameters on each model in the fundus image quality control system with initialized parameters according to the fundus image training sample set until the fundus image quality control system meeting the training end condition is obtained through training.
Optionally, on the basis of the image quality control apparatus shown in fig. 11, the training module 1103 is specifically configured to:
inputting fundus image samples in the fundus image training sample set into the fundus image quality control system initialized by the parameters, acquiring the probability that the fundus image samples output by the at least two discrimination models in the fundus image quality control system belong to different quality types, and acquiring the mutual exclusion probability that the fundus image samples output by the restriction constraint model in the fundus image quality control system belong to different quality types;
adjusting parameters on the at least two discrimination models respectively through two-class cross entropy loss according to the probability that the fundus image sample belongs to different quality types;
and adjusting parameters on each model in the fundus image quality control system through N-classification cross entropy loss according to the mutual exclusion probability that the fundus image samples belong to different quality types, and repeatedly performing iterative training until the fundus image quality control system meeting the training end condition is obtained through training, wherein the N value is the number of the discrimination models.
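Under the assumption that the losses named above are standard cross-entropy losses, the per-model two-class loss and the joint N-class loss can be sketched as follows (an illustration of the loss formulas, not this application's actual training code):

```python
import numpy as np

def binary_cross_entropy(p, y):
    # Two-class cross-entropy applied to a single discrimination model's
    # output probability p for ground-truth label y in {0, 1}.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def n_class_cross_entropy(probs, label_index):
    # N-class cross-entropy on the mutually exclusive probabilities output
    # by the limit constraint model: negative log-probability of the
    # labeled quality type (N = number of discrimination models).
    return -np.log(probs[label_index])
```

With a uniform four-type prediction, for example, the N-class loss equals log 4, and a 0.5 prediction under the two-class loss equals log 2 — the values both losses take when the model is maximally uncertain.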
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the at least two discriminant models in the discriminant module adopt a mobile-Net network structure.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the attention mechanism network adopted by the attention mechanism module includes a first network branch and a second network branch, where the first network branch includes a convolutional layer, a global pooling layer, and a full connection layer, and is used to extract attention weights for the input image features; the second network branch comprises a channel multiplier, and the channel multiplier is used for carrying out channel multiplication on the attention weight and the input image characteristic to obtain the attention characteristic.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the constraint model adopts four closely-connected BottleNeck network structures.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the at least two discrimination models include six discrimination models, where the six discrimination models include: a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model; the clear discrimination model is used for extracting, for an input image, image features for discriminating that the image belongs to a clear type of fundus image; the refractive interstitial turbidity discrimination model is used for extracting, for the input image, image features for discriminating that the image belongs to a refractive interstitial turbidity type; the global exposure discrimination model is used for extracting, for the input image, image features for discriminating that the image belongs to a global exposure type; the local exposure discrimination model is used for extracting, for the input image, image features for discriminating that the image belongs to a local exposure type; the large-area contamination discrimination model is used for extracting, for the input image, image features for discriminating that the image belongs to a large-area contamination type; the other-category discrimination model is used for extracting, for the input image, image features for discriminating that the image does not belong to a fundus image type.
The image quality control device based on artificial intelligence can judge the quality of the fundus images at the front end, so that qualified fundus images are guaranteed to be provided for a fundus AI screening system at the rear end, and the confidence coefficient of the fundus AI screening system is improved. Specifically, the image quality control device provided in the embodiment of the present application discriminates a target fundus image to be quality-controlled by using a fundus image quality control system obtained by training based on a machine learning algorithm, determines a quality type prediction result of the target fundus image, that is, determines a mutual exclusion probability that the target fundus image belongs to different quality types, and accordingly determines whether the target fundus image is qualified or not based on the quality type prediction result; the fundus image quality control system comprising the plurality of discrimination models, the attention mechanism network and the limiting constraint model can intelligently and accurately identify whether each fundus image to be quality-controlled is a qualified image or not, and realize accurate quality control on the fundus images at the front end.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an artificial intelligence based image quality control apparatus 1200 corresponding to the artificial intelligence based image quality control method shown in fig. 8, and the apparatus includes:
an obtaining module 1201, configured to obtain a target image to be quality controlled;
a processing module 1202, configured to obtain, by an image quality control system, a quality type prediction result of the target image, where the quality type prediction result of the target image includes mutual exclusion probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, wherein the discrimination module, the attention mechanism module and the limit constraint module are obtained by training based on an image training sample set; the discrimination module is used for extracting image characteristics through at least two discrimination models and outputting the image characteristics to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics through an attention mechanism network aiming at the image characteristics and outputting the attention characteristics to the limiting constraint module; the limiting and constraining module is used for fusing the attention features through a limiting and constraining model and outputting mutual exclusion probabilities that the images belong to different quality types;
a determining module 1203, configured to determine whether the target image is qualified according to a quality type prediction result of the target image.
Optionally, on the basis of the image quality control apparatus shown in fig. 12, the determining module 1203 is specifically configured to:
selecting a quality type corresponding to the maximum probability from the quality type prediction result of the target image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target image is unqualified.
Optionally, on the basis of the image quality control apparatus shown in fig. 12, the discrimination module includes a first discrimination model and a second discrimination model, where the first discrimination model is used for extracting, for an input image, image features for discriminating that the image belongs to a qualified quality type; the second discrimination model is used for extracting, for the input image, image features for discriminating that the image belongs to an unqualified quality type.
The image quality control device based on artificial intelligence distinguishes the target image to be quality controlled by utilizing an image quality control system obtained by training based on a machine learning algorithm, determines the quality type prediction result of the target image, namely determines the mutual exclusion probability that the target image belongs to different quality types, and can correspondingly determine whether the target image is qualified or not based on the quality type prediction result; the image quality control system comprising the plurality of discrimination models, the attention mechanism network and the limiting constraint model can intelligently and accurately identify whether each image to be quality-controlled is a qualified image or not, and realize accurate quality control on the images at the front end.
The embodiment of the present application further provides a server and a terminal device for quality control of an image, and the server and the terminal device for quality control of an image provided in the embodiment of the present application will be introduced from the perspective of hardware materialization.
Referring to fig. 13, fig. 13 is a schematic diagram of a server structure provided by an embodiment of the present application, where the server 1300 may have a relatively large difference due to different configurations or performances, and may include one or more Central Processing Units (CPUs) 1322 (e.g., one or more processors) and a memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) storing an application program 1342 or data 1344. Memory 1332 and storage medium 1330 may be, among other things, transitory or persistent storage. The program stored on the storage medium 1330 may include one or more modules (not shown), each of which may include a sequence of instructions operating on a server. Still further, the central processor 1322 may be arranged in communication with the storage medium 1330, executing a sequence of instruction operations in the storage medium 1330 on the server 1300.
The server 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input-output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 13.
CPU1322 is configured to perform the following steps:
acquiring a target fundus image to be quality controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module which are obtained based on training of a fundus image training sample set; the discrimination module is used for extracting image characteristics through at least two discrimination models and outputting the image characteristics to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics through an attention mechanism network aiming at the image characteristics and outputting the attention characteristics to the limiting constraint module; the limiting and constraining module is used for fusing the attention features through a limiting and constraining model and outputting mutual exclusion probabilities that the images belong to different quality types;
and determining whether the target eye fundus image is qualified or not according to the quality type prediction result of the target eye fundus image.
Or, the following steps are executed:
acquiring a target image to be quality-controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutually exclusive probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, which are obtained by training on an image training sample set; the discrimination module is used for extracting image features through at least two discrimination models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention features from the image features through an attention mechanism network and outputting the attention features to the limit constraint module; and the limit constraint module is used for fusing the attention features through a limit constraint model and outputting mutually exclusive probabilities that the image belongs to different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
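The three-module pipeline in the steps above can be illustrated end to end. The following is a minimal numpy sketch, not the patented implementation: the discrimination models are stand-in feature extractors (the description elsewhere suggests MobileNet backbones), the attention step applies pooled sigmoid weights channel-wise, and the limit constraint step fuses the attention features into mutually exclusive probabilities via a softmax; all function names, dimensions, and numeric choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminant_features(image, n_models=3, feat_dim=8):
    # Stand-ins for the "at least two discrimination models": each maps the
    # input image to a feature vector (real models would be CNN backbones).
    return [np.tanh(image.mean() + rng.standard_normal(feat_dim))
            for _ in range(n_models)]

def attention(features):
    # Attention mechanism module: derive a per-channel weight from a pooled
    # statistic of each feature (a crude stand-in for the pooling + fully
    # connected branch), then multiply channel-wise with the input feature.
    out = []
    for f in features:
        w = 1.0 / (1.0 + np.exp(-f.mean() * np.ones_like(f)))  # sigmoid weights
        out.append(w * f)  # channel-wise multiplication -> attention feature
    return out

def constraint_fusion(features):
    # Limit constraint module: fuse the attention features and emit mutually
    # exclusive probabilities, one per quality type, via a softmax.
    logits = np.array([f.sum() for f in features])
    e = np.exp(logits - logits.max())
    return e / e.sum()

image = rng.random((32, 32))            # toy grayscale "image"
probs = constraint_fusion(attention(discriminant_features(image)))
print(probs)  # three mutually exclusive probabilities summing to 1
```

Because the final fusion is a softmax, the output probabilities are guaranteed to be non-negative and to sum to one, which is what makes the quality types mutually exclusive.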
Optionally, the CPU 1322 may also perform the method steps of any implementation of the image quality control method in the embodiments of the present application.
Referring to fig. 14, fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only the parts related to the embodiments of the present application are shown, and specific technical details are not disclosed. The terminal may be any terminal device, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a computer, and the like. The following description takes a computer as an example:
fig. 14 is a block diagram of a partial structure of a computer related to the terminal provided in an embodiment of the present application. Referring to fig. 14, the computer includes: a radio frequency (RF) circuit 1410, a memory 1420, an input unit 1430, a display unit 1440, a sensor 1450, an audio circuit 1460, a wireless fidelity (WiFi) module 1470, a processor 1480, and a power supply 1490. Those skilled in the art will appreciate that the computer structure shown in fig. 14 is not limiting; the computer may include more or fewer components than those shown, combine some components, or use a different arrangement of components.
The memory 1420 may be used to store software programs and modules, and the processor 1480 executes various functional applications and data processing of the computer by running the software programs and modules stored in the memory 1420. The memory 1420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phone book, etc.) created according to the use of the computer. Further, the memory 1420 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1480 is the control center of the computer; it connects the various parts of the entire computer using various interfaces and lines, and performs the various functions of the computer and processes data by running or executing the software programs and/or modules stored in the memory 1420 and calling the data stored in the memory 1420, thereby monitoring the computer as a whole. Optionally, the processor 1480 may include one or more processing units; preferably, the processor 1480 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1480.
In the embodiment of the present application, the processor 1480 included in the terminal also has the following functions:
acquiring a target fundus image to be quality controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutually exclusive probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, which are obtained by training on a fundus image training sample set; the discrimination module is used for extracting image features through at least two discrimination models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention features from the image features through an attention mechanism network and outputting the attention features to the limit constraint module; and the limit constraint module is used for fusing the attention features through a limit constraint model and outputting mutually exclusive probabilities that the image belongs to different quality types;
and determining whether the target fundus image is qualified or not according to the quality type prediction result of the target fundus image.
Alternatively, the following functions are provided:
acquiring a target image to be quality-controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutually exclusive probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, which are obtained by training on an image training sample set; the discrimination module is used for extracting image features through at least two discrimination models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention features from the image features through an attention mechanism network and outputting the attention features to the limit constraint module; and the limit constraint module is used for fusing the attention features through a limit constraint model and outputting mutually exclusive probabilities that the image belongs to different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
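The final determination step above reduces to selecting the quality type with the maximum of the mutually exclusive probabilities and checking whether it falls in a preset set of qualified quality types. A minimal sketch of that decision rule follows; the quality-type labels and the choice of which types count as qualified are illustrative assumptions, not values fixed by the present application.

```python
# Hypothetical quality-type labels, one per mutually exclusive probability.
QUALITY_TYPES = ["clear", "media opacity", "global exposure",
                 "local exposure", "large-area contamination", "other"]
QUALIFIED_TYPES = {"clear"}  # assumed preset qualified quality types

def is_qualified(mutex_probs):
    """Pick the quality type with the maximum probability and check it
    against the preset qualified set."""
    best = max(range(len(mutex_probs)), key=lambda i: mutex_probs[i])
    label = QUALITY_TYPES[best]
    return label, label in QUALIFIED_TYPES

label, ok = is_qualified([0.7, 0.1, 0.05, 0.05, 0.05, 0.05])
print(label, ok)  # clear True
```

Because the probabilities are mutually exclusive, a single argmax suffices; no per-type threshold tuning is needed for the qualified/unqualified decision.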
Optionally, the processor 1480 is further configured to execute the steps of any implementation manner of the image quality control method provided in this embodiment of the application.
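The quality control systems above are obtained by training on labeled sample sets; per the training procedure described in the claims, each discrimination model is supervised with a two-class (binary) cross entropy loss on its own output, while the fused, mutually exclusive output is supervised with an N-class cross entropy loss. A sketch of that joint objective for a single sample follows; the raw scores and the way they are shared between the two losses are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_ce(p, y):
    # Two-class cross entropy for one discrimination model's probability.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def n_class_ce(probs, label):
    # N-class cross entropy on the fused, mutually exclusive output.
    return -np.log(probs[label])

# Hypothetical raw scores from N = 3 discrimination models for one sample
# whose annotated quality type is class 1.
scores = np.array([-1.0, 2.0, 0.5])
label = 1

# Per-model binary loss: each model is trained to fire only on its own type.
per_model = sum(binary_ce(sigmoid(s), float(i == label))
                for i, s in enumerate(scores))
# Fused N-class loss on the mutually exclusive probabilities.
fused = n_class_ce(softmax(scores), label)
total = per_model + fused  # joint objective driving the iterative updates
print(total > 0)
```

The binary terms shape each discrimination model individually, while the N-class term couples all models through the shared softmax, enforcing the mutual exclusivity of the final prediction.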
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is configured to execute any implementation of the artificial intelligence based image quality control method described in the foregoing embodiments.
An embodiment of the present application further provides a computer program product including instructions which, when run on a computer, cause the computer to execute any implementation of the artificial intelligence based image quality control method described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application essentially, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. An image quality control method based on artificial intelligence is characterized by comprising the following steps:
acquiring a target fundus image to be quality controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutually exclusive probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, which are obtained by training on a fundus image training sample set; the discrimination module is used for extracting image features through at least two discrimination models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention features from the image features through an attention mechanism network and outputting the attention features to the limit constraint module; and the limit constraint module is used for fusing the attention features through a limit constraint model and outputting mutually exclusive probabilities that the image belongs to different quality types;
and determining whether the target fundus image is qualified or not according to the quality type prediction result of the target fundus image.
2. The method according to claim 1, wherein said determining whether the target fundus image is qualified according to the quality type prediction result of the target fundus image comprises:
selecting a quality type corresponding to the maximum probability from the quality type prediction result of the target fundus image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target fundus image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target fundus image is unqualified.
3. The method of claim 1, further comprising:
and when the target fundus image is determined to be unqualified, acquiring the reason why the target fundus image is unqualified, and displaying an information prompt according to the reason, so as to prompt the user that the target fundus image is unqualified and the reason why it is unqualified.
4. The method of claim 1, further comprising:
acquiring the fundus image training sample set, wherein the fundus image training sample set comprises a plurality of fundus image samples and labeling quality types corresponding to the fundus image samples;
initializing parameters of a pre-constructed fundus image quality control system;
and training parameters on each model in the fundus image quality control system with initialized parameters according to the fundus image training sample set until the fundus image quality control system meeting the training end conditions is obtained by training.
5. The method as claimed in claim 4, wherein training the parameters on each model in the fundus image quality control system with initialized parameters according to the fundus image training sample set, until the fundus image quality control system meeting the training end condition is obtained by training, comprises:
inputting fundus image samples in the fundus image training sample set into the fundus image quality control system with initialized parameters, acquiring the probabilities, output by the at least two discrimination models in the fundus image quality control system, that the fundus image samples belong to different quality types, and acquiring the mutually exclusive probabilities, output by the limit constraint model in the fundus image quality control system, that the fundus image samples belong to different quality types;
adjusting the parameters on the at least two discrimination models respectively through a two-class cross entropy loss according to the probabilities that the fundus image samples belong to different quality types;
and adjusting the parameters on each model in the fundus image quality control system through an N-class cross entropy loss according to the mutually exclusive probabilities that the fundus image samples belong to different quality types, and repeating the iterative training until the fundus image quality control system meeting the training end condition is obtained, wherein N is the number of discrimination models.
6. The method according to any one of claims 1 to 4, wherein the at least two discrimination models in the discrimination module employ a MobileNet network architecture.
7. The method according to any one of claims 1 to 4, wherein the attention mechanism network employed by the attention mechanism module comprises a first network branch and a second network branch, the first network branch comprising a convolutional layer, a global pooling layer and a fully connected layer for extracting attention weights for the input image features; the second network branch comprises a channel multiplier, and the channel multiplier is used for carrying out channel-wise multiplication of the attention weights and the input image features to obtain the attention features.
8. The method of any one of claims 1 to 4, wherein the limit constraint model employs four densely connected Bottleneck network structures.
9. The method of any one of claims 1 to 4, wherein the discrimination module comprises six discrimination models, the six discrimination models comprising: a clear discrimination model, a refractive media opacity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model, and an other-category discrimination model; wherein,
the clear discrimination model is used for extracting, for an input image, image features for discriminating whether the fundus image belongs to the clear type; the refractive media opacity discrimination model is used for extracting, for an input image, image features for discriminating whether the image belongs to the refractive media opacity type; the global exposure discrimination model is used for extracting, for an input image, image features for discriminating that the image belongs to the global exposure type; the local exposure discrimination model is used for extracting, for an input image, image features for discriminating that the image belongs to the local exposure type; the large-area contamination discrimination model is used for extracting, for an input image, image features for discriminating that the image belongs to the large-area contamination type; and the other-category discrimination model is used for extracting, for an input image, image features for discriminating that the image does not belong to the fundus image type.
10. An image quality control method based on artificial intelligence is characterized by comprising the following steps:
acquiring a target image to be quality-controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutually exclusive probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, which are obtained by training on an image training sample set; the discrimination module is used for extracting image features through at least two discrimination models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention features from the image features through an attention mechanism network and outputting the attention features to the limit constraint module; and the limit constraint module is used for fusing the attention features through a limit constraint model and outputting mutually exclusive probabilities that the image belongs to different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
11. The method of claim 10, wherein the determining whether the target image is qualified according to the quality type prediction result of the target image comprises:
selecting a quality type corresponding to the maximum probability from the quality type prediction result of the target image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target image is unqualified.
12. The method according to claim 10, wherein the discrimination module comprises a first discrimination model and a second discrimination model; the first discrimination model is used for extracting, for an input image, image features for discriminating that the image belongs to a qualified quality type; the second discrimination model is used for extracting, for an input image, image features for discriminating that the image belongs to an unqualified quality type.
13. An image quality control device based on artificial intelligence, comprising:
the acquisition module is used for acquiring a target fundus image to be quality controlled;
the processing module is used for obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutually exclusive probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a limit constraint module, which are obtained by training on a fundus image training sample set; the discrimination module is used for extracting image features through at least two discrimination models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention features from the image features through an attention mechanism network and outputting the attention features to the limit constraint module; and the limit constraint module is used for fusing the attention features through a limit constraint model and outputting mutually exclusive probabilities that the image belongs to different quality types;
and the determining module is used for determining whether the target fundus image is qualified or not according to the quality type prediction result of the target fundus image.
14. An apparatus, comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is adapted to perform the method of any of claims 1 to 12 in accordance with the computer program.
15. A computer-readable storage medium for storing a computer program for performing the method of any one of claims 1 to 12.
CN201910745023.8A 2019-08-13 2019-08-13 Image quality control method, device, equipment and storage medium based on artificial intelligence Active CN110458829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910745023.8A CN110458829B (en) 2019-08-13 2019-08-13 Image quality control method, device, equipment and storage medium based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910745023.8A CN110458829B (en) 2019-08-13 2019-08-13 Image quality control method, device, equipment and storage medium based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN110458829A true CN110458829A (en) 2019-11-15
CN110458829B CN110458829B (en) 2024-01-30

Family

ID=68486249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910745023.8A Active CN110458829B (en) 2019-08-13 2019-08-13 Image quality control method, device, equipment and storage medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN110458829B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815606A (en) * 2020-07-09 2020-10-23 浙江大华技术股份有限公司 Image quality evaluation method, storage medium, and computing device
CN111832601A (en) * 2020-04-13 2020-10-27 北京嘀嘀无限科技发展有限公司 State detection method, model training method, storage medium and electronic device
CN112690809A (en) * 2020-02-04 2021-04-23 首都医科大学附属北京友谊医院 Method, device, server and storage medium for determining equipment abnormality reason
CN113128373A (en) * 2021-04-02 2021-07-16 西安融智芙科技有限责任公司 Color spot scoring method based on image processing, color spot scoring device and terminal equipment
CN113449774A (en) * 2021-06-02 2021-09-28 北京鹰瞳科技发展股份有限公司 Fundus image quality control method, device, electronic apparatus, and storage medium
CN113487608A (en) * 2021-09-06 2021-10-08 北京字节跳动网络技术有限公司 Endoscope image detection method, endoscope image detection device, storage medium, and electronic apparatus
CN115953622A (en) * 2022-12-07 2023-04-11 广东省新黄埔中医药联合创新研究院 Image classification method combining attention mutual exclusion regularization
WO2023155488A1 (en) * 2022-02-21 2023-08-24 浙江大学 Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
CN117455970A (en) * 2023-12-22 2024-01-26 山东科技大学 Registration method of airborne laser bathymetry and multispectral satellite images based on feature fusion
CN119477899A (en) * 2025-01-10 2025-02-18 厦门眼科中心有限公司 A method for identifying and grading diabetic retinopathy based on wide-angle fundus images

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110096313A1 (en) * 2009-10-26 2011-04-28 International Business Machines Corporation Constrained Optimization Of Lithographic Source Intensities Under Contingent Requirements
US20180157063A1 (en) * 2016-12-02 2018-06-07 Carl Zeiss Vision International Gmbh Method, a system and a computer readable medium for optimizing an optical system, and a method of evaluating attentional performance
CN108229580A (en) * 2018-01-26 2018-06-29 浙江大学 Sugared net ranking of features device in a kind of eyeground figure based on attention mechanism and Fusion Features
WO2018121690A1 (en) * 2016-12-29 2018-07-05 北京市商汤科技开发有限公司 Object attribute detection method and device, neural network training method and device, and regional detection method and device
CN109146856A (en) * 2018-08-02 2019-01-04 深圳市华付信息技术有限公司 Picture quality assessment method, device, computer equipment and storage medium
CN109191457A (en) * 2018-09-21 2019-01-11 中国人民解放军总医院 A kind of pathological image quality validation recognition methods
CN109285149A (en) * 2018-09-04 2019-01-29 杭州比智科技有限公司 Appraisal procedure, device and the calculating equipment of quality of human face image
CN109360178A (en) * 2018-10-17 2019-02-19 天津大学 A reference-free stereo image quality assessment method based on fusion images
CN109389587A (en) * 2018-09-26 2019-02-26 上海联影智能医疗科技有限公司 A kind of medical image analysis system, device and storage medium
CN109543606A (en) * 2018-11-22 2019-03-29 中山大学 A kind of face identification method that attention mechanism is added
CN109543719A (en) * 2018-10-30 2019-03-29 浙江大学 Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model
CN109685116A (en) * 2018-11-30 2019-04-26 腾讯科技(深圳)有限公司 Description information of image generation method and device and electronic device
CN109815965A (en) * 2019-02-13 2019-05-28 腾讯科技(深圳)有限公司 A kind of image filtering method, device and storage medium
CN109978165A (en) * 2019-04-04 2019-07-05 重庆大学 A kind of generation confrontation network method merged from attention mechanism
CN110009614A (en) * 2019-03-29 2019-07-12 北京百度网讯科技有限公司 Method and apparatus for outputting information

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110096313A1 (en) * 2009-10-26 2011-04-28 International Business Machines Corporation Constrained Optimization Of Lithographic Source Intensities Under Contingent Requirements
US20180157063A1 (en) * 2016-12-02 2018-06-07 Carl Zeiss Vision International Gmbh Method, a system and a computer readable medium for optimizing an optical system, and a method of evaluating attentional performance
WO2018121690A1 (en) * 2016-12-29 2018-07-05 北京市商汤科技开发有限公司 Object attribute detection method and device, neural network training method and device, and regional detection method and device
CN108229580A (en) * 2018-01-26 2018-06-29 浙江大学 Sugared net ranking of features device in a kind of eyeground figure based on attention mechanism and Fusion Features
CN109146856A (en) * 2018-08-02 2019-01-04 深圳市华付信息技术有限公司 Picture quality assessment method, device, computer equipment and storage medium
CN109285149A (en) * 2018-09-04 2019-01-29 杭州比智科技有限公司 Appraisal procedure, device and the calculating equipment of quality of human face image
CN109191457A (en) * 2018-09-21 2019-01-11 中国人民解放军总医院 A kind of pathological image quality validation recognition methods
CN109389587A (en) * 2018-09-26 2019-02-26 上海联影智能医疗科技有限公司 A kind of medical image analysis system, device and storage medium
CN109360178A (en) * 2018-10-17 2019-02-19 天津大学 A reference-free stereo image quality assessment method based on fusion images
CN109543719A (en) * 2018-10-30 2019-03-29 浙江大学 Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model
CN109543606A (en) * 2018-11-22 2019-03-29 中山大学 A kind of face identification method that attention mechanism is added
CN109685116A (en) * 2018-11-30 2019-04-26 腾讯科技(深圳)有限公司 Description information of image generation method and device and electronic device
CN109815965A (en) * 2019-02-13 2019-05-28 腾讯科技(深圳)有限公司 A kind of image filtering method, device and storage medium
CN110009614A (en) * 2019-03-29 2019-07-12 北京百度网讯科技有限公司 Method and apparatus for outputting information
CN109978165A (en) * 2019-04-04 2019-07-05 重庆大学 A kind of generation confrontation network method merged from attention mechanism

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHENG BIAN ET AL: ""Boundary Regularized Convolutional Neural Network for Layer Parsing of Breast Anatomy in Automated Whole Breast Ultrasound"", 《MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION − MICCAI 2017》 *
HAOHAN WANG ET AL: ""Select-Additive Learning: Improving Generalization in Multimodal Sentiment "", 《ARXIV》 *
YINGCHAO YU ET AL: ""A Parallel Feature Expansion Classification Model with Feature-based Attention Mechanism"", 《2018 IEEE 7TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE (DDCLS)》 *
WAN Cheng; YOU Qijing; SUN Jing; SHEN Jianxin; YU Qiuli: "Quality assessment of retinal fundus images based on FA-Net", Chinese Journal of Experimental Ophthalmology, vol. 37, no. 008 *
LIU Jiao: "Research on deep-learning-based multilingual short text classification methods", CNKI Outstanding Master's Theses Full-text Database *
WANG Fan; NI Jinping; DONG Tao; GUO Rongli: "No-reference image quality assessment method combining a visual attention mechanism and image sharpness", Journal of Applied Optics, no. 01 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112690809A (en) * 2020-02-04 2021-04-23 首都医科大学附属北京友谊医院 Method, device, server and storage medium for determining equipment abnormality reason
CN112690809B (en) * 2020-02-04 2021-09-24 首都医科大学附属北京友谊医院 Method, apparatus, server and storage medium for determining cause of equipment abnormality
CN111832601A (en) * 2020-04-13 2020-10-27 北京嘀嘀无限科技发展有限公司 State detection method, model training method, storage medium and electronic device
CN111815606A (en) * 2020-07-09 2020-10-23 浙江大华技术股份有限公司 Image quality evaluation method, storage medium, and computing device
CN111815606B (en) * 2020-07-09 2023-09-01 浙江大华技术股份有限公司 Image quality evaluation method, storage medium, and computing device
CN113128373B (en) * 2021-04-02 2024-04-09 西安融智芙科技有限责任公司 Image processing-based color spot scoring method, color spot scoring device and terminal equipment
CN113128373A (en) * 2021-04-02 2021-07-16 西安融智芙科技有限责任公司 Color spot scoring method based on image processing, color spot scoring device and terminal equipment
CN113449774A (en) * 2021-06-02 2021-09-28 北京鹰瞳科技发展股份有限公司 Fundus image quality control method, device, electronic apparatus, and storage medium
CN113487608A (en) * 2021-09-06 2021-10-08 北京字节跳动网络技术有限公司 Endoscope image detection method, endoscope image detection device, storage medium, and electronic apparatus
WO2023030370A1 (en) * 2021-09-06 2023-03-09 北京字节跳动网络技术有限公司 Endoscope image detection method and apparatus, storage medium, and electronic device
WO2023155488A1 (en) * 2022-02-21 2023-08-24 浙江大学 Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
CN115953622A (en) * 2022-12-07 2023-04-11 广东省新黄埔中医药联合创新研究院 Image classification method combining attention mutual exclusion regularization
CN115953622B (en) * 2022-12-07 2024-01-30 广东省新黄埔中医药联合创新研究院 Image classification method combining attention mutual exclusion regularization
CN117455970A (en) * 2023-12-22 2024-01-26 山东科技大学 Registration method of airborne laser bathymetry and multispectral satellite images based on feature fusion
CN117455970B (en) * 2023-12-22 2024-05-10 山东科技大学 Registration method of airborne laser bathymetry and multispectral satellite images based on feature fusion
CN119477899A (en) * 2025-01-10 2025-02-18 厦门眼科中心有限公司 A method for identifying and grading diabetic retinopathy based on wide-angle fundus images

Also Published As

Publication number Publication date
CN110458829B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN110458829B (en) Image quality control method, device, equipment and storage medium based on artificial intelligence
CN109919928B (en) Medical image detection method and device and storage medium
CN110517759B (en) Method for determining image to be marked, method and device for model training
US11900647B2 (en) Image classification method, apparatus, and device, storage medium, and medical electronic device
Zhou et al. Salient region detection via integrating diffusion-based compactness and local contrast
CN110414631B (en) Medical image-based lesion detection method, model training method and device
CN114820584B (en) Lung lesion localization device
CN109346159B (en) Case image classification method, device, computer equipment and storage medium
CN110941990A (en) Method and device for evaluating human body actions based on skeleton key points
CN110807495A (en) Multi-label classification method and device, electronic equipment and storage medium
CN110752028A (en) Image processing method, device, equipment and storage medium
CN110136809A (en) Medical image processing method and device, electronic medical equipment and storage medium
CN112949654B (en) Image detection method and related device and equipment
CN110675385A (en) An image processing method, apparatus, computer equipment and storage medium
CN108198177A (en) Image acquisition method, device, terminal and storage medium
CN111402217B (en) Image grading method, device, equipment and storage medium
CN114495241B (en) Image recognition method and device, electronic equipment and storage medium
CN107958453A (en) Detection method, device and the computer-readable storage medium of galactophore image lesion region
CN113724188B (en) A method for processing lesion images and related devices
CN111429414B (en) Artificial intelligence-based lesion image sample determination method and related device
CN113408332A (en) Video shot segmentation method, device, equipment and computer readable storage medium
CN111553880A (en) Model generation method, label labeling method, iris image quality evaluation method and device
Yang et al. A robust iris segmentation using fully convolutional network with dilated convolutions
CN111080592A (en) Rib extraction method and device based on deep learning
CN111598144A (en) Training method and device of image recognition model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant