CN110517766B - Method and device for identifying brain atrophy - Google Patents
Method and device for identifying brain atrophy
- Publication number
- CN110517766B (application number CN201910736591.1A)
- Authority
- CN
- China
- Prior art keywords
- key
- diameter
- determining
- key frame
- brain atrophy
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20152—Watershed segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Quality & Reliability (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides a method and a device for identifying brain atrophy, relating to the technical field of artificial intelligence. The method comprises the following steps: a key frame detection module is adopted to determine key frames in a brain image sequence, a key point detection module is adopted to detect key points in the key frames, and a first class of grading indexes is determined according to the key points in the key frames; an image segmentation module is adopted to segment the key frames, and a second class of grading indexes is determined; brain atrophy is then identified according to the first class of grading indexes and the second class of grading indexes. Because different detection modes are adopted according to the characteristics of different grading indexes, the detection precision of the grading indexes is improved, and identifying brain atrophy using both the first class and the second class of grading indexes likewise improves the identification precision. Moreover, a neural network model identifies brain atrophy automatically, so compared with manual measurement and calculation the method depends little on manual work and is highly efficient.
Description
Technical Field
The embodiment of the invention relates to the technical field of artificial intelligence, in particular to a method and a device for identifying brain atrophy.
Background
Brain atrophy is the imaging manifestation of organic lesions of brain tissue and an abnormal reduction of brain volume relative to normal, and can be caused by various factors such as heredity, nervous system diseases, poisoning and malnutrition. Atrophy of the cerebral cortex is the most common form; it manifests as flattening of the gyri, widening of the sulci, enlargement of the ventricles and cisterns, and reduced brain weight, with clinical manifestations of memory decline, reduced thinking ability, emotional instability and inability to concentrate, and in severe cases dementia, language disorders and loss of intelligence. About 15 million people worldwide die of brain-atrophy-related diseases every year, and the death rate increases year by year. Such diseases usually have a long course and a slow onset, so they are not easy to detect, and once obvious symptoms appear they cannot be reversed, seriously affecting the work and life of patients. Therefore, early diagnosis and treatment of brain atrophy play an important role in improving the survival rate and quality of life of brain atrophy patients.
Currently, the main imaging-based methods for diagnosing brain atrophy are brain tissue volume measurement and linear measurement. Linear measurement reflects the change in intracranial cerebrospinal fluid volume by measuring one-dimensional linear indexes of the ventricles, sulci and fissures, and thereby indirectly reflects the change in brain parenchyma volume. Because the measurement sites are clear and fixed and the method is easy to implement, it is widely used clinically. However, it relies on manual measurement and calculation by doctors, and is therefore highly subjective and inefficient.
Disclosure of Invention
The embodiment of the invention provides a method and a device for identifying brain atrophy, so as to solve the problems of strong subjectivity and low efficiency caused by the reliance of existing imaging-based brain atrophy diagnosis on manual measurement and calculation by doctors.
In one aspect, embodiments of the present invention provide a method for identifying brain atrophy, including:
determining a key frame in the brain image sequence by adopting a key frame detection module;
detecting key points in the key frames by adopting a key point detection module, and determining a first class of grading indexes according to the key points in the key frames;
an image segmentation module is adopted to segment the key frame, and a second class of grading indexes are determined;
identifying brain atrophy according to the first class of grading indexes and the second class of grading indexes.
Optionally, the first class of grading index comprises the maximum diameter between anterior horns, the minimum diameter between anterior horns, the lateral ventricular choroid plexus diameter, and the lateral ventricular parietal outer diameter;
the detecting key points in the key frame by using the key point detecting module and determining a first class of grading indexes according to the key points in the key frame include:
detecting anterior horn key points and lateral ventricle key points in the key frame by adopting a key point detection module;
determining the maximum diameter between the anterior horns and the minimum diameter between the anterior horns according to the anterior horn key points;
and determining the lateral ventricle choroid plexus diameter and the lateral ventricle parietal outer diameter according to the lateral ventricle key point.
Optionally, the second class of grading indexes comprises the widest diameter of the third ventricle;
the step of segmenting the key frame by adopting the image segmentation module and determining a second class of grading indexes comprises the following steps:
determining a first area according to key points in the key frame;
carrying out binarization processing on the first area to determine a second area;
segmenting the second region by adopting an image segmentation algorithm to determine the third ventricle region;
and determining the widest diameter of the third ventricle according to the third ventricle region.
Optionally, the second type of grading index comprises a skull maximum outer diameter and a skull maximum inner diameter;
the step of segmenting the key frame by adopting the image segmentation module and determining a second class of grading indexes comprises the following steps:
segmenting the key frame according to the CT value corresponding to the skull to determine a first boundary;
adopting an image segmentation algorithm to segment the first boundary and determining a skull boundary;
and determining the maximum outer diameter of the skull and the maximum inner diameter of the skull according to the boundary of the skull.
Optionally, the identifying brain atrophy from the first and second categories of grading indices comprises:
determining brain atrophy evaluation indexes according to the first class of grading indexes and the second class of grading indexes;
inputting the brain atrophy evaluation index into a brain atrophy model, and identifying brain atrophy.
Optionally, the key point detection module and the key frame detection module are convolutional neural networks.
In one aspect, an embodiment of the present invention provides an apparatus for identifying brain atrophy, including:
the key frame detection module is used for detecting key frames in the brain image sequence;
the key point detection module is used for detecting key points in the key frames and determining a first type of grading index according to the key points in the key frames;
the image segmentation module is used for segmenting the key frame and determining a second class of grading indexes;
and the identification module is used for identifying the brain atrophy according to the first classification index and the second classification index.
Optionally, the first class of grading index comprises the maximum diameter between anterior horns, the minimum diameter between anterior horns, the lateral ventricular choroid plexus diameter, and the lateral ventricular parietal outer diameter;
the key point detection module includes:
the first detection module is used for detecting anterior horn key points and lateral ventricle key points in the key frame;
the first determining module is used for determining the maximum diameter between the anterior horns and the minimum diameter between the anterior horns according to the anterior horn key points;
and the second determination module is used for determining the lateral ventricle choroid plexus diameter and the lateral ventricle parietal outer diameter according to the lateral ventricle key point.
Optionally, the second class of grading indexes comprises the widest diameter of the third ventricle;
the image segmentation module comprises:
the second detection module is used for determining a first area according to the key points in the key frame;
the third detection module is used for carrying out binarization processing on the first area and determining a second area;
the first segmentation module is used for segmenting the second region by adopting an image segmentation algorithm to determine the third ventricle region;
and the third determining module is used for determining the widest diameter of the third ventricle according to the third ventricle region.
Optionally, the second type of grading index comprises a skull maximum outer diameter and a skull maximum inner diameter;
the image segmentation module comprises:
the second segmentation module is used for segmenting the key frame according to the CT value corresponding to the skull to determine a first boundary;
the third segmentation module is used for segmenting the first boundary by adopting an image segmentation algorithm to determine a skull boundary;
and the fourth determination module is used for determining the maximum outer diameter of the skull and the maximum inner diameter of the skull according to the boundary of the skull.
Optionally, the identification module comprises:
a fifth determining module, configured to determine a brain atrophy evaluation index according to the first classification index and the second classification index;
and a sixth determination module for inputting the brain atrophy evaluation index into a brain atrophy model to identify brain atrophy.
In one aspect, embodiments of the present invention provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of identifying brain atrophy when executing the program.
In one aspect, embodiments of the invention provide a computer readable storage medium storing a computer program executable by a computer device, the program, when run on the computer device, causing the computer device to perform the steps of a method of identifying brain atrophy.
In the embodiment of the invention, the key frames in the brain image sequence are detected first, the key points in the key frames are then detected and the first class of grading indexes is determined based on the key points, and the second class of grading indexes is determined by segmenting the key frames. Because different detection modes are adopted according to the characteristics of different grading indexes, the detection precision of the grading indexes is improved. Identifying brain atrophy using both the first class and the second class of grading indexes likewise improves the identification precision. Moreover, a neural network model identifies brain atrophy automatically, so compared with manual measurement and calculation the method depends little on manual work and is highly efficient.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a method for identifying brain atrophy according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a brain image according to an embodiment of the present invention;
fig. 3a is a schematic structural diagram of a key frame detection module according to an embodiment of the present invention;
fig. 3b is a schematic structural diagram of a fast shrinking portion in a key frame detection module according to an embodiment of the present invention;
fig. 3c is a schematic structural diagram of a feature extraction part in a key frame detection module according to an embodiment of the present invention;
fig. 3d is a schematic structural diagram of a feature extraction sub-module in the feature extraction part according to an embodiment of the present invention;
fig. 3e is a schematic structural diagram of a classification neural network portion in a key frame detection module according to an embodiment of the present invention;
FIG. 4a is a diagram illustrating a key frame according to an embodiment of the present invention;
FIG. 4b is a diagram illustrating a key frame according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for detecting a second type of rating indicator according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a key frame according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a method for detecting a second type of rating indicator according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a key frame according to an embodiment of the present invention;
fig. 9 is a schematic flow chart of a method for determining a brain atrophy level according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an apparatus for identifying brain atrophy according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The method for identifying brain atrophy in the embodiment of the invention can be applied to the scenario of assisted diagnosis of brain atrophy: for example, a CT image of a patient's brain is obtained, the method of the embodiment of the invention is used to analyze the CT image, the brain atrophy identification result for the patient is output, and a doctor diagnoses the patient in combination with the output result.
Based on the above application scenario, the embodiment of the present invention provides a flow of a method for identifying brain atrophy, where the flow of the method may be executed by an apparatus for identifying brain atrophy, as shown in fig. 1, and includes the following steps:
step S101, a brain image sequence is obtained.
Specifically, the brain image sequence includes multiple frames of brain images, which may be computed tomography (CT) images, magnetic resonance images, or the like. Taking a CT image sequence as an example: for a given Digital Imaging and Communications in Medicine (DICOM) image sequence of the CT modality, the image information of each frame is read, interpolated and scaled to a fixed size (for example, 512 × 512 pixels), and adjusted to a fixed window (width W: 80, level L: 40) to obtain the brain image sequence. Illustratively, a brain image may be as shown in fig. 2.
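For illustration, the following is a minimal preprocessing sketch of step S101 in Python, assuming the pydicom, NumPy and OpenCV libraries are available; the function name, directory layout and windowing helper are illustrative assumptions rather than the patented implementation.
```python
import glob

import cv2
import numpy as np
import pydicom

def load_brain_sequence(dicom_dir, size=512, window_width=80, window_level=40):
    """Read a CT DICOM series, convert to HU, apply a brain window and resize."""
    slices = [pydicom.dcmread(p) for p in sorted(glob.glob(f"{dicom_dir}/*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order slices by z position
    frames = []
    for s in slices:
        # convert stored pixel values to Hounsfield units
        hu = s.pixel_array.astype(np.float32) * float(s.RescaleSlope) + float(s.RescaleIntercept)
        lo, hi = window_level - window_width / 2, window_level + window_width / 2
        hu = np.clip(hu, lo, hi)                      # fixed window (W: 80, L: 40)
        img = (hu - lo) / (hi - lo)                   # normalize to [0, 1]
        img = cv2.resize(img, (size, size), interpolation=cv2.INTER_LINEAR)
        frames.append(img)
    return np.stack(frames)                           # shape: (num_frames, 512, 512)
```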
Step S102, a key frame detection module is adopted to determine a key frame in the brain image sequence.
Specifically, the key frame detection module may be a 2D convolutional neural network (CNN) or a 3D CNN. The key frames are predefined brain images used for detecting brain atrophy, such as the level at which the lentiform nucleus of the basal ganglia is displayed largest and clearest and the level displaying the bodies of the two lateral ventricles, and are one or more frames in the brain image sequence.
Taking the case where the key frame detection module is a 2D CNN as an example, the network structure of the key frame detection module is first introduced. As shown in fig. 3a, it includes: a fast reduction part, a feature extraction part and a classification neural network part.
The fast reduction part is shown in fig. 3b and consists of a convolutional layer, a batch normalization (BN) layer, a rectified linear unit (ReLU) layer and a pooling layer. The convolution kernel size is 5 × 5 with a stride of 2 pixels, and the pooling layer is 2 × 2 max pooling. The fast reduction part quickly reduces the area of the brain image, so that the side length becomes 1/4 of the original.
The feature extraction part is composed of N feature extraction submodules as shown in fig. 3c, where N is an integer greater than 0. Each feature extraction sub-module comprises three bottleneck layers and one down-sampling layer as shown in fig. 3 d. The bottleneck layer and the downsampling layer each include three convolutional layers.
The bottleneck layer reduces the number of feature maps output by the fast reduction part through its first and second convolutional layers, restores the number of feature maps to the original count through its third convolutional layer, and then directly adds the feature maps output by the third convolutional layer to the feature maps output by the fast reduction part as its output.
The feature maps output by the fast reduction part pass through the three bottleneck layers in turn for feature extraction and are then input into the down-sampling layer. The down-sampling layer likewise reduces the number of feature maps output by the bottleneck layers through its first and second convolutional layers and restores the number through its third convolutional layer. At the same time, the third convolutional layer halves the size of the feature maps by using a convolution stride of 2. On the skip path, the feature maps output by the bottleneck layers are reduced to half size by 2 × 2 average pooling, and finally the feature maps output by the third convolutional layer and the average-pooled feature maps are added together and output.
The classification neural network part is shown in fig. 3e and comprises a global average pooling layer, a random inactivation (dropout) layer, a fully connected layer and a softmax layer. Its input is the feature map output by the feature extraction part, and its output is the predicted category of the brain image. The feature map is first reduced to a feature vector by the global average pooling layer; the feature vector is then passed through the dropout layer, the fully connected layer and the softmax layer to obtain a classification confidence vector, in which each element represents the confidence of one category and all confidences sum to 1. The category with the highest confidence is output as the predicted category of the brain image.
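As a concrete illustration of the structure described above, the following PyTorch sketch assembles the fast reduction part, the bottleneck and down-sampling layers, and the classification head; the channel count, the number of feature extraction submodules and the 3-channel input are illustrative assumptions, not values fixed by the patent.
```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, ch, mid):
        super().__init__()
        self.body = nn.Sequential(                       # squeeze, transform, expand
            nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, ch, 1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(self.body(x) + x)              # direct residual add, as described

class DownSample(nn.Module):
    def __init__(self, ch, mid):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, ch, 3, stride=2, padding=1), nn.BatchNorm2d(ch))  # stride 2 halves size
        self.pool = nn.AvgPool2d(2)                      # 2 x 2 average pooling on the skip path

    def forward(self, x):
        return torch.relu(self.body(x) + self.pool(x))

class KeyFrameNet(nn.Module):
    def __init__(self, in_ch=3, ch=64, num_blocks=3, num_classes=5):
        super().__init__()
        self.reduce = nn.Sequential(                     # fast reduction: side length -> 1/4
            nn.Conv2d(in_ch, ch, 5, stride=2, padding=2),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True), nn.MaxPool2d(2))
        self.features = nn.Sequential(*[
            nn.Sequential(Bottleneck(ch, ch // 4), Bottleneck(ch, ch // 4),
                          Bottleneck(ch, ch // 4), DownSample(ch, ch // 4))
            for _ in range(num_blocks)])                 # feature extraction submodules
        self.head = nn.Sequential(nn.Dropout(0.5), nn.Linear(ch, num_classes))

    def forward(self, x):
        f = self.features(self.reduce(x))
        f = f.mean(dim=(2, 3))                           # global average pooling
        return self.head(f)                              # logits; apply softmax at inference
```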
Suppose the key frames in a brain image sequence are 4 frames, where key frames No. 1, No. 2 and No. 3 are the levels at which the lentiform nucleus of the basal ganglia is displayed largest and clearest, and key frame No. 4 is the level displaying the bodies of the two lateral ventricles. When training the key frame detection module, a large number of brain CT image sequences are collected, and a doctor marks the 4 key frames in each CT image sequence. The CT image sequences marked with key frames are then expanded by data enhancement to obtain training samples. The data enhancement includes: random up-down and left-right translation of 0 to 20 pixels, random rotation of -20 to 20 degrees, random scaling of 0.8 to 1.2 times, and the like. The training samples are input into a convolutional neural network for training. During training, the predicted category output by the convolutional neural network is compared with the category marked in the training sample; a cross-entropy function is used as the target loss function, and iteration is repeated through the back-propagation algorithm with SGD optimization until the target function converges, yielding the key frame detection module.
When the key frame detection module is used to detect key frames in a brain image sequence, each frame from the 2nd frame to the second-to-last frame is first taken as a central frame and stitched with the frame immediately before it and the frame immediately after it, forming an input comprising 3 frames of brain images. This 3-frame input is then fed into the key frame detection module, which performs 5-category prediction on it to obtain the confidences of the 5 categories for the central frame. The 5 categories are numbered 0 to 4, where 0 indicates that the central frame is not a key frame and 1 to 4 indicate that the central frame is key frame No. 1 to No. 4, respectively. The category with the highest confidence is output as the category of the central frame.
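A sketch of the sliding-window inference just described, reusing the KeyFrameNet sketch above; the class coding (0 for non-key frames, 1 to 4 for the key frame numbers) follows the text, while the bookkeeping details are illustrative.
```python
import torch

@torch.no_grad()
def detect_key_frames(model, sequence):               # sequence: (T, 512, 512) float tensor
    model.eval()
    found = {}
    for t in range(1, sequence.shape[0] - 1):         # 2nd frame to second-to-last frame
        window = sequence[t - 1:t + 2].unsqueeze(0)   # stitch with the previous and next frame
        probs = torch.softmax(model(window), dim=1)[0]  # confidences of the 5 categories
        category = int(probs.argmax())                # 0: not a key frame, 1-4: key frame number
        if category > 0:
            found.setdefault(category, t)             # keep the first hit per key frame number
    return found
```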
It should be noted that, when the key frame detection module is a 3D CNN, each frame from the 3rd frame to the third-to-last frame may be taken as a central frame and stitched with the two frames before it and the two frames after it, forming an input comprising 5 frames of brain images. This 5-frame input is then fed into the key frame detection module for 5-category prediction, the confidences of the 5 categories for the central frame are obtained, and the category with the highest confidence is output as the category of the central frame.
In addition, besides a CNN, the key frame detection module may also be a traditional machine learning model: the stitched image sequence is input into the key frame detection module, which calculates gray-level features and texture features of the brain image in each channel and concatenates them into a feature vector. The feature vector is then fed into a classifier (such as a support vector machine or a random forest) to obtain the category of the central frame, as sketched below.
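A minimal sketch of this traditional machine-learning alternative, assuming scikit-image and scikit-learn are available; the particular gray-level and texture features chosen here are illustrative.
```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def frame_features(stitched):                         # stitched: (3, H, W) uint8 array
    feats = []
    for channel in stitched:                          # per-channel gray and texture features
        glcm = graycomatrix(channel, distances=[1], angles=[0], levels=256)
        feats += [channel.mean(), channel.std(),
                  graycoprops(glcm, "contrast")[0, 0],
                  graycoprops(glcm, "homogeneity")[0, 0]]
    return np.array(feats)

# classifier = SVC().fit([frame_features(w) for w in train_windows], train_labels)
# category = classifier.predict([frame_features(window)])[0]
```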
Step S103, a key point detection module is adopted to detect key points in the key frame, and a first-class grading index is determined according to the key points in the key frame.
Specifically, the key point detection module may be a 2D CNN, including: a fast reduction part, a feature extraction part and a classification neural network part.
The fast reduction part consists of a convolutional layer, a batch normalization (BN) layer, a rectified linear unit (ReLU) layer and a pooling layer. The convolution kernel size is 5 × 5 with a stride of 2 pixels, and the pooling layer is 2 × 2 max pooling. The fast reduction part quickly reduces the area of the key frame, so that the side length becomes 1/4 of the original.
The feature extraction part is composed of M feature extraction sub-modules, and M is an integer larger than 0. Each feature extraction sub-module comprises three bottleneck layers and one down-sampling layer. The bottleneck layer and the downsampling layer each include three convolutional layers.
The bottleneck layer reduces the number of feature maps output by the fast reduction part through its first and second convolutional layers, restores the number of feature maps to the original count through its third convolutional layer, and directly adds the feature maps output by the third convolutional layer to the feature maps output by the fast reduction part as its output.
The feature maps output by the fast reduction part pass through the three bottleneck layers in turn for feature extraction and are then input into the down-sampling layer, which reduces the number of feature maps output by the bottleneck layers through its first and second convolutional layers and restores the number through its third convolutional layer. At the same time, the third convolutional layer halves the size of the feature maps by using a convolution stride of 2; the feature maps output by the bottleneck layers are reduced to half size by 2 × 2 average pooling, and finally the feature maps output by the third convolutional layer and the average-pooled feature maps are added together and output.
The classification neural network part comprises a global average pooling layer, a random inactivation (dropout) layer, a fully connected layer and a linear conversion layer. Its input is the feature map output by the feature extraction part, and its output is the key point coordinates. The feature map is first reduced to a feature vector by the global average pooling layer; the feature vector is then passed through the dropout layer, the fully connected layer and the linear conversion layer to obtain a two-dimensional coordinate vector representing the position of the key point on the X axis and the Y axis.
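The only structural difference from the key frame detector sketched earlier is the output head; a minimal sketch, assuming the same backbone and one (x, y) pair per detected key point:
```python
import torch.nn as nn

class KeyPointHead(nn.Module):
    """Regression head: dropout, fully connected layer and linear conversion."""
    def __init__(self, ch=64, num_points=1):
        super().__init__()
        self.fc = nn.Sequential(nn.Dropout(0.5), nn.Linear(ch, 2 * num_points))

    def forward(self, pooled):                        # pooled: (B, ch) feature vectors
        return self.fc(pooled)                        # (B, 2 * num_points) coordinates
```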
When training the above key point detection module, a large number of brain CT image sequences are collected; a doctor marks the key frames in each CT image sequence and then marks the key points in the key frames. The CT image sequences marked with key frames and key points are expanded by data enhancement to obtain training samples. The data enhancement includes: random up-down and left-right translation of 0 to 20 pixels, random rotation of -20 to 20 degrees, random scaling of 0.8 to 1.2 times, and the like. The training samples are input into a convolutional neural network for training. During training, the key point coordinates predicted by the convolutional neural network are compared with the key point coordinates marked in the training samples; a mean square error (MSE) function is used as the target loss function, and iteration is repeated through the back-propagation algorithm with SGD optimization until the target function converges, yielding the key point detection module. When the key point detection module is used to detect key points in a key frame, the key frame is input into the module and the coordinates of the key points in the key frame are output.
Optionally, the first class of grading indexes includes the maximum diameter between the anterior horns, the minimum diameter between the anterior horns, the lateral ventricle choroid plexus diameter, and the lateral ventricle parietal outer diameter. When detecting the first class of grading indexes, the key point detection module is used to detect the anterior horn key points and the lateral ventricle key points in the key frames. The maximum diameter between the anterior horns and the minimum diameter between the anterior horns are then determined from the anterior horn key points, and the lateral ventricle choroid plexus diameter and the lateral ventricle parietal outer diameter are determined from the lateral ventricle key points.
Illustratively, suppose that after the key frame detection module processes the brain image sequence, 4 key frames are determined, and the key point detection module performs key point detection on each of the 4 key frames to determine the key points in each key frame. Let two of the key frames be as shown in fig. 4a and fig. 4b: the anterior horn key points are key points a1, a2, b1 and b2 in fig. 4a, and the lateral ventricle key points are key points d1 and d2 in fig. 4a and key points e1 and e2 in fig. 4b. For key points a1 and a2, they are detected in each key frame, the distances between a1 and a2 in the 4 key frames are compared, and the maximum distance is determined as the maximum diameter A between the anterior horns. For key points b1 and b2, they are detected in each key frame, the distances between b1 and b2 in the 4 key frames are compared, and the maximum distance is determined as the minimum diameter B between the anterior horns. For key points d1 and d2, they are detected in each key frame, the distances between d1 and d2 in the 4 key frames are compared, and the maximum distance is determined as the lateral ventricle choroid plexus diameter D. For key points e1 and e2, they are detected in each key frame, the distances between e1 and e2 in the 4 key frames are compared, and the maximum distance is determined as the lateral ventricle parietal outer diameter E.
Determining the first class of grading indexes by detecting the key points in every key frame and comparing the distances between key points across the key frames improves detection precision compared with determining the first class of grading indexes from the key points of a single key frame.
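A sketch of turning a detected key point pair into a first class grading index as described above, i.e. measuring the pair distance in every key frame and keeping the maximum; the dictionary layout is an illustrative assumption.
```python
import numpy as np

def max_pair_distance(keyframe_points, name_a, name_b):
    """keyframe_points: one {point_name: (x, y)} dict per key frame."""
    distances = [np.linalg.norm(np.subtract(pts[name_a], pts[name_b]))
                 for pts in keyframe_points
                 if name_a in pts and name_b in pts]
    return max(distances)

# e.g. maximum diameter A between the anterior horns from points a1/a2:
# A = max_pair_distance(keyframe_points, "a1", "a2")
```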
And step S104, segmenting the key frame by adopting an image segmentation module, and determining a second class of grading indexes.
In a possible embodiment, the second class of grading indexes includes the widest diameter of the third ventricle, and detecting this index includes the following steps, as shown in fig. 5:
step S501, a first region is determined according to the key points in the key frame.
Specifically, the key point detection module may be employed to detect the third ventricle key points in the key frame, and the first region is then determined based on the third ventricle key points and the anterior horn key points. Illustratively, in fig. 6 the third ventricle key points are key points c1 and c2; combining key points b1, b2, c1 and c2 determines the first region.
In step S502, binarization processing is performed on the first region, and a second region is determined.
Specifically, for each pixel in the first region, when the intensity of the pixel is greater than a preset threshold, the pixel is determined to be part of the third ventricle; otherwise it is determined to be part of the background. The pixels in the first region whose intensity is greater than the preset threshold form the second region.
And step S503, segmenting the second region by adopting an image segmentation algorithm, and determining the third ventricle region.
In specific implementations, image segmentation algorithms include threshold-based segmentation methods, edge-based segmentation methods, region-based segmentation methods, segmentation methods based on specific theories, and the like.
The basic idea of threshold-based segmentation is to calculate one or more gray thresholds from the gray-level features of the image, compare the gray value of each pixel with the thresholds, and finally assign each pixel to the appropriate category according to the comparison result. The most critical step of this type of method is therefore solving for the optimal gray threshold according to some criterion function.
In edge-based segmentation, an edge is a set of continuous pixels on the boundary line between two different regions in an image; it reflects the discontinuity of local image features, that is, abrupt changes in characteristics such as gray level, color and texture. In general, edge-based segmentation refers to edge detection based on gray values, building on the observation that edge gray values exhibit step-type or roof-type changes.
Region-based segmentation divides an image into different regions according to a similarity criterion, and mainly includes seed region growing, region splitting and merging, the watershed method, and the like. The watershed method is a mathematical-morphology segmentation method based on topology theory. Its basic idea is to regard the image as a topographic surface, where the gray value of each pixel represents the altitude of that point; each local minimum and its zone of influence is called a catchment basin, and the boundaries of the catchment basins form the watershed. The algorithm can be modeled as a flooding process: the lowest points of the image are submerged first, and the water gradually fills the entire valley. When the water level reaches a certain height it overflows, and a dam is built where the water overflows; the process is repeated until all pixels in the image are submerged, and the dams built become the watersheds separating the basins. The watershed algorithm responds well to weak edges, but noise in the image can cause over-segmentation.
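For illustration, a minimal watershed sketch for step S503 using scikit-image, applied to the binarized second region; building markers from distance-transform peaks is a common choice and an assumption here, not the patent's prescribed method.
```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_segment(binary_region):
    """binary_region: boolean mask of the second region."""
    distance = ndimage.distance_transform_edt(binary_region)      # "altitude" map
    peaks = peak_local_max(distance, min_distance=5,
                           labels=binary_region.astype(int))
    markers = np.zeros(binary_region.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)        # one catchment basin per peak
    return watershed(-distance, markers, mask=binary_region)      # label image of basins
```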
And step S504, determining the widest diameter of the third ventricle according to the third ventricle region.
Specifically, the maximum width of the third ventricle region is determined as the third ventricle widest diameter C. In a possible implementation, when there are multiple key frames, the above method may be used to detect the third ventricle region in each key frame and determine its maximum width; the maximum widths are then sorted in descending order, and the largest one is determined as the widest diameter of the third ventricle. Detecting the second class of grading indexes by combining key point detection with an image segmentation algorithm effectively improves the detection precision.
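A sketch of step S504 under the assumption that the third ventricle region is returned as a binary mask and that width is measured along image rows:
```python
import numpy as np

def region_max_width(mask, pixel_spacing=1.0):
    """Maximum row-wise extent of a binary region, in physical units."""
    best = 0
    for row in mask:
        cols = np.flatnonzero(row)
        if cols.size:
            best = max(best, cols[-1] - cols[0] + 1)
    return best * pixel_spacing

# widest diameter C of the third ventricle over several key frames:
# C = max(region_max_width(m, spacing) for m in third_ventricle_masks)
```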
In a possible embodiment, the second type of grading index comprises the maximum outer diameter of the skull and the maximum inner diameter of the skull, and the detecting the second type of grading index comprises the following steps, as shown in fig. 7:
and step S701, segmenting the key frame according to the CT value corresponding to the skull, and determining a first boundary.
Specifically, tissues of different densities correspond to different CT values. The CT value of the skull is generally greater than 400 HU, and its density is generally greater than that of other brain tissues, so the key frame can be segmented using the CT value corresponding to the skull, filtering out the other brain tissues to obtain the first boundary. The first boundary includes at least the inner boundary and the outer boundary of the skull.
Step S702, an image segmentation algorithm is adopted to segment the first boundary, and the skull boundary is determined.
In specific implementations, image segmentation algorithms include threshold-based segmentation methods, edge-based segmentation methods, region-based segmentation methods, segmentation methods based on specific theories, and the like.
And step S703, determining the maximum outer diameter of the skull and the maximum inner diameter of the skull according to the boundary of the skull.
Illustratively, with the skull boundary in the key frame as shown in fig. 8, the maximum outer diameter of the skull is distance F and the maximum inner diameter of the skull is distance G. In a possible embodiment, when there are multiple key frames, the above method can be used to detect the skull boundary in each key frame and determine the maximum outer diameter and maximum inner diameter of the skull in each. The maximum outer diameters are then compared across the key frames and the largest is taken as a second class grading index; likewise, the maximum inner diameters are compared and the largest is taken as a second class grading index. Because the density of the skull is greater than that of other brain tissues, segmenting the key frame by CT value yields an accurate first boundary of the skull; an accurate skull boundary is then obtained with the image segmentation algorithm, and the maximum outer diameter and maximum inner diameter of the skull are determined from it, effectively improving the precision of detecting the second class of grading indexes.
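A sketch of steps S701 to S703 under the assumption that the key frame is available in Hounsfield units; the 400 HU threshold follows the text, while hole filling and the row-wise diameter measurement are illustrative simplifications.
```python
import numpy as np
from scipy import ndimage

def max_extent(mask, pixel_spacing=1.0):
    """Maximum row-wise extent of a binary mask, in physical units."""
    best = 0
    for row in mask:
        cols = np.flatnonzero(row)
        if cols.size:
            best = max(best, cols[-1] - cols[0] + 1)
    return best * pixel_spacing

def skull_diameters(hu_frame, pixel_spacing=1.0, threshold=400):
    bone = hu_frame > threshold                   # first boundary: voxels above 400 HU
    outer = ndimage.binary_fill_holes(bone)       # skull plus everything it encloses
    cavity = outer & ~bone                        # region enclosed by the inner table
    return max_extent(outer, pixel_spacing), max_extent(cavity, pixel_spacing)  # F, G
```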
And step S105, identifying the brain atrophy according to the first classification index and the second classification index.
Optionally, when identifying brain atrophy based on the first classification index and the second classification index, the method specifically includes the following steps, as shown in fig. 9:
step S901, determining a brain atrophy evaluation index according to the first classification index and the second classification index.
Specifically, the brain atrophy evaluation indexes include the Ha's value, the ventricular index, the lateral ventricle body index, the lateral ventricle body width index, the anterior horn index and the third ventricle width.
The Ha's value is the sum of the maximum diameter between the anterior horns and the minimum diameter between the anterior horns; generally, the normal range is 3 to 6.9 for men and 2.6 to 5.2 for women.
The ventricular index is the ratio of the lateral ventricle choroid plexus diameter to the maximum diameter between the anterior horns; generally, the normal range is 1.1 to 3.3 for men and 1.1 to 2.9 for women.
The lateral ventricle body index is the ratio of the maximum outer diameter of the skull to the lateral ventricle parietal outer diameter; generally, the normal range is 4.3 to 7.4 for men and 3.9 to 7.7 for women.
The lateral ventricle body width index is the ratio of the maximum inner diameter of the skull to the lateral ventricle parietal outer diameter; generally, the normal range is 3.1 to 6.7 for men and 3.5 to 6.8 for women.
The anterior horn index is the ratio of the maximum inner diameter of the skull to the maximum diameter between the anterior horns; generally, the normal range is 2.8 to 8.2 for men and 3.0 to 8.5 for women.
Generally, the normal third ventricle width ranges from 1 to 6.7 for men and from 0 to 7 for women.
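A sketch of step S901 combining the two classes of grading indexes into the six evaluation indexes defined above; the variable names mirror the distance labels A to G used in the figures.
```python
def atrophy_evaluation_indexes(A, B, C, D, E, F, G):
    """A/B: max/min diameter between the anterior horns, C: third ventricle
    widest diameter, D: lateral ventricle choroid plexus diameter, E: lateral
    ventricle parietal outer diameter, F/G: skull max outer/inner diameter."""
    return {
        "ha_value": A + B,                        # Ha's value
        "ventricular_index": D / A,
        "lateral_ventricle_body_index": F / E,
        "lateral_ventricle_body_width_index": G / E,
        "anterior_horn_index": G / A,
        "third_ventricle_width": C,
    }
```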
Step S902, inputting the brain atrophy evaluation index into a brain atrophy model, and identifying the brain atrophy.
Specifically, the brain atrophy model may be used only to identify whether brain atrophy exists, or may be used to identify both whether brain atrophy exists and the level of brain atrophy.
When the brain atrophy model is used only to identify whether brain atrophy exists, the brain atrophy model may be a logistic regression model, a Bayesian model, or the like.
In one possible embodiment, the brain atrophy model is a logistic regression model, which specifically conforms to the following formula (1):
y1 = a1x1 + a2x2 + a3x3 + a4x4 + a5x5 + a6x6……………(1)
where y1 is the brain atrophy degree value, xi (i = 1, 2, 3, 4, 5, 6) are the brain atrophy evaluation indexes, and aj (j = 1, 2, 3, 4, 5, 6) are weighting coefficients with 0 ≤ aj ≤ 1.
When the brain atrophy degree value y1 is greater than the first threshold, it is determined that brain atrophy exists; when the brain atrophy degree value y1 is not greater than the first threshold, it is determined that brain atrophy does not exist.
In one possible embodiment, the brain atrophy model may be a Bayesian model, which specifically conforms to the following formula (2):
y1 = argmax Ck P(Ck) ∏i P(xi | Ck)……………(2)
where y1 is the brain atrophy category, xi (i = 1, 2, 3, 4, 5, 6) are the brain atrophy evaluation indexes, and Ck is the category item, with C0 and C1 denoting the two categories of absence and presence of brain atrophy; the presence of brain atrophy is indicated by 1 and the absence of brain atrophy by 0.
When the brain atrophy model is used to identify both whether brain atrophy exists and the level of brain atrophy, the brain atrophy model includes a brain atrophy determination module and a brain atrophy grading module. The brain atrophy determination module is first used to determine whether brain atrophy exists; when brain atrophy is determined to exist, the brain atrophy grading module further determines the brain atrophy level. The brain atrophy determination module may be a logistic regression model, a Bayesian model, or the like, and the brain atrophy grading module may likewise be a logistic regression model, a Bayesian model, or the like.
In one possible embodiment, the brain atrophy determination module is a logistic regression model that specifically conforms to formula (1) above: when the brain atrophy degree value y1 is greater than the first threshold, it is determined that brain atrophy exists, and when y1 is not greater than the first threshold, it is determined that brain atrophy does not exist.
When it is determined that brain atrophy exists, the brain atrophy grading module is used to determine the brain atrophy level. In this embodiment the brain atrophy grading module is a logistic regression model that specifically conforms to the following formula (3):
y2 = b1x1 + b2x2 + b3x3 + b4x4 + b5x5 + b6x6……………(3)
where y2 is the brain atrophy grading value, xi (i = 1, 2, 3, 4, 5, 6) are the brain atrophy evaluation indexes, and bk (k = 1, 2, 3, 4, 5, 6) are weighting coefficients with 0 ≤ bk ≤ 1.
A comparison table relating the brain atrophy grading value to the brain atrophy level is preset; after the brain atrophy grading module determines the grading value, the brain atrophy level can be obtained directly by looking it up in the table.
In one possible embodiment, the brain atrophy determination module may be a Bayesian model that specifically conforms to formula (2) above.
When it is determined that brain atrophy exists, the brain atrophy grading module is used to determine the brain atrophy level. In this embodiment the brain atrophy grading module is a Bayesian model that specifically conforms to the following formula (4):
y2 = argmax Dk P(Dk) ∏i P(xi | Dk)……………(4)
where y2 is the brain atrophy level, xi (i = 1, 2, 3, 4, 5, 6) are the brain atrophy evaluation indexes, and Dk is the category item, with D0, D1 and D2 denoting the levels of mild brain atrophy (indicated by 0), moderate brain atrophy (indicated by 1) and severe brain atrophy (indicated by 2).
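A sketch of the two-stage recognition described above: a determination module decides whether brain atrophy exists, and a grading module then assigns the level; the logistic-regression weights, threshold and comparison table are illustrative placeholders.
```python
import numpy as np

def identify_brain_atrophy(x, a, b, threshold, grade_table):
    """x: the six evaluation indexes; a, b: weight vectors of formulas (1)/(3);
    grade_table: {(low, high): level} intervals for the grading value."""
    y1 = float(np.dot(a, x))                        # formula (1): brain atrophy degree value
    if y1 <= threshold:
        return "no brain atrophy"
    y2 = float(np.dot(b, x))                        # formula (3): brain atrophy grading value
    for (low, high), level in grade_table.items():  # preset comparison table lookup
        if low <= y2 < high:
            return level
    return "unclassified"
```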
In the embodiment of the invention, the key frames in the brain image sequence are detected first, the key points in the key frames are then detected and the first class of grading indexes is determined based on the key points, and the second class of grading indexes is determined by segmenting the key frames. Because different detection modes are adopted according to the characteristics of different grading indexes, the detection precision of the grading indexes is improved. Identifying brain atrophy and determining the brain atrophy level using both classes of grading indexes likewise improves precision. Moreover, a neural network model automatically identifies brain atrophy and determines the brain atrophy level, so compared with manual measurement and calculation the method depends little on manual work and is highly efficient.
Based on the same technical concept, an embodiment of the present invention provides an apparatus for identifying brain atrophy, which may perform a flow of a method for identifying brain atrophy, as shown in fig. 10, wherein the apparatus 1000 includes:
a key frame detection module 1001, configured to detect a key frame in a brain image sequence;
a key point detection module 1002, configured to detect key points in the key frames and determine a first class of grading indexes according to the key points in the key frames;
an image segmentation module 1003, configured to segment the key frames and determine a second class of grading indexes;
and an identification module 1004, configured to identify brain atrophy according to the first class of grading indexes and the second class of grading indexes.
Optionally, the first class of grading index comprises the maximum diameter between anterior horns, the minimum diameter between anterior horns, the lateral ventricular choroid plexus diameter, and the lateral ventricular parietal outer diameter;
the key frame detection module 1001 includes:
the first detection module is used for detecting anterior horn key points and lateral ventricle key points in the key frame;
the first determining module is used for determining the maximum diameter between the anterior horns and the minimum diameter between the anterior horns according to the anterior horn key points;
and the second determination module is used for determining the lateral ventricle choroid plexus diameter and the lateral ventricle parietal outer diameter according to the lateral ventricle key point.
Optionally, the second class of grading indexes comprises the widest diameter of the third ventricle;
the image segmentation module 1003 includes:
the second detection module is used for determining a first area according to the key points in the key frame;
the third detection module is used for carrying out binarization processing on the first area and determining a second area;
the first segmentation module is used for segmenting the second region by adopting an image segmentation algorithm to determine the third ventricle region;
and the third determining module is used for determining the widest diameter of the third ventricle according to the third ventricle region.
Optionally, the second type of grading index comprises a skull maximum outer diameter and a skull maximum inner diameter;
the image segmentation module 1003 includes:
the second segmentation module is used for segmenting the key frame according to the CT value corresponding to the skull to determine a first boundary;
the third segmentation module is used for segmenting the first boundary by adopting an image segmentation algorithm to determine a skull boundary;
and the fourth determination module is used for determining the maximum outer diameter of the skull and the maximum inner diameter of the skull according to the boundary of the skull.
Optionally, the identifying module 1004 comprises:
a fifth determining module, configured to determine brain atrophy evaluation indexes according to the first class of grading indexes and the second class of grading indexes;
and a sixth determination module for inputting the brain atrophy evaluation index into a brain atrophy model to identify brain atrophy.
Optionally, the keypoint detection module 1002 and the keyframe detection module 1001 are convolutional neural networks.
Based on the same technical concept, an embodiment of the present invention provides a computer device, as shown in fig. 11, including at least one processor 1101 and a memory 1102 connected to the at least one processor. The embodiment of the present invention does not limit the specific connection medium between the processor 1101 and the memory 1102; in fig. 11 they are connected through a bus, as an example. The bus may be divided into an address bus, a data bus, a control bus, and so on.
In an embodiment of the present invention, the memory 1102 stores instructions executable by the at least one processor 1101, and the at least one processor 1101 performs the steps included in the method for identifying brain atrophy described above by executing the instructions stored in the memory 1102.
The processor 1101 is the control center of the computer device; it can connect various parts of the computer device through various interfaces and lines, and identifies brain atrophy by running or executing the instructions stored in the memory 1102 and calling the data stored in the memory 1102. Optionally, the processor 1101 may include one or more processing units, and may integrate an application processor, which mainly handles the operating system, user interface, application programs and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1101. In some embodiments, the processor 1101 and the memory 1102 may be implemented on the same chip; in other embodiments, they may be implemented on separate chips.
The processor 1101 may be a general purpose processor such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, configured to implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor.
Based on the same technical idea, embodiments of the present invention provide a computer-readable storage medium storing a computer program executable by a computer device, which when run on the computer device causes the computer device to perform the steps of the method of identifying brain atrophy.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A method of identifying brain atrophy, comprising:
determining a key frame in a brain image sequence by using a key frame detection module;
detecting key points in the key frame by using a key point detection module, and determining first-type grading indexes according to the key points in the key frame, wherein the first-type grading indexes comprise a maximum anterior horn diameter, a minimum anterior horn diameter, a lateral ventricle choroid plexus diameter and a lateral ventricle parietal outer diameter;
segmenting the key frame by using an image segmentation module to determine second-type grading indexes, wherein the second-type grading indexes comprise a maximum width diameter of the third ventricle, a maximum outer diameter of the skull and a maximum inner diameter of the skull;
and identifying brain atrophy according to the first-type grading indexes and the second-type grading indexes.
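For illustration only (the sketch below is not part of the claim), one plausible way to compose the three claimed modules end to end is shown here in Python; every identifier, key point label, and model interface is a hypothetical placeholder rather than anything specified by the patent:

```python
import numpy as np

def identify_brain_atrophy(ct_series, keyframe_detector, keypoint_detector,
                           segmenter, atrophy_model):
    """Hypothetical pipeline. ct_series: (N, H, W) array of CT slices."""
    keyframe = keyframe_detector(ct_series)        # key frame detection module
    kps = keypoint_detector(keyframe)              # dict: label -> (row, col)
    # First-type grading indexes as pairwise key point distances (labels assumed).
    pairs = [("ah_max_l", "ah_max_r"),             # maximum anterior horn diameter
             ("ah_min_l", "ah_min_r"),             # minimum anterior horn diameter
             ("cp_l", "cp_r"),                     # choroid plexus diameter
             ("lv_l", "lv_r")]                     # lateral ventricle parietal outer diameter
    first = np.array([np.linalg.norm(np.subtract(kps[a], kps[b])) for a, b in pairs])
    second = np.asarray(segmenter(keyframe))       # image segmentation module
    features = np.concatenate([first, second])
    return bool(atrophy_model.predict(features[None, :])[0])
```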
2. The method of claim 1, wherein the detecting key points in the key frame by using a key point detection module and determining first-type grading indexes according to the key points in the key frame comprises:
detecting anterior horn key points and lateral ventricle key points in the key frame by using the key point detection module;
determining the maximum anterior horn diameter and the minimum anterior horn diameter according to the anterior horn key points;
and determining the lateral ventricle choroid plexus diameter and the lateral ventricle parietal outer diameter according to the lateral ventricle key points.
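As a hedged illustration of the measurement step in this claim: once the key point detection module has returned coordinate pairs, each first-type grading index reduces to a Euclidean distance scaled by the slice's pixel spacing. The coordinates and the 0.5 mm default below are assumed values:

```python
import numpy as np

def diameter_mm(p, q, pixel_spacing_mm=0.5):
    """Euclidean distance between two key points, in millimetres.
    In practice pixel_spacing_mm would come from the CT metadata
    (e.g. the DICOM PixelSpacing tag); 0.5 is an assumed default."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))) * pixel_spacing_mm

# Hypothetical anterior horn key points on a 512x512 slice:
max_anterior_horn = diameter_mm((240, 198), (240, 314))  # -> 58.0 mm
```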
3. The method of claim 1, wherein the segmenting the key frame by using an image segmentation module to determine second-type grading indexes comprises:
determining a first region according to the key points in the key frame;
binarizing the first region to determine a second region;
segmenting the second region by using an image segmentation algorithm to determine a third ventricle region;
and determining the maximum width diameter of the third ventricle according to the third ventricle region.
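The claim leaves the image segmentation algorithm unnamed; purely as an assumption, the sketch below uses a watershed-style refinement over the binarized region, with illustrative CSF Hounsfield thresholds, kernel sizes and ROI handling:

```python
import cv2
import numpy as np

def third_ventricle_max_width(keyframe_hu, roi, csf_hu=(0, 15), spacing_mm=0.5):
    """Hedged sketch: binarize a first region at CSF-like CT values,
    refine it with watershed, and read off the widest row of the
    resulting third ventricle region. All thresholds are illustrative."""
    y0, y1, x0, x1 = roi
    first_region = keyframe_hu[y0:y1, x0:x1]
    binary = ((first_region >= csf_hu[0]) &
              (first_region <= csf_hu[1])).astype(np.uint8)   # second region
    kernel = np.ones((3, 3), np.uint8)
    sure_fg = cv2.erode(binary, kernel, iterations=2)
    sure_bg = cv2.dilate(binary, kernel, iterations=2)
    markers = np.zeros(binary.shape, dtype=np.int32)
    markers[sure_bg == 0] = 1                                 # confident background
    markers[sure_fg == 1] = 2                                 # confident ventricle
    img8 = cv2.normalize(first_region, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    cv2.watershed(cv2.cvtColor(img8, cv2.COLOR_GRAY2BGR), markers)
    ventricle = markers == 2                                  # third ventricle region
    return int(ventricle.sum(axis=1).max()) * spacing_mm      # widest row, in mm
```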
4. The method of claim 1, wherein the segmenting the key frame by using an image segmentation module to determine second-type grading indexes comprises:
segmenting the key frame according to the CT value corresponding to the skull to determine a first boundary;
segmenting the first boundary by using an image segmentation algorithm to determine a skull boundary;
and determining the maximum outer diameter of the skull and the maximum inner diameter of the skull according to the skull boundary.
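A hedged sketch of this claim's thresholding step: cortical bone has high CT attenuation, so thresholding at an assumed bone-like Hounsfield value yields a first boundary, after which a per-row scan of the bone ring estimates the outer and inner extents. The 300 HU threshold and the pixel spacing are assumed values, not taken from the patent:

```python
import numpy as np

def skull_diameters_mm(keyframe_hu, bone_hu=300.0, spacing_mm=0.5):
    """Returns (max outer diameter, max inner diameter) of the skull,
    estimated from a bone-threshold mask. Illustrative only."""
    bone = keyframe_hu >= bone_hu               # first boundary: bone mask
    outer_px, inner_px = 0, 0
    for row in bone:
        cols = np.flatnonzero(row)
        if cols.size < 2:
            continue
        outer_px = max(outer_px, cols[-1] - cols[0])         # outer table to outer table
        inside = np.flatnonzero(~row[cols[0]:cols[-1] + 1])  # gap inside the bone ring
        if inside.size >= 2:
            inner_px = max(inner_px, inside[-1] - inside[0])
    return outer_px * spacing_mm, inner_px * spacing_mm
```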
5. The method of any one of claims 1 to 4, wherein the identifying brain atrophy according to the first-type grading indexes and the second-type grading indexes comprises:
determining a brain atrophy evaluation index according to the first-type grading indexes and the second-type grading indexes;
and inputting the brain atrophy evaluation index into a brain atrophy model to identify brain atrophy.
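The patent does not give the formula for the brain atrophy evaluation index here; purely as an assumption, the sketch below normalizes the linear measurements by the maximum inner skull diameter (the first ratio has the same shape as the clinical Evans index) to produce a feature vector for the brain atrophy model. All dictionary keys are hypothetical:

```python
def brain_atrophy_evaluation_index(first, second):
    """first/second: dicts of first-/second-type grading indexes in mm.
    Returns a feature vector of skull-normalised ratios (assumed form)."""
    inner = second["max_inner_skull"]
    return [
        first["max_anterior_horn"] / inner,        # Evans-index-like ratio
        second["third_ventricle_width"] / inner,
        first["choroid_plexus"] / inner,
        first["lv_parietal_outer"] / inner,
    ]

# The resulting vector would then be fed to the brain atrophy model, e.g.:
# prediction = atrophy_model.predict([brain_atrophy_evaluation_index(f, s)])
```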
6. The method of claim 1, wherein the key point detection module and the key frame detection module are convolutional neural networks.
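Since this claim states only that both modules are convolutional neural networks, the following is a deliberately minimal PyTorch sketch of a key point detector (heatmap regression with an argmax decode); the depth, channel counts and number of key points are assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn

class KeyPointNet(nn.Module):
    """Minimal assumed architecture: small encoder + one heatmap per key point."""
    def __init__(self, num_keypoints: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_keypoints, kernel_size=1)

    def forward(self, x):                     # x: (B, 1, H, W) CT key frame
        heatmaps = self.head(self.encoder(x))
        b, k, h, w = heatmaps.shape
        flat = heatmaps.reshape(b, k, -1).argmax(dim=-1)
        rows = torch.div(flat, w, rounding_mode="floor")
        return torch.stack((rows, flat % w), dim=-1)  # (B, K, 2) row/col peaks
```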
7. An apparatus for identifying brain atrophy, comprising:
a key frame detection module, configured to detect a key frame in a brain image sequence;
a key point detection module, configured to detect key points in the key frame and to determine first-type grading indexes according to the key points in the key frame, wherein the first-type grading indexes comprise a maximum anterior horn diameter, a minimum anterior horn diameter, a lateral ventricle choroid plexus diameter and a lateral ventricle parietal outer diameter;
an image segmentation module, configured to segment the key frame and to determine second-type grading indexes, wherein the second-type grading indexes comprise a maximum width diameter of the third ventricle, a maximum outer diameter of the skull and a maximum inner diameter of the skull;
and an identification module, configured to identify brain atrophy according to the first-type grading indexes and the second-type grading indexes.
8. The apparatus of claim 7, wherein the key point detection module comprises:
a first detection module, configured to detect anterior horn key points and lateral ventricle key points in the key frame;
a first determining module, configured to determine the maximum anterior horn diameter and the minimum anterior horn diameter according to the anterior horn key points;
and a second determining module, configured to determine the lateral ventricle choroid plexus diameter and the lateral ventricle parietal outer diameter according to the lateral ventricle key points.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the steps of the method of any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program executable by a computer device, wherein the program, when run on the computer device, causes the computer device to perform the steps of the method of any one of claims 1 to 6.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910736591.1A CN110517766B (en) | 2019-08-09 | 2019-08-09 | Method and device for identifying brain atrophy |
| PCT/CN2019/130863 WO2021027240A1 (en) | 2019-08-09 | 2019-12-31 | Brain atrophy identification method, and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910736591.1A CN110517766B (en) | 2019-08-09 | 2019-08-09 | Method and device for identifying brain atrophy |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110517766A CN110517766A (en) | 2019-11-29 |
| CN110517766B (en) | 2020-10-16 |
Family
ID=68625476
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910736591.1A Active CN110517766B (en) | 2019-08-09 | 2019-08-09 | Method and device for identifying brain atrophy |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN110517766B (en) |
| WO (1) | WO2021027240A1 (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110517766B (en) * | 2019-08-09 | 2020-10-16 | 上海依智医疗技术有限公司 | Method and device for identifying brain atrophy |
| CN111462055B (en) * | 2020-03-19 | 2024-03-08 | 东软医疗系统股份有限公司 | Skull detection method and device |
| CN111862014A (en) * | 2020-07-08 | 2020-10-30 | 深圳市第二人民医院(深圳市转化医学研究院) | ALVI automatic measurement method and device based on left and right lateral ventricle segmentation |
| CN119338831B (en) * | 2024-12-23 | 2025-08-15 | 川北医学院附属医院 | Medical image processing method and system based on deep learning |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107103612A (en) * | 2017-03-28 | 2017-08-29 | 深圳博脑医疗科技有限公司 | Automated quantitative calculation method for subregional brain atrophy |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7250551B2 (en) * | 2002-07-24 | 2007-07-31 | President And Fellows Of Harvard College | Transgenic mice expressing inducible human p25 |
| US7952595B2 (en) * | 2007-02-13 | 2011-05-31 | Technische Universität München | Image deformation using physical models |
| US9558396B2 (en) * | 2013-10-22 | 2017-01-31 | Samsung Electronics Co., Ltd. | Apparatuses and methods for face tracking based on calculated occlusion probabilities |
| US10004471B2 (en) * | 2015-08-06 | 2018-06-26 | Case Western Reserve University | Decision support for disease characterization and treatment response with disease and peri-disease radiomics |
| CN105844617A (en) * | 2016-03-17 | 2016-08-10 | 电子科技大学 | Brain parenchyma segmentation based on an improved threshold segmentation algorithm |
| CN109389002A (en) * | 2017-08-02 | 2019-02-26 | 阿里巴巴集团控股有限公司 | Biopsy method and device |
| CN109214451A (en) * | 2018-08-28 | 2019-01-15 | 北京安德医智科技有限公司 | Classification method and device for brain abnormalities |
| CN109389585B (en) * | 2018-09-20 | 2021-11-02 | 东南大学 | A brain tissue extraction method based on fully convolutional neural network |
| CN109509211B (en) * | 2018-09-28 | 2021-11-16 | 北京大学 | Feature point extraction and matching method and system in simultaneous positioning and mapping technology |
| CN109509177B (en) * | 2018-10-22 | 2021-02-23 | 杭州依图医疗技术有限公司 | Method and device for brain image recognition |
| CN110517766B (en) * | 2019-08-09 | 2020-10-16 | 上海依智医疗技术有限公司 | Method and device for identifying brain atrophy |
- 2019-08-09: CN201910736591.1A filed in China; granted as CN110517766B (status: Active)
- 2019-12-31: PCT/CN2019/130863 filed; published as WO2021027240A1 (status: Ceased)
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107103612A (en) * | 2017-03-28 | 2017-08-29 | 深圳博脑医疗科技有限公司 | Automated quantitative calculation method for subregional brain atrophy |
Non-Patent Citations (1)
| Title |
|---|
| Deep Learning in Medical Image Analysis; Annual Review of Biomedical Engineering; 2017-12-31; pp. 1104-1109 * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021027240A1 (en) | 2021-02-18 |
| CN110517766A (en) | 2019-11-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110517766B (en) | Method and device for identifying brain atrophy | |
| US20220092789A1 (en) | Automatic pancreas ct segmentation method based on a saliency-aware densely connected dilated convolutional neural network | |
| Lee et al. | Segmentation of overlapping cervical cells in microscopic images with superpixel partitioning and cell-wise contour refinement | |
| CN110060235A (en) | Thyroid nodule ultrasound image segmentation method based on deep learning | |
| CN110120048B (en) | Three-dimensional brain tumor image segmentation method combining improved U-Net and CMF | |
| Zhao et al. | Al-net: Attention learning network based on multi-task learning for cervical nucleus segmentation | |
| Jiang et al. | A novel white blood cell segmentation scheme based on feature space clustering | |
| CN112308846A (en) | Blood vessel segmentation method and device and electronic equipment | |
| CN109919254B (en) | Breast density classification method, system, readable storage medium and computer device | |
| Song et al. | Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm | |
| CN114612459B (en) | A brain tumor image segmentation method based on multi-level clustering | |
| Czipczer et al. | Adaptable volumetric liver segmentation model for CT images using region-based features and convolutional neural network | |
| CN112529886A (en) | Attention DenseUNet-based MRI glioma segmentation method | |
| Yuan et al. | Hybrid method combining superpixel, random walk and active contour model for fast and accurate liver segmentation | |
| CN116883341A (en) | Liver tumor CT image automatic segmentation method based on deep learning | |
| US12148166B2 (en) | Updating boundary segmentations | |
| CN114862799B (en) | A Fully Automatic Brain Volume Segmentation Method for FLAIR-MRI Sequences | |
| CN113408595B (en) | Pathological image processing method and device, electronic equipment and readable storage medium | |
| Sinha et al. | ROI segmentation for breast cancer classification: deep learning perspective | |
| CN105956587B (en) | An automatic extraction method of meniscus from knee MRI image sequence based on shape constraint | |
| Qi et al. | An efficient FCM-based method for image refinement segmentation | |
| Cao et al. | Boundary loss with non-euclidean distance constraint for ABUS mass segmentation | |
| Çay | Deep learning-based brain tumor segmentation: a comparison of U-Net and segNet algorithms | |
| Chen et al. | A level set method with dynamic prior for cell image segmentation | |
| CN111062962B (en) | A Multi-threshold Ultrasound Image Segmentation Method Based on Differential Search Algorithm |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| TR01 | Transfer of patent right | ||
- Effective date of registration: 2022-03-22
- Address after: Zone A, 21/F, Block A, No. 8 Haidian Street, Haidian District, Beijing 100080
- Patentee after: BEIJING SHENRUI BOLIAN TECHNOLOGY Co., Ltd.
- Patentee after: Hangzhou Shenrui Bolian Technology Co., Ltd.
- Address before: Units 06 and 07, 23rd Floor, 523 Loushanguan Road, Changning District, Shanghai, 2003
- Patentee before: SHANGHAI YIZHI MEDICAL TECHNOLOGY Co., Ltd.