CN119477806B - Focus detection method and system based on image recognition - Google Patents
Focus detection method and system based on image recognition
- Publication number
- CN119477806B (application CN202411444467.5A)
- Authority
- CN
- China
- Prior art keywords
- lesion
- image
- focus
- target
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a focus detection method and system based on image recognition. The method comprises: acquiring a medical image to be detected and inputting it into a pre-trained focus area positioning model to determine a preliminary focus area containing a focus in the medical image to be detected; acquiring a preliminary focus area image of the preliminary focus area and inputting it into a pre-trained focus area extraction model to obtain a target focus area image of the focus; and inputting the target focus area image into a pre-trained focus recognition model to obtain the focus type in the medical image to be detected. The invention solves the problem of low accuracy of focus detection in the prior art.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a focus detection method and system based on image recognition.
Background
Today, medical image diagnosis still relies mainly on traditional means such as a doctor's subjective visual analysis and manual labeling. Doctors identify and diagnose a focus by combining image characteristics with experience and clinical knowledge. However, manually analyzing and labeling images is time-consuming, and when faced with a large number of images a physician is prone to fatigue, which may lead to missed diagnosis or misdiagnosis.
Because of these limitations, medical image recognition is increasingly being introduced to improve the accuracy and efficiency of lesion detection by means of deep learning and computer vision techniques. Adopting image recognition technology can significantly reduce the burden on medical staff and improve the objectivity and consistency of diagnosis.
At present, focus detection is usually performed by detecting the medical image directly to identify focus information. However, when the focus range in the image is small and interference factors are numerous, detection may be inaccurate because the focus area and its specific type have to be identified in a single pass.
Disclosure of Invention
In view of the above, the present invention provides a focus detection method and system based on image recognition, aiming to solve the problem of low accuracy of focus detection in the prior art.
An object of the present invention is to provide a lesion detection method based on image recognition for detecting a lesion in a medical image, the method comprising:
acquiring a medical image to be detected, and inputting the medical image into a pre-trained focus area positioning model to determine a preliminary focus area containing a focus in the medical image to be detected;
acquiring a preliminary focus area image of a preliminary focus area, and inputting the preliminary focus area image into a pre-trained focus area extraction model to obtain a target focus area image of a focus;
inputting the target focus area image into a pre-trained focus recognition model to acquire the focus type in the medical image to be detected.
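For orientation only, the sketch below shows how the three stages above could be chained at inference time. The model objects (locator, extractor, classifier) and their call interfaces are hypothetical placeholders and are not part of the patent disclosure.

```python
# Illustrative three-stage pipeline sketch; model interfaces are assumptions.
import torch

def detect_lesion(image: torch.Tensor, locator, extractor, classifier):
    # Stage 1: coarse localization of the preliminary focus area (bounding box).
    x1, y1, x2, y2 = locator(image.unsqueeze(0))[0]
    preliminary_roi = image[:, int(y1):int(y2), int(x1):int(x2)]

    # Stage 2: refine the preliminary area to the target focus area image.
    target_roi = extractor(preliminary_roi.unsqueeze(0))

    # Stage 3: classify the lesion type from the refined region.
    lesion_type = classifier(target_roi).argmax(dim=1)
    return lesion_type
```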
Further, in the above focus detection method based on image recognition, the training process of the focus area positioning model includes:
Acquiring a historical medical image containing a focus to acquire a training set and a verification set of the focus area positioning model;
Constructing a first detection algorithm based on preset parameters, and training by using the first detection algorithm according to a training set and a verification set of the focus area positioning model;
And training to obtain the focus area positioning model until the performance of the focus area positioning model meets a preset standard.
Further, in the above focus detection method based on image recognition, the training process of the focus region extraction model includes:
acquiring a history preliminary focus area image to acquire a training set and a verification set of the focus area extraction model;
Constructing a second detection algorithm based on preset parameters, and training by using the second detection algorithm according to a training set and a verification set of the focus region extraction model;
and training to obtain the focus area extraction model until the performance of the focus area extraction model meets a preset standard.
Further, in the above focus detection method based on image recognition, the training process of the focus recognition model includes:
Constructing a preset convolutional neural network, and collecting a preset number of historical target focus area images and corresponding focus types as training samples;
And respectively taking a target focus area image containing a focus and a corresponding focus type as input and output of the convolutional neural network, and performing deep learning training on the convolutional neural network until the recognition result output by the convolutional neural network meets the set accuracy, so as to obtain the focus recognition model.
Further, in the above focus detection method based on image recognition, before the step of acquiring the historical medical image containing a focus to obtain the training set and the verification set of the focus area positioning model, the method further includes:
acquiring a historical medical image, and respectively extracting a focus area image and a non-focus area image in the historical medical image;
randomly rotating, non-uniformly scaling and warping the focus area image to obtain a first focus area image;
and determining a target historical medical image according to the first focus area image and the non-focus area image.
Further, in the above focus detection method based on image recognition, the step of acquiring the historical medical image and respectively extracting the focus area image and the non-focus area image in the historical medical image further comprises:
Acquiring contour information of the current focus area image, and determining a diffusion rule of the focus area according to focus types of focuses in the focus area image;
Performing contour expansion on the focus region image according to the contour information and the diffusion rule to obtain an expanded focus region image;
and acquiring a center point of the focus area image, establishing a coordinate system with the center point as the coordinate origin, filling features of the focus area image into the expanded focus area image within the same quadrant to obtain a target expanded focus area image, and determining a target historical medical image from the target expanded focus area image and the non-focus area image.
Further, in the above focus detection method based on image recognition, the step of filling the features in the focus region image into the extended focus region image in the same quadrant to obtain the target extended focus region image includes:
Acquiring a rectangular area surrounding the image of the extended focus area, and performing grid division on the rectangular area;
acquiring a central grid of the focus area image in the same quadrant, and determining a target central grid corresponding to the central grid in the extended focus area image;
and filling the features of the other grids into grids which do not contain features of the expanded focus region image based on the position relation between the central grid and other grids and the target central grid to obtain a target expanded focus region image.
Another object of the present invention is to provide a lesion detection system based on image recognition for detecting a lesion in a medical image, the system comprising:
the acquisition module is used for acquiring a medical image to be detected, and inputting the medical image into a pre-trained focus area positioning model so as to determine a preliminary focus area containing a focus in the medical image to be detected;
The extraction module is used for acquiring a preliminary focus area image of the preliminary focus area, and inputting the preliminary focus area image into a pre-trained focus area extraction model to obtain a target focus area image of a focus;
the detection module is used for inputting the target focus area image into a pre-trained focus recognition model so as to acquire the focus type in the medical image to be detected.
It is a further object of the present invention to provide a readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the above.
It is a further object of the invention to provide an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, which processor implements the steps of the method described above when executing the program.
The invention first locates the focus area with the trained focus area positioning model, preliminarily identifying the region where a focus may exist; when a focus with a small range is encountered, the focus area extraction model locks onto its specific position; finally, the focus recognition model identifies the focus type. Detection of both the focus area and the focus type is thus realized, and because recognition proceeds in multiple stages, interference from other factors is avoided when identifying the focus type, improving the accuracy of focus recognition and solving the problem of low accuracy of focus detection in the prior art.
Drawings
Fig. 1 is a flowchart of a focus detection method based on image recognition according to a first embodiment of the present invention;
Fig. 2 is a block diagram of a focus detection system based on image recognition according to a third embodiment of the present invention.
The invention will be further described in the following detailed description in conjunction with the above-described figures.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Several embodiments of the invention are presented in the figures. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "mounted" on another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like are used herein for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
How to accurately detect small lesions in medical images will be described in detail below with reference to specific embodiments and accompanying drawings.
Example 1
Referring to fig. 1, a focus detection method based on image recognition in a first embodiment of the present invention is shown, and the method includes steps S10 to S12.
Step S10, a medical image to be detected is acquired, and the medical image is input into a pre-trained focus area positioning model to determine a preliminary focus area containing a focus in the medical image to be detected.
The medical image is an image acquired by a medical device, such as a CT image, that contains a focus. The focus area positioning model roughly estimates the focus area in the medical image, preliminarily identifying the area where a focus may exist.
Specifically, a lightweight target detection algorithm (such as YOLO or SSD) can be selected to quickly identify the region where a focus may exist. Illustratively, when training the focus area positioning model, historical medical images containing focuses are collected to obtain a training set and a verification set; a first detection algorithm (such as YOLO or SSD) is constructed based on preset parameters, and training is performed with the first detection algorithm on the training set and verification set. A suitable loss function (such as a combination of cross-entropy loss and bounding-box regression loss) can be selected, and an optimizer such as Adam or SGD is used to adjust hyper-parameters such as the learning rate. The model is trained iteratively on the training set for several epochs while its performance is monitored on the verification set to prevent overfitting, until the performance of the focus area positioning model meets the preset standard, yielding the trained focus area positioning model.
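As an illustration only, a minimal training sketch is given below, assuming an SSD detector from torchvision stands in for the "first detection algorithm"; the data loaders, class count and hyper-parameters are assumptions, not values fixed by the patent.

```python
# Minimal sketch: training a detector as the focus area positioning model.
import torch
from torchvision.models.detection import ssd300_vgg16

model = ssd300_vgg16(weights=None, num_classes=2)            # background + lesion
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # Adam, assumed learning rate

def train_one_epoch(loader):
    model.train()
    for images, targets in loader:  # images: list[Tensor], targets: [{"boxes": Tensor[N,4], "labels": Tensor[N]}]
        loss_dict = model(images, targets)       # classification + box-regression losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

@torch.no_grad()
def validate(loader):
    model.eval()
    # inspect detections on the verification set to monitor performance / overfitting
    return [model(images) for images, _ in loader]
```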
Step S11, a preliminary focus area image of the preliminary focus area is obtained, and the preliminary focus area image is input into a pre-trained focus area extraction model to obtain a target focus area image of the focus.
Specifically, after the preliminary focus area is determined, the preliminary focus area image is cropped from the medical image, and a more accurate target focus area image of the focus is obtained with the focus area extraction model. To train this model, historical preliminary focus area images are collected to obtain a training set and a verification set; a second detection algorithm is constructed based on preset parameters, and training is performed with the second detection algorithm on the training set and verification set until the performance of the focus area extraction model meets the preset standard. The focus area extraction model aims to accurately identify the specific contour of the focus, so network models such as U-Net or Mask R-CNN can be adopted. A large training data set is collected and the focuses are annotated with accurate contours, generally polygons or other forms, to ensure that the labels accurately reflect the focus boundaries; the final focus area extraction model is then obtained by training.
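The following is a compact U-Net-style sketch of what such a focus area extraction model might look like; the channel sizes and single-channel input are illustrative assumptions, not dimensions given in the patent.

```python
# Compact U-Net-style segmentation sketch for the focus area extraction model.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)        # per-pixel lesion probability

    def forward(self, x):                       # x: (B, 1, H, W), H and W even
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return torch.sigmoid(self.head(d))                  # lesion contour mask
```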
In addition, in some optional embodiments of the present invention, because the extracted preliminary focus area images differ in size, normalization such as uniform scaling may be applied to the obtained preliminary focus area images, so that the training images have consistent sizes while their original characteristics are preserved.
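A minimal preprocessing sketch of such uniform scaling is shown below; the 256x256 target size and the normalization statistics are assumptions for illustration.

```python
# Sketch: unify the size of cropped preliminary focus area images for training.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                      # keep original intensity characteristics
    transforms.Resize((256, 256)),              # uniform scaling so batch sizes match
    transforms.Normalize(mean=[0.5], std=[0.5]),
])
```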
Step S12, inputting the target lesion area image into a pre-trained lesion recognition model to obtain a lesion type in the medical image to be detected.
The focus recognition model has learned the inherent logic of focus type recognition, so the focus type in the medical image to be detected can be accurately recognized by inputting the target focus area image into the pre-trained focus recognition model.
Specifically, a preset convolutional neural network is constructed, and a preset number of historical target focus area images with their corresponding focus types are collected as training samples. The target focus area images containing focuses and the corresponding focus types are used as the input and output of the convolutional neural network, respectively, and deep-learning training is performed on the network until the recognition result it outputs meets the set accuracy, yielding the focus recognition model. The detailed selection and training of the neural network are well understood by those skilled in the art and are not repeated here.
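For illustration, a small convolutional classifier and training step are sketched below; the network depth, the number of lesion categories (NUM_TYPES) and the hyper-parameters are hypothetical, not values specified by the patent.

```python
# Sketch: CNN classifier for lesion type recognition on target focus area images.
import torch
import torch.nn as nn

NUM_TYPES = 4  # hypothetical number of lesion categories

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_TYPES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def train_step(images, labels):
    # images: target focus area images, labels: corresponding lesion types
    logits = classifier(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```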
In summary, the focus detection method based on image recognition in the above embodiment of the present invention first locates the focus area with the trained focus area positioning model, preliminarily identifying the region where a focus may exist; when a focus with a small range is encountered, the focus area extraction model locks onto its specific position; finally, the focus recognition model identifies the focus type. Detection of both the focus area and the focus type is thus realized, solving the problem of low accuracy of focus detection in the prior art.
Example 2
The present embodiment also proposes a focus detection method based on image recognition, which is different from the focus detection method based on image recognition in the first embodiment in that:
The step of acquiring the historical medical image containing the focus to acquire the training set and the verification set of the focus area positioning model further comprises the following steps:
acquiring a historical medical image, and respectively extracting a focus area image and a non-focus area image in the historical medical image;
randomly rotating, non-uniformly scaling and warping the focus area image to obtain a first focus area image;
and determining a target historical medical image according to the first focus area image and the non-focus area image.
In practice, image samples containing focuses are difficult to obtain, yet model training requires a sufficient quantity and variety of sample data. To greatly enhance the diversity of the data set, the focus area image is therefore transformed geometrically: it is randomly rotated to simulate different orientations, non-uniformly scaled to adapt to focuses of different sizes, and warped with affine or perspective transformation to change the focus shape and simulate the variation of biological tissue. The transformed focus area image is finally composited with the extracted non-focus area image to obtain a corresponding expanded target historical medical image.
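A possible implementation of these geometric transforms is sketched below with OpenCV; the parameter ranges and the simple paste-based compositing are illustrative assumptions.

```python
# Sketch: rotate, non-uniformly scale, and composite a focus area patch.
import numpy as np
import cv2

def augment_lesion_patch(patch: np.ndarray) -> np.ndarray:
    """Randomly rotate, then non-uniformly scale the focus area patch."""
    h, w = patch.shape[:2]
    angle = np.random.uniform(0.0, 360.0)                    # simulate arbitrary orientations
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    rotated = cv2.warpAffine(patch, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    sx, sy = np.random.uniform(0.7, 1.3, size=2)             # non-uniform scaling factors
    return cv2.resize(rotated, None, fx=float(sx), fy=float(sy))

def composite(background: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Paste the transformed focus patch onto a non-focus region image (assumes the patch fits)."""
    out = background.copy()
    out[top:top + patch.shape[0], left:left + patch.shape[1]] = patch
    return out
```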
In addition, the above way of improving data diversity carries a certain randomness. To expand the sample data in a more targeted manner, a diffusion rule of the focus area can be determined according to the current focus type after the focus area is extracted, so that diversified diffusion-expanded samples can be generated for the current focus area. Specifically, the contour information of the current focus area image is acquired, and the diffusion rule of the focus area is determined according to the focus type of the focus in the focus area image; contour expansion is then performed on the focus area image according to the contour information and the diffusion rule to obtain an expanded focus area image. At this point only the contour features of the focus area have been expanded. To further expand the features of the focus area completely, the center point of the focus area image is acquired (for example, its geometric center), a coordinate system is established with the center point as the origin, and the features in the focus area image are filled into the expanded focus area image within the same quadrant to obtain the target expanded focus area image, from which, together with the non-focus area image, the target historical medical image is determined.
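One way such contour expansion and center-point computation could be realized is sketched below; the per-type diffusion table and kernel sizes are hypothetical stand-ins for the diffusion rule described above.

```python
# Sketch: expand a focus mask per a type-specific "diffusion rule" and find its center.
import numpy as np
import cv2

DIFFUSION_KERNEL = {"type_a": 5, "type_b": 9}   # assumed expansion radii per lesion type

def expand_lesion_mask(mask: np.ndarray, lesion_type: str) -> np.ndarray:
    """Expand the focus contour by dilating its binary mask."""
    k = DIFFUSION_KERNEL.get(lesion_type, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    return cv2.dilate(mask, kernel)

def lesion_center(mask: np.ndarray) -> tuple:
    """Geometric center of the focus mask, used as the quadrant coordinate origin."""
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())
```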
Additionally, in some optional embodiments of the present invention, the step of filling the features in the lesion area image into the extended lesion area image in the same quadrant to obtain the target extended lesion area image includes:
Acquiring a rectangular area surrounding the image of the extended focus area, and performing grid division on the rectangular area;
acquiring a central grid of the focus area image in the same quadrant, and determining a target central grid corresponding to the central grid in the extended focus area image;
and filling the features of the other grids into grids which do not contain features of the expanded focus region image based on the position relation between the central grid and other grids and the target central grid to obtain a target expanded focus region image.
When feature migration filling is carried out, the focus area is divided into a number of small grids, and migration is performed according to the pixel features in each grid. In this embodiment of the invention, a center-point mapping strategy is adopted: based on the relation between the center point of the focus area and the center point of the expanded area, the corresponding target center grid (or center point) is found in the expanded area, and each grid of the focus area is mapped onto the expanded focus area according to its position relative to the center grid.
For example, when filling, the features of the original focus area may or may not be retained: the features may be filled over the whole expanded focus area, or they may be filled only after the portion overlapping the original focus area has been removed.
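A rough sketch of the grid-based, center-mapped feature migration is given below; the cell size and the "fill only feature-free cells" policy are illustrative assumptions.

```python
# Sketch: migrate grid cells of the focus image into the expanded image via center mapping.
import numpy as np

def migrate_features(src: np.ndarray, dst: np.ndarray, cell: int = 16) -> np.ndarray:
    """Re-anchor each source cell at the destination's center cell and fill empty cells."""
    out = dst.copy()
    sc = (src.shape[0] // (2 * cell), src.shape[1] // (2 * cell))   # source center cell (row, col)
    dc = (dst.shape[0] // (2 * cell), dst.shape[1] // (2 * cell))   # destination center cell
    for i in range(0, src.shape[0] - cell + 1, cell):
        for j in range(0, src.shape[1] - cell + 1, cell):
            # keep the cell's offset from the source center cell, re-anchored at dst's center cell
            r = (i // cell - sc[0]) + dc[0]
            c = (j // cell - sc[1]) + dc[1]
            y, x = r * cell, c * cell
            if 0 <= y <= dst.shape[0] - cell and 0 <= x <= dst.shape[1] - cell:
                if not out[y:y + cell, x:x + cell].any():           # fill only feature-free cells
                    out[y:y + cell, x:x + cell] = src[i:i + cell, j:j + cell]
    return out
```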
In summary, the focus detection method based on image recognition in this embodiment likewise first locates the focus area with the trained focus area positioning model, preliminarily identifying the region where a focus may exist; when a focus with a small range is encountered, the focus area extraction model locks onto its specific position; finally, the focus recognition model identifies the focus type. Detection of both the focus area and the focus type is thus realized, solving the problem of low accuracy of focus detection in the prior art.
Example 3
Referring to fig. 2, a focus detection system based on image recognition according to a third embodiment of the present invention is used for detecting a focus in a medical image, and the system includes:
An acquisition module 100, configured to acquire a medical image to be detected, and input the medical image into a pre-trained focus area positioning model, so as to determine a preliminary focus area containing a focus in the medical image to be detected;
The extraction module 200 is configured to obtain a preliminary focus area image of a preliminary focus area, and input the preliminary focus area image into a pre-trained focus area extraction model to obtain a target focus area image of a focus;
the detection module 300 is configured to input the target lesion area image into a pre-trained lesion recognition model, so as to obtain a lesion type in the medical image to be detected.
Further, in the focus detection system based on image recognition, the training process of the focus area positioning model includes:
Acquiring a historical medical image containing a focus to acquire a training set and a verification set of the focus area positioning model;
Constructing a first detection algorithm based on preset parameters, and training by using the first detection algorithm according to a training set and a verification set of the focus area positioning model;
And training to obtain the focus area positioning model until the performance of the focus area positioning model meets a preset standard.
Further, in the focus detection system based on image recognition, the training process of the focus region extraction model includes:
acquiring a history preliminary focus area image to acquire a training set and a verification set of the focus area extraction model;
constructing a second detection algorithm based on preset parameters, and training by using the second detection algorithm according to a training set and a verification set of the focus region extraction model;
and training to obtain the focus area extraction model until the performance of the focus area extraction model meets a preset standard.
Further, in the focus detection system based on image recognition, the training process of the focus recognition model includes:
Constructing a preset convolutional neural network, and collecting a preset number of historical target focus area images and corresponding focus types as training samples;
And respectively taking a target focus area image containing a focus and a corresponding focus type as input and output of the convolutional neural network, and performing deep learning training on the convolutional neural network until the recognition result output by the convolutional neural network meets the set accuracy, so as to obtain the focus recognition model.
Further, in the above focus detection system based on image recognition, before the step of acquiring the historical medical image containing a focus to obtain the training set and the verification set of the focus area positioning model, the system further includes:
acquiring a historical medical image, and respectively extracting a focus area image and a non-focus area image in the historical medical image;
randomly rotating, non-uniformly scaling and warping the focus area image to obtain a first focus area image;
and determining a target historical medical image according to the first focus area image and the non-focus area image.
Further, in the above focus detection system based on image recognition, the step of acquiring the historical medical image and respectively extracting the focus area image and the non-focus area image in the historical medical image further includes:
Acquiring contour information of the current focus area image, and determining a diffusion rule of the focus area according to focus types of focuses in the focus area image;
Performing contour expansion on the focus region image according to the contour information and the diffusion rule to obtain an expanded focus region image;
and acquiring a center point of the focus area image, establishing a coordinate system with the center point as the coordinate origin, filling features of the focus area image into the expanded focus area image within the same quadrant to obtain a target expanded focus area image, and determining a target historical medical image from the target expanded focus area image and the non-focus area image.
Further, in the above focus detection system based on image recognition, the step of filling the features in the focus area image into the extended focus area image in the same quadrant to obtain the target extended focus area image includes:
Acquiring a rectangular area surrounding the image of the extended focus area, and performing grid division on the rectangular area;
acquiring a central grid of the focus area image in the same quadrant, and determining a target central grid corresponding to the central grid in the extended focus area image;
and filling the features of the other grids into grids which do not contain features of the expanded focus region image based on the position relation between the central grid and other grids and the target central grid to obtain a target expanded focus region image.
The functions or operation steps implemented when the above modules are executed are substantially the same as those in the above method embodiments, and are not described herein again.
Example 4
Another aspect of the present invention also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described in any one of the first to second embodiments.
Example 5
In another aspect, the present invention further provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor executes the program to implement the steps of the method described in any one of the first to second embodiments.
The technical features of the above embodiments may be arbitrarily combined, and for brevity, all of the possible combinations of the technical features of the above embodiments are not described, however, they should be considered as the scope of the description of the present specification as long as there is no contradiction between the combinations of the technical features.
Those of skill in the art will appreciate that the logic and/or steps represented in the flow diagrams or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable storage medium would include an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable storage medium may even be paper or other suitable medium upon which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one of, or a combination of, the following techniques known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with appropriate combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411444467.5A CN119477806B (en) | 2024-10-16 | 2024-10-16 | Focus detection method and system based on image recognition |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411444467.5A CN119477806B (en) | 2024-10-16 | 2024-10-16 | Focus detection method and system based on image recognition |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119477806A CN119477806A (en) | 2025-02-18 |
| CN119477806B true CN119477806B (en) | 2025-08-01 |
Family
ID=94593549
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411444467.5A Active CN119477806B (en) | 2024-10-16 | 2024-10-16 | Focus detection method and system based on image recognition |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119477806B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112581458A (en) * | 2020-12-24 | 2021-03-30 | 清华大学 | Image processing method and device |
| CN116596861A (en) * | 2023-04-28 | 2023-08-15 | 中山大学 | Method, system, device and storage medium for identifying tooth surface lesions |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100158332A1 (en) * | 2008-12-22 | 2010-06-24 | Dan Rico | Method and system of automated detection of lesions in medical images |
| US9978150B2 (en) * | 2015-08-05 | 2018-05-22 | Algotec Systems Ltd. | Method and system for spatial segmentation of anatomical structures |
| CN109993733A (en) * | 2019-03-27 | 2019-07-09 | 上海宽带技术及应用工程研究中心 | Detection method, system, storage medium, terminal and the display system of pulmonary lesions |
| CN111028206A (en) * | 2019-11-21 | 2020-04-17 | 万达信息股份有限公司 | Prostate cancer automatic detection and classification system based on deep learning |
| CN111127466B (en) * | 2020-03-31 | 2021-06-11 | 上海联影智能医疗科技有限公司 | Medical image detection method, device, equipment and storage medium |
- 2024-10-16: CN application CN202411444467.5A, granted as CN119477806B (active)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112581458A (en) * | 2020-12-24 | 2021-03-30 | 清华大学 | Image processing method and device |
| CN116596861A (en) * | 2023-04-28 | 2023-08-15 | 中山大学 | Method, system, device and storage medium for identifying tooth surface lesions |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119477806A (en) | 2025-02-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111127466B (en) | Medical image detection method, device, equipment and storage medium | |
| CN109741346B (en) | Region-of-interest extraction method, device, equipment and storage medium | |
| CN112529894B (en) | Thyroid nodule diagnosis method based on deep learning network | |
| JP5814504B2 (en) | Medical image automatic segmentation system, apparatus and processor using statistical model | |
| CN110717905B (en) | Brain image detection method, computer device, and storage medium | |
| CN110232383A (en) | A kind of lesion image recognition methods and lesion image identifying system based on deep learning model | |
| CN112734710A (en) | Device and system for constructing focus recognition model based on historical pathological information | |
| CN111862044A (en) | Ultrasound image processing method, apparatus, computer equipment and storage medium | |
| US20090022375A1 (en) | Systems, apparatus and processes for automated medical image segmentation | |
| CN114494215B (en) | Thyroid nodule detection method based on transducer | |
| JP6273291B2 (en) | Image processing apparatus and method | |
| JP2016531709A (en) | Image analysis technology for diagnosing disease | |
| US12106533B2 (en) | Method and system for segmenting interventional device in image | |
| CN111986206A (en) | Lung lobe segmentation method and device based on UNet network and computer-readable storage medium | |
| KR20240147616A (en) | Apparatus and method for quantitative chronic obstructive pulmonary disease evaluation using analysis of emphysema | |
| CN107582058A (en) | A kind of method of the intelligent diagnostics malignant tumour of magnetic resonance prostate infusion image | |
| CN113159040A (en) | Method, device and system for generating medical image segmentation model | |
| CN113706559A (en) | Blood vessel segmentation extraction method and device based on medical image | |
| WO2008036372A2 (en) | Method and system for lymph node segmentation in computed tomography images | |
| CN119477806B (en) | Focus detection method and system based on image recognition | |
| CN119170255A (en) | A pancreatic tumor intelligent analysis method and system based on multimodal medical image fusion | |
| CN117315378B (en) | Grading judgment method for pneumoconiosis and related equipment | |
| Delmoral et al. | Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study | |
| CN113554647B (en) | Registration method and device for medical images | |
| KR102400568B1 (en) | Method and apparatus for identifying anomaly area of image using encoder |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |