CN114820663B - Assistant positioning method for determining radio frequency ablation therapy - Google Patents
- Publication number
- CN114820663B (application CN202210737793.XA)
- Authority
- CN
- China
- Prior art keywords
- pixel point
- image
- abdominal
- value
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radiology & Medical Imaging (AREA)
- Animal Behavior & Ethology (AREA)
- Optics & Photonics (AREA)
- Pathology (AREA)
- High Energy & Nuclear Physics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Pulmonology (AREA)
- Multimedia (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention relates to the field of image processing and provides an auxiliary positioning method for determining radio frequency ablation treatment, which comprises the following steps: acquiring an abdominal CT image; obtaining the richness of each pixel point from the CT values of each pixel point and its neighborhood pixel points on the abdominal CT image; partitioning the abdominal CT image and taking the historical tumor probability of each region as the attention of every pixel point in that region; obtaining a first enhancement coefficient of each pixel point; obtaining the adjusted CT values of all the pixel points; obtaining a second enhancement coefficient of each pixel point; obtaining a CT reconstruction value of each pixel point and obtaining a new abdominal CT image from the CT reconstruction values of all the pixel points; and carrying out threshold segmentation on the new abdominal CT image to obtain the tumor region. The invention obtains a clearer tumor boundary with a simple method.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to an auxiliary positioning method for determining radio frequency ablation treatment.
Background
The liver is one of the five internal organs of the human body; its main function is metabolism, and it is an important organ for maintaining human life. Due to factors such as poor eating habits and irregular work and rest, tumor lesions can occur in the liver and develop into liver cancer, which seriously threatens human life and health. China is one of the high-incidence areas of liver cancer, with both a high incidence rate and a high fatality rate.
Radio frequency ablation is an effective method for treating liver tumors: an electrode catheter is inserted into the center of the liver tumor through the femoral artery and vein, the internal jugular vein and the subclavian vein, the electrode is then deployed and radio frequency ablation is started. An important step before determining the radio frequency ablation treatment plan is to scan and locate the liver by CT and to puncture the tumor (percutaneously) after its position and size have been determined. In clinical practice, however, doctors still identify the tumor region in the CT image through their own knowledge and experience; in abdominal CT images the tumor morphology differs between individuals and the gray-scale difference between the tumor and the liver is small, so a clear tumor boundary is difficult to distinguish. A tumor CT image processing method is therefore needed to improve the accuracy of tumor identification by doctors.
The invention performs image preprocessing on the liver region on the basis of liver segmentation of the abdominal CT image, so that the gray-scale characteristics of the tumor are better extracted and the tumor boundary becomes clearer, providing reliable preliminary data support and treatment assistance for determining a radio frequency ablation treatment method and treatment plan.
Disclosure of Invention
The invention provides an auxiliary positioning method for determining radio frequency ablation treatment, which aims to solve the problem that the tumor boundary in existing abdominal CT images is not clear.
The invention discloses an auxiliary positioning method for determining radio frequency ablation treatment, which adopts the following technical scheme; the method comprises the following steps:
acquiring an abdominal CT image;
obtaining the richness of each pixel point on the abdominal CT image through the CT values of each pixel point and its neighborhood pixel points on the abdominal CT image;
partitioning the abdominal CT image, and taking the historical tumor probability of each region on the abdominal CT image as the attention of each pixel point of the region;
obtaining a first enhancement coefficient of each pixel point according to the richness and the attention of each pixel point on the abdominal CT image;
obtaining the adjusted CT values of all the pixel points through the first enhancement coefficient of each pixel point on the abdominal CT image and the CT values of all the pixel points on the abdominal CT image;
obtaining a second enhancement coefficient of each pixel point through the first enhancement coefficient of each pixel point and the maximum value in the adjusted CT values of all the pixel points;
reconstructing the CT value of each pixel point through the CT value, the first enhancement coefficient and the second enhancement coefficient of each pixel point on the abdominal CT image to obtain the CT reconstruction value of each pixel point, and obtaining a new abdominal CT image through the CT reconstruction values of all the pixel points;
and performing threshold segmentation on the new abdominal CT image to obtain a tumor region.
Further, in the auxiliary positioning method for determining the radio frequency ablation therapy, the abdominal CT image is any one abdominal CT image in an abdominal CT image sequence of the same person, and the other abdominal CT images in the sequence are processed in the same way according to the processing method of the abdominal CT image.
Further, in the auxiliary positioning method for determining the radio frequency ablation therapy, the method for partitioning the abdominal CT image is:
and establishing a coordinate system on the abdominal CT image, and dividing each pixel point on the abdominal CT image into regions by using the coordinates of each pixel point on the abdominal CT image to obtain the region where each pixel point is located.
Further, in the auxiliary positioning method for determining the radio frequency ablation therapy, the CT value of each pixel point on the abdominal CT image is the redefined CT value of the pixel point;
the redefined CT value of the pixel point is expressed as:

$$g(x,y)=\frac{f(x,y)-CT_{\min}}{CT_{\max}-CT_{\min}}\times 255$$

where $g(x,y)$ denotes the redefined CT value of the pixel point $(x,y)$, $f(x,y)$ denotes the original CT value of the pixel point $(x,y)$, $CT_{\max}$ denotes the maximum CT value in the abdominal CT image, and $CT_{\min}$ denotes the minimum CT value in the abdominal CT image.
Further, in the auxiliary positioning method for determining the radio frequency ablation therapy, the CT reconstruction value of the pixel point is obtained as a linear transformation of the redefined CT value $g(x,y)$ of the pixel point, in which the first enhancement coefficient $k_1(x,y)$ and the second enhancement coefficient $k_2(x,y)$ of the pixel point serve as the coefficients of the linear transformation.
Further, in the auxiliary positioning method for determining the radio frequency ablation therapy, the second enhancement coefficient of the pixel point is obtained from the first enhancement coefficient $k_1(x,y)$ of the pixel point and the maximum value $b$ of the adjusted CT values of all the pixel points, where $b$ denotes the maximum value of the adjusted CT values of all the pixel points obtained from the first enhancement coefficients of the pixel points and the redefined CT values of all the pixel points on the abdominal CT image.
Further, in the auxiliary positioning method for determining the radio frequency ablation therapy, the first enhancement coefficient of each pixel point is obtained by dividing the attention of the pixel point by its richness;
the first enhancement coefficient of the pixel point is expressed as:

$$k_1(x,y)=\frac{Z(x,y)}{R(x,y)}$$

where $Z(x,y)$ denotes the attention of the pixel point and $R(x,y)$ denotes the richness of the pixel point.
Further, in the auxiliary positioning method for determining the radio frequency ablation therapy, the richness of the pixel point is obtained from the redefined CT value $g_j$ of the $j$-th pixel point in the window whose central pixel point is $(x,y)$, the redefined CT value $g(x,y)$ of the central pixel point, and the mean $\bar{g}$ of the redefined CT values of all pixel points on the abdominal CT image.
The beneficial effects of the invention are: the invention provides an auxiliary positioning method for determining radio frequency ablation treatment, in which the richness of each pixel point is obtained from the CT values of the pixel points, the attention of each pixel point is obtained from the probability of a tumor occurring in each region in a database, and the CT reconstruction value of each pixel point is then determined, so that a new abdominal CT image is obtained and treatment assistance is provided for the subsequent determination of the radio frequency ablation treatment method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or in the description of the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart diagram of an embodiment of an assisting localization method for determining a radio frequency ablation treatment according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
An embodiment of an assisting positioning method for determining a radio frequency ablation therapy of the present invention, as shown in fig. 1, includes:
101. An abdominal CT image is acquired.
Multi-sequence abdominal CT images containing the liver region are acquired; "multi-sequence" means that the scanner performs cross-sectional scans one after another around the scanned body part. The multi-sequence abdominal CT images contain text information and other noise interference, so such noise needs to be removed before the CT images are preprocessed. This embodiment extracts the human tissue region image in each image, i.e. the abdominal region image, with a DNN semantic segmentation network and removes the text background noise.
The content of the used DNN semantic segmentation network is:
The input data set of the network is an abdominal CT image set screened by a professional diagnostician and containing the liver.
Each CT image in the image set is manually labeled into two classes: the human tissue region (target class) is labeled 1, and the text-region background class is labeled 0.
Since the DNN semantic segmentation network in this embodiment performs classification, the network adopts the cross-entropy function as its loss function.
Thus, the multi-sequence abdominal CT images are obtained by the semantic segmentation.
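For illustration only, the following is a minimal sketch of a two-class segmentation setup of the kind described above. The patent specifies only a DNN semantic segmentation network trained with a cross-entropy loss on two classes (human tissue labeled 1, text background labeled 0); the architecture, layer sizes and helper names below are assumptions.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Illustrative two-class segmentation network (architecture assumed)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, 2, 1)  # 2 output channels: tissue (1) / text background (0)

    def forward(self, x):
        return self.classifier(self.encoder(x))

def train_step(model, optimizer, ct_batch, label_batch):
    """One training step using the cross-entropy loss mentioned above."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    logits = model(ct_batch)               # (B, 2, H, W)
    loss = criterion(logits, label_batch)  # label_batch: (B, H, W) with values in {0, 1}
    loss.backward()
    optimizer.step()
    return loss.item()

def remove_text_background(model, ct_image):
    """Keep only the human tissue region of a single slice; zero out text background."""
    with torch.no_grad():
        logits = model(ct_image.unsqueeze(0).unsqueeze(0))  # (1, 1, H, W) input
        mask = logits.argmax(dim=1).squeeze(0)              # (H, W), 1 = tissue
    return ct_image * mask
```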
102. And obtaining the richness of each pixel point on the abdominal CT image through the CT values of each pixel point and its neighborhood pixel points on the abdominal CT image.
Due to the influence of the CT equipment, environmental noise, the gray-scale characteristics of the tumor and other factors, the boundary of the tumor in the CT image is blurred and the texture of the tumor region is unclear. Therefore, before the liver tumor region is extracted, the image needs to be preprocessed to enhance the contrast between the tumor region and the liver region and improve the effect of the threshold segmentation, so as to facilitate extraction of the liver tumor region. The abdominal CT image preprocessing method is as follows:
the CT values in the abdominal CT image are distributed in the range of-1000 to 1000. In order to improve the calculation efficiency and reduce the calculation amount, the CT range is defined to be within the range of 0-255, and the expression of the redefined CT value is as follows:
in the formula:to representThe redefined CT value of the pixel point is in the range of 0 to 255,to representThe CT value of the pixel point is located,represents the maximum CT value in the CT image,representing the minimum CT value in the CT image.
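A minimal NumPy sketch of this min-max redefinition of the CT values onto [0, 255]; the formula above is reconstructed from the description, and the function name is illustrative.

```python
import numpy as np

def redefine_ct(ct_image: np.ndarray) -> np.ndarray:
    """Map raw CT values (roughly -1000 to 1000) linearly onto [0, 255]."""
    ct_min, ct_max = float(ct_image.min()), float(ct_image.max())
    return (ct_image - ct_min) / (ct_max - ct_min) * 255.0
```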
Analysis of the CT image shows that the CT values of the region where the tumor is located differ from those of other human tissues. Within the tumor region the image appears shadow-like, and there are differences between a target pixel point and the other pixel points in its neighborhood; for pixel points located at the edge of the tumor region, the target pixel point and the remaining pixel points in the neighborhood differ and do not belong to the same class of CT values. The texture features of the tumor region are likewise expressed as differences between the CT values of the pixel points within the tumor, i.e. the CT values of the pixel points inside the tumor region also differ from one another.
This embodiment establishes a neighborhood window of fixed size and counts the differences of the CT values among the pixel points of the CT image within it. A sliding-window operation is performed on the CT image: for each target pixel point, the CT values of its neighborhood pixel points are collected, and the richness of the target pixel point is calculated from the CT values of the pixel points in the window. In the neighborhood of the target pixel point, the larger the differences of the CT values among the pixel points, the larger the richness and the better the image effect; the smaller the differences of the CT values, the smaller the richness and the less obvious the image effect. The richness $R(x,y)$ is calculated from the redefined CT value $g_j$ of the $j$-th pixel point in the window whose central pixel point is $(x,y)$, the redefined CT value $g(x,y)$ of the central pixel point, and the mean $\bar{g}$ of the redefined CT values of all pixel points on the abdominal CT image.
The sliding-window operation is performed on all pixel points in the CT image, and the richness $R(x,y)$ of each pixel point in the CT image is obtained according to the above method; the richness is calculated from the redefined CT values of the pixel points in the CT image.
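The patent gives the richness formula only in figure form, so the sketch below uses an assumed combination rule (mean absolute CT difference within the window plus the central pixel's deviation from the image mean) purely to illustrate the sliding-window computation of $R(x,y)$; the 3x3 window size is likewise an assumption.

```python
import numpy as np

def richness_map(g: np.ndarray, window: int = 3) -> np.ndarray:
    """Sliding-window richness R(x, y). The combination rule is assumed:
    mean absolute CT difference inside the window plus the central pixel's
    deviation from the global mean; it is not the patent's exact formula."""
    pad = window // 2
    padded = np.pad(g, pad, mode="edge")
    g_mean = g.mean()
    rich = np.zeros_like(g, dtype=float)
    h, w = g.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + window, x:x + window]  # neighborhood of pixel (x, y)
            center = g[y, x]
            rich[y, x] = np.abs(win - center).mean() + abs(center - g_mean)
    return rich
```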
103. And partitioning the abdominal CT image, and taking the historical tumor probability of each region on the abdominal CT image as the attention of each pixel point of the region.
In this embodiment, on the basis of partitioning the liver, the occurrence of tumors in different regions of the liver is counted by means of big-data statistics.
Analysis of a large number of multi-sequence abdominal liver CT images shows that, because of differences in patient posture during CT scanning and morphological differences between individuals, the acquired CT image sets do not share a uniform orientation, which may hinder subsequent analysis. Therefore, before the attention of each pixel point is calculated, the images in the image set must first be subjected to a rotation-translation alignment.
In this embodiment, a coordinate system is established for each image with the vertebra as the coordinate origin, and the following rotation-translation alignment is performed. The specific steps are: 1) A manually corrected CT image is taken as the standard, and a coordinate system is established with the center of the vertebral body as the coordinate origin (the pixel column with the most central abscissa is selected by scanning the pixel points of the image, the CT value sequence of that column is obtained, the region with larger CT values is located, and the most central point of that region is taken as the center). 2) A coordinate system is established in each CT image, its abscissa or ordinate axis is compared with that of the standard image, and the offset angle is calculated. 3) The coordinate axes of the image are translated and rotated according to the obtained offset angle, yielding a CT image set in a unified coordinate system.
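A brief sketch of the rotation alignment in step 3), rotating a CT slice about its detected vertebra center by the computed offset angle with OpenCV; the vertebra-detection and angle-computation steps are omitted, and the function name is illustrative.

```python
import cv2
import numpy as np

def align_to_standard(image: np.ndarray, vertebra_center: tuple, offset_angle_deg: float) -> np.ndarray:
    """Rotate a CT slice about the vertebra center so its axes match the standard image."""
    h, w = image.shape[:2]
    # Rotation matrix about an arbitrary point (the vertebra center), no scaling.
    rot = cv2.getRotationMatrix2D(vertebra_center, offset_angle_deg, 1.0)
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_LINEAR)
```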
After the coordinate system is established, the coordinates of each pixel point can be obtained. From prior knowledge, the liver is located to the upper left and upper right of the vertebra. Therefore, straight lines are constructed at angles of 30°, 60°, 120° and 150° to the positive direction of the abscissa axis, each with the origin of the coordinate system as an endpoint. The liver area of the image is thereby divided into six regions, and each region is numbered separately.
After the partition, the region in which a pixel point lies can be judged from its coordinates: the angle between the position vector of the pixel point (taken from the coordinate origin) and the positive direction of the abscissa axis determines which numbered region the pixel point belongs to. For example, if the coordinates of a pixel point satisfy the angular condition of the $i$-th region, the pixel point is judged to be a pixel point of the $i$-th region.
This embodiment adopts a big-data statistical method, with professional doctors judging the region in which each liver tumor lies. The specific steps are as follows: an accumulator is set for each partition, and the accumulators count, over all individuals in the database, the frequency with which tumors occur in each region; the frequency of tumors in each region is denoted $n_i$ (where $i$ represents I, II, …, VI), and the number of CT image sequence sets is denoted $N$. For each individual, the tumor regions of the CT images in the corresponding CT image sequence set are classified through interpretation by a professional doctor, and the regions in which tumors occur are counted for each individual in the corresponding CT image sequence, giving the frequency of tumors in each region over all individuals; the relative frequency of tumors in each region is then calculated and used as the probability of a tumor occurring in that region. The probability $P_i$ of a tumor occurring in the $i$-th region is calculated as:

$$P_i=\frac{n_i}{N}$$

where $P_i$ denotes the probability of a tumor occurring in the $i$-th region and $n_i$ denotes the number of image sequences in which a tumor occurs in the $i$-th region.
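A small sketch of the frequency count described above; `tumor_regions_per_individual` is an assumed data structure holding, for each individual's CT sequence in the database, the set of region numbers in which a tumor was found.

```python
from collections import Counter

def region_tumor_probabilities(tumor_regions_per_individual):
    """P_i = (number of sequences with a tumor in region i) / (total number of sequences)."""
    n_sequences = len(tumor_regions_per_individual)
    counts = Counter()
    for regions in tumor_regions_per_individual:  # e.g. {1, 3}: tumor seen in regions I and III
        counts.update(set(regions))
    return {region: counts[region] / n_sequences for region in range(1, 7)}

# Example with three sequences in the database:
# region_tumor_probabilities([{1}, {1, 4}, {3}])  # {1: 0.667, 2: 0.0, 3: 0.333, 4: 0.333, 5: 0.0, 6: 0.0}
```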
After the big-data statistics, the probability of a tumor occurring in each region is obtained. The higher the probability, the more likely a tumor is to appear in that region, and therefore the higher the attention of the pixel points in that region. The attention of each pixel point is thus obtained from the probability $P_i$.
The attention of each pixel point is obtained according to the region in which it lies: the attention $Z(x,y)$ of the pixel points of a region is the probability obtained from the big-data statistics, i.e. the probability $P_i$ of a tumor occurring in the $i$-th region is taken as the attention of every pixel point in the $i$-th region. For pixel points that do not belong to regions I, II, …, VI, the attention is directly set to 0.001.
In this embodiment, the image is preprocessed by means of a linear transformation, and the degree of enhancement of each pixel point is calculated from the richness $R(x,y)$ and the attention $Z(x,y)$ obtained for each pixel point of the CT image. The smaller the richness and the greater the attention, the greater the degree of enhancement; the greater the richness and the smaller the attention, the smaller the degree of enhancement. The linear transformation maps the redefined CT value $g(x,y)$ of each pixel point to the CT value of the pixel point after the linear change, i.e. the CT reconstruction value of the pixel point, with the first enhancement coefficient $k_1(x,y)$ and the second enhancement coefficient $k_2(x,y)$ as the coefficients of the linear transformation.
104. And obtaining a first enhancement coefficient of each pixel point according to the richness and the attention of each pixel point on the abdominal CT image.
The richness $R(x,y)$ and the attention $Z(x,y)$ of each pixel point have been calculated. The first enhancement coefficient of each pixel point is obtained by dividing its attention by its richness:

$$k_1(x,y)=\frac{Z(x,y)}{R(x,y)}$$
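Continuing the sketch, the first enhancement coefficient follows directly from the attention and richness maps by per-pixel division (the epsilon guard is an added assumption, not part of the patent):

```python
import numpy as np

def first_enhancement_coefficient(attention: np.ndarray, richness: np.ndarray) -> np.ndarray:
    """k1(x, y) = Z(x, y) / R(x, y); a small epsilon guards against division by zero."""
    return attention / (richness + 1e-8)
```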
105. And obtaining the adjusted CT values of all the pixel points through the first enhancement coefficient of each pixel point on the abdominal CT image and the CT values of all the pixel points on the abdominal CT image.
After the CT image is stretched by the first enhancement coefficient, the histogram of the resulting CT image is not necessarily distributed within the range [0, 255]. The image is therefore adjusted by another enhancement coefficient so that the CT values are distributed over [0, 255] as far as possible. Each pixel point in the CT image is thus first stretched by its first enhancement coefficient $k_1(x,y)$, giving the range $[a, b]$ of the CT values of all the pixel points after stretching.
106. And obtaining a second enhancement coefficient of each pixel point through the first enhancement coefficient of each pixel point and the maximum value in the adjusted CT values of all the pixel points.
For example, for a certain pixel point with richness $R(x,y)$ and attention $Z(x,y)$, the first enhancement coefficient of the pixel point is obtained from its richness and attention, i.e. $k_1(x,y)=Z(x,y)/R(x,y)$. Each pixel point in the CT image is stretched by its first enhancement coefficient, that is, the redefined CT value of each pixel point is multiplied by its first enhancement coefficient, giving the range of the stretched CT values of all the pixel points; the maximum value $b$ of this range is selected. From $b$ and the first enhancement coefficient $k_1(x,y)$ of the pixel point, the second enhancement coefficient $k_2(x,y)$ of the pixel point is obtained.
107. And reconstructing the CT value of each pixel point through the CT value, the first enhancement coefficient and the second enhancement coefficient of each pixel point on the abdominal CT image to obtain the CT reconstructed value of each pixel point, and obtaining a new abdominal CT image through the CT reconstructed values of all the pixel points.
The linear transformation of each pixel point is obtained through the above steps, and the CT value of each pixel point of the CT image is reconstructed by this linear transformation to obtain the CT reconstruction value of each pixel point, thereby obtaining the preprocessed image, i.e. the new abdominal CT image.
At this point, the enhanced CT value of each pixel point has been obtained through the linear transformation, and the enhanced image is thereby obtained.
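Putting the stretching and adjustment steps together: a hedged sketch in which the second coefficient is taken simply as a rescaling of the stretched values back into [0, 255]. The patent's exact expressions for the second enhancement coefficient and the CT reconstruction value are given only in figure form, so this rescaling rule is an assumption.

```python
import numpy as np

def enhance_image(g: np.ndarray, k1: np.ndarray) -> np.ndarray:
    """Stretch each redefined CT value by its first coefficient, then adjust the result
    so that it is distributed over [0, 255]. The adjustment rule used here is an
    assumed stand-in for the patent's second-enhancement-coefficient formula."""
    adjusted = k1 * g                       # adjusted CT values, range [a, b]
    b = adjusted.max()                      # maximum adjusted CT value
    k2 = 255.0 / b                          # assumed rescaling factor
    return np.clip(k2 * adjusted, 0, 255)   # CT reconstruction values: the new abdominal CT image
```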
108. And performing threshold segmentation on the new abdominal CT image to obtain a tumor region.
Threshold segmentation is performed on the obtained preprocessed image to extract the liver tumor mask: the OTSU threshold selection method is adopted to determine the threshold and segment the image, pixel points whose values are greater than the threshold are set to 1, and pixel points whose values are less than the threshold are set to 0, thereby obtaining the mask of the tumor region.
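The OTSU step can be reproduced with OpenCV's built-in Otsu thresholding; the input must be an 8-bit single-channel image, and the helper name is illustrative.

```python
import cv2
import numpy as np

def tumor_mask(new_ct: np.ndarray) -> np.ndarray:
    """Binarize the preprocessed image with Otsu's method: 1 above the threshold, 0 below."""
    img8 = np.clip(new_ct, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return (mask // 255).astype(np.uint8)  # values in {0, 1}

# Overlaying the mask on the CT image keeps only the tumor region for the doctor to interpret:
# tumor_ct = new_ct * tumor_mask(new_ct)
```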
The obtained mask of the tumor region is overlaid on the CT image to obtain the CT image of the tumor, which facilitates interpretation by the doctor, improves diagnostic efficiency, and further yields the position and size of the tumor, providing reliable preliminary data support and treatment assistance for determining a radio frequency ablation treatment method and treatment plan.
The invention provides an auxiliary positioning method for determining radio frequency ablation treatment, in which the richness of each pixel point is obtained from the CT values of the pixel points, the attention of each pixel point is obtained from the probability of a tumor occurring in each region in a database, and the CT reconstruction value of each pixel point is then determined, so that a new abdominal CT image is obtained and treatment assistance is provided for the subsequent determination of the radio frequency ablation treatment method.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (3)
1. An auxiliary positioning method for determining radio frequency ablation treatment, comprising:
acquiring an abdominal CT image;
the richness of each pixel point on the abdominal CT image is obtained through the CT values of each pixel point on the abdominal CT image and the neighborhood pixel points;
the CT value of each pixel point on the abdominal CT image is the redefined CT value of the pixel point;
the redefined CT value of the pixel point is expressed as:

$$g(x,y)=\frac{f(x,y)-CT_{\min}}{CT_{\max}-CT_{\min}}\times 255$$

wherein $g(x,y)$ denotes the redefined CT value of the pixel point $(x,y)$, $f(x,y)$ denotes the original CT value of the pixel point $(x,y)$, $CT_{\max}$ denotes the maximum CT value in the abdominal CT image, and $CT_{\min}$ denotes the minimum CT value in the abdominal CT image;
the richness of the pixel point is obtained from the redefined CT value $g_j$ of the $j$-th pixel point in the window whose central pixel point is $(x,y)$, the redefined CT value $g(x,y)$ of the central pixel point, and the mean $\bar{g}$ of the redefined CT values of all pixel points on the abdominal CT image, wherein $R(x,y)$ denotes the richness of the pixel point;
partitioning the abdominal CT image, and taking the historical tumor probability of each region on the abdominal CT image as the attention of each pixel point of the region;
the historical tumor probability of each region on the abdominal CT image is the historical tumor frequency of each region on the abdominal CT image;
obtaining a first enhancement coefficient of each pixel point according to the abundance and the attention of each pixel point on the abdominal CT image;
the first enhancement coefficient of the pixel point is expressed as:

$$k_1(x,y)=\frac{Z(x,y)}{R(x,y)}$$

wherein $k_1(x,y)$ denotes the first enhancement coefficient of the pixel point, $Z(x,y)$ denotes the attention of the pixel point, and $R(x,y)$ denotes the richness of the pixel point;
obtaining the adjusted CT values of all the pixel points through the first enhancement coefficient of each pixel point on the abdominal CT image and the CT values of all the pixel points on the abdominal CT image;
obtaining a second enhancement coefficient of each pixel point through the first enhancement coefficient of each pixel point and the maximum value of the adjusted CT values of all the pixel points;
the second enhancement coefficient of the pixel point is obtained from the first enhancement coefficient $k_1(x,y)$ of the pixel point and the maximum value $b$ of the adjusted CT values of all the pixel points, wherein $k_2(x,y)$ denotes the second enhancement coefficient of the pixel point and $b$ denotes the maximum value of the adjusted CT values of all the pixel points obtained from the first enhancement coefficients of the pixel points and the redefined CT values of all the pixel points on the abdominal CT image;
reconstructing the CT value of each pixel point through the CT value, the first enhancement coefficient and the second enhancement coefficient of each pixel point on the abdominal CT image to obtain the CT reconstruction value of each pixel point, and obtaining a new abdominal CT image through the CT reconstruction values of all the pixel points;
the CT reconstruction value of the pixel point is obtained as a linear transformation of the redefined CT value $g(x,y)$ of the pixel point, in which the first enhancement coefficient $k_1(x,y)$ and the second enhancement coefficient $k_2(x,y)$ of the pixel point serve as the coefficients of the linear transformation;
and performing threshold segmentation on the new abdominal CT image to obtain a tumor region.
2. An auxiliary positioning method for determining radio frequency ablation treatment according to claim 1, wherein the abdominal CT image is any one of abdominal CT images in an abdominal CT image sequence of the same person, and other abdominal CT images in the abdominal CT image sequence are processed in the same way according to the processing method of the abdominal CT image.
3. An auxiliary positioning method for determining radio frequency ablation treatment according to claim 1, wherein the method of partitioning the abdominal CT image is:
and establishing a coordinate system on the abdominal CT image, and dividing each pixel point on the abdominal CT image into regions by using the coordinates of each pixel point on the abdominal CT image to obtain the region where each pixel point is located.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210737793.XA CN114820663B (en) | 2022-06-28 | 2022-06-28 | Assistant positioning method for determining radio frequency ablation therapy |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210737793.XA CN114820663B (en) | 2022-06-28 | 2022-06-28 | Assistant positioning method for determining radio frequency ablation therapy |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114820663A CN114820663A (en) | 2022-07-29 |
| CN114820663B true CN114820663B (en) | 2022-09-09 |
Family
ID=82522620
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210737793.XA Active CN114820663B (en) | 2022-06-28 | 2022-06-28 | Assistant positioning method for determining radio frequency ablation therapy |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114820663B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116385315B (en) * | 2023-05-31 | 2023-09-08 | 日照天一生物医疗科技有限公司 | Image enhancement method and system for simulated ablation of tumor therapeutic instrument |
| CN116993628B (en) * | 2023-09-27 | 2023-12-08 | 四川大学华西医院 | A CT image enhancement system for tumor radiofrequency ablation guidance |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7522779B2 (en) * | 2004-06-30 | 2009-04-21 | Accuray, Inc. | Image enhancement method and system for fiducial-less tracking of treatment targets |
| CN100573581C (en) * | 2006-08-25 | 2009-12-23 | 西安理工大学 | Semi-automatic segmentation method for lung CT image lesions |
| DE102007028270B4 (en) * | 2007-06-15 | 2013-11-14 | Siemens Aktiengesellschaft | Method for segmenting image data to identify a liver |
| CN107464250B (en) * | 2017-07-03 | 2020-12-04 | 深圳市第二人民医院 | Automatic segmentation method of breast tumor based on 3D MRI images |
| CN108596887B (en) * | 2018-04-17 | 2020-06-02 | 湖南科技大学 | Automatic segmentation method for liver tumor region image in abdominal CT sequence image |
| CN113674281B (en) * | 2021-10-25 | 2022-02-22 | 之江实验室 | Liver CT automatic segmentation method based on deep shape learning |
- 2022-06-28: application CN202210737793.XA granted as patent CN114820663B/en (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN114820663A (en) | 2022-07-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN114820663B (en) | Assistant positioning method for determining radio frequency ablation therapy | |
| CN115345893B (en) | Ovarian tissue canceration region segmentation method based on image processing | |
| CN117974692B (en) | Ophthalmic medical image processing method based on region growing | |
| CN105488781A (en) | Dividing method based on CT image liver tumor focus | |
| CN117237342B (en) | Intelligent analysis method for respiratory rehabilitation CT image | |
| CN111340825A (en) | Method and system for generating mediastinal lymph node segmentation model | |
| CN118096794B (en) | An intelligent segmentation method based on thoracic surgery CT images | |
| CN109544528B (en) | Lung nodule image identification method and device | |
| Pisupati et al. | Segmentation of 3D pulmonary trees using mathematical morphology | |
| Kamil et al. | Analysis of tissue abnormality in mammography images using gray level co-occurrence matrix method | |
| CN118411528B (en) | Stomach CT image feature recognition and segmentation system | |
| CN114764809B (en) | Self-adaptive threshold segmentation method and device for lung CT density elevation | |
| CN118154626B (en) | A method for image processing under ultrasound guidance for nerve block anesthesia | |
| CN112541907A (en) | Image identification method, device, server and medium | |
| Mustafa et al. | Mammography image segmentation: Chan-Vese active contour and localised active contour approach | |
| TWI629046B (en) | Progressive medical gray-level image subject segmentation method | |
| Priyadarsini et al. | Automatic Liver Tumor Segmentation in CT Modalities Using MAT-ACM. | |
| CN120107243B (en) | Pulmonary vascular tree reconstruction method based on multi-scale watershed segmentation | |
| Supe et al. | Image processing for medical image analysis: a review | |
| Hossain et al. | Brain tumor location identification and patient observation from MRI images | |
| CN120298437B (en) | Computer-aided plastic surgery navigation method and system | |
| CN120163928B (en) | Chest image three-dimensional reconstruction method for early lung cancer patient | |
| Manikandan et al. | Lobar fissure extraction in isotropic CT lung images—an application to cancer identification | |
| Gao et al. | Classification of pulmonary nodules by using improved convolutional neural networks | |
| Jose et al. | Liver cancer detection based on various sustainable segmentation techniques for CT images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |