
CN118383798B - Method and system for identifying throat space occupying lesions based on image identification - Google Patents

Method and system for identifying throat space occupying lesions based on image identification

Info

Publication number
CN118383798B
CN118383798B (application CN202410522926.0A)
Authority
CN
China
Prior art keywords
dimensional
path
mass
cyst
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410522926.0A
Other languages
Chinese (zh)
Other versions
CN118383798A (en)
Inventor
王�华
孙蕾
李小鹏
王香茹
芦欣欣
赵方玺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202410522926.0A
Publication of CN118383798A
Application granted
Publication of CN118383798B
Legal status: Active
Anticipated expiration

Classifications

    • A61B 8/085 Clinical applications involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/483 Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5269 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 5/70 Denoising; Smoothing
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G06V 10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30196 Human being; Person
    • G06T 2210/41 Medical
    • G06T 2211/448 Computed tomography involving metal artefacts, streaking artefacts, beam hardening or photon starvation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Multimedia (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Vascular Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the field of medical image recognition technology, and specifically to a method and system for identifying laryngeal space-occupying lesions based on image recognition. An ultrasonic probe is moved along a preset path over the patient's larynx to collect two-dimensional ultrasound images; using the scanning position and scanning direction, the pixels of the images acquired at different scan points are mapped into a spatial three-dimensional coordinate system to obtain a three-dimensional voxel model of the patient's larynx. Edges are extracted from the voxel model and fed into detection networks to obtain the cyst center coordinate position, cyst shape, mass center coordinate position, and mass shape. Connecting the cyst center and the mass center yields a path, and a corrected mass center coordinate position is obtained from this path and the artifact at the mass center position. The invention reconstructs a three-dimensional image from two-dimensional ultrasound images and, when a mass artifact is found in the larynx, adjusts the ultrasound frequency to form a new artifact; the actual coordinate parameters of the mass are then solved from the multiple measurements.

Description

Method and system for identifying throat space occupying lesions based on image identification
Technical Field
The invention relates to the technical field of medical image recognition, in particular to a method and a system for recognizing throat space occupying lesions based on image recognition.
Background
Laryngeal space-occupying lesions include benign lesions such as vocal cord polyps and laryngeal contact granulomas, precancerous lesions such as vocal cord leukoplakia and laryngeal keratosis, and malignant lesions such as laryngeal carcinoma. Ultrasonic imaging plays an important role in medical diagnosis: it draws an image of the examined object from parameters of the transmission and reception of high-frequency sound waves. When sound waves pass through the human body, differences in acoustic impedance between tissues cause reflections; these reflected waves are received by the probe and converted into images from which a physician can diagnose disease. For example, patent publication CN115471484A discloses a method and system for processing intravascular ultrasound images frame by frame, calculating the translational and rotational components introduced during imaging from the translation and rotation between adjacent frames caused by the heartbeat, so as to correct the acquired images. However, current ultrasonic imaging suffers from refraction artifacts, a common artifact caused by the refraction of sound waves passing through interfaces between different media. Such artifacts can create false structures in the image or shift the apparent position of real anatomy, interfering with diagnostic accuracy.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide a method and a system for identifying throat space-occupying lesions based on image recognition. A three-dimensional image is reconstructed from two-dimensional ultrasound images; when a mass artifact exists in the throat, the ultrasonic frequency is adjusted to form a new artifact, and the actual coordinate parameters of the mass are obtained by solving over multiple measurements, thereby addressing the refraction artifacts that throat space-occupying lesions produce in ultrasonic image reconstruction.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
the method for identifying the throat space occupying lesion based on image identification comprises the following steps:
(1) Moving an ultrasonic probe over the patient's throat along a preset path and acquiring two-dimensional ultrasound images, wherein the preset path includes but is not limited to linear, rotational or free movement;
(2) Filtering and enhancing the two-dimensional ultrasound images of the patient's throat, then mapping the pixels of the images acquired at different scan points into a spatial three-dimensional coordinate system using the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat; interpolating the voxel blank positions in the model, calculating the color and transparency of each voxel, obtaining the path along which a ray from the observation point penetrates the voxel model, and accumulating the transparency and color of each voxel on that path to render the three-dimensional image of the patient's throat;
(3) Obtaining the three-dimensional gradient of the three-dimensional image of the patient's throat and detecting the edges of the image from that gradient; inputting the edges into a cyst detection network to obtain the cyst form (the cyst center coordinate position and the cyst boundary) and into a mass detection network to obtain the mass form (the mass center coordinate position and the mass boundary). When a cyst form and a mass form exist simultaneously, the cyst center coordinate position and the mass center coordinate position are connected to obtain a first path; the ultrasonic probe then changes the ultrasonic frequency and moves over the patient's throat along the preset path to construct a new three-dimensional image of the throat and identify the cyst form and mass form again, and the cyst center coordinate position and the mass center coordinate position are connected again to obtain a second path. The mass correction center coordinate position is obtained from the first path and the second path, and the throat space-occupying lesion parameters are obtained from the mass correction center coordinate position and the mass boundary. The cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the mass detection network is a mass edge feature classifier established through a neural network algorithm.
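The rendering described in step (2), accumulating transparency and color along a viewing ray through the voxel model, can be sketched as a front-to-back compositing loop. This is a minimal illustration rather than the patent's implementation; the RGBA voxel layout, step size, and early-termination threshold are assumptions:

```python
import numpy as np

def composite_ray(volume_rgba, entry, direction, step=1.0, n_steps=64):
    """Front-to-back alpha compositing of one viewing ray through an
    (X, Y, Z, 4) RGBA voxel grid: returns accumulated color and opacity."""
    color, alpha = np.zeros(3), 0.0
    pos = np.asarray(entry, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, volume_rgba.shape[:3])):
            break  # the ray has left the volume
        r, g, b, a = volume_rgba[idx]
        color += (1.0 - alpha) * a * np.array([r, g, b])  # accumulated color
        alpha += (1.0 - alpha) * a                        # accumulated opacity
        if alpha >= 0.99:
            break  # ray is effectively opaque; deeper voxels are invisible
        pos += step * d
    return color, alpha
```

Casting one such ray per screen pixel yields the rendered three-dimensional image; because opacity accumulates front to back, voxels behind nearly opaque tissue contribute nothing.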
Further, the method for filtering and enhancing the two-dimensional ultrasonic image of the throat of the patient comprises the following steps:
A Gaussian kernel of size n×n with standard deviation σ is preset; the kernel value G(i, j) at kernel position (i, j) of the Gaussian kernel is:
G(i, j) = (1/(2πσ²))·exp(−(i² + j²)/(2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and a convolution operation is performed between the normalized kernel and each pixel of the two-dimensional ultrasound image: the kernel is aligned in turn with each pixel of the image, and the pixel value I(u, v) at coordinate (u, v) is denoised and enhanced to obtain the corresponding pixel value I′(u, v):
I′(u, v) = Σ_{i=−k..k} Σ_{j=−k..k} G(i, j)·I(u+i, v+j);
where k is the radius of the Gaussian kernel (k = (n−1)/2) and I(u+i, v+j) is the pixel value at the position (u+i, v+j) offset from the current coordinate (u, v); the convolution operation is repeated until every pixel value I(u, v) of the two-dimensional ultrasound image has been denoised and enhanced into a pixel value I′(u, v).
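The filtering step above can be sketched directly from the two formulas: build the kernel, normalize it, then convolve it with every pixel. A minimal NumPy version, with edge padding chosen as an assumption for handling border pixels:

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """n x n Gaussian kernel G(i, j), normalized so its values sum to 1."""
    k = n // 2  # kernel radius
    i, j = np.mgrid[-k:k + 1, -k:k + 1]
    g = np.exp(-(i**2 + j**2) / (2.0 * sigma**2))
    return g / g.sum()

def gaussian_denoise(image, n=5, sigma=1.5):
    """Convolve every pixel I(u, v) with the normalized kernel to get I'(u, v)."""
    k = n // 2
    kernel = gaussian_kernel(n, sigma)
    padded = np.pad(image, k, mode="edge")  # replicate borders (an assumption)
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for u in range(h):
        for v in range(w):
            # weighted average of the n x n neighborhood around (u, v)
            out[u, v] = np.sum(kernel * padded[u:u + n, v:v + n])
    return out
```

A uniform image passes through unchanged, since the kernel weights sum to 1; noise is averaged down while large structures such as lesion boundaries are preserved in position.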
Further, the method for mapping pixels of two-dimensional ultrasonic images corresponding to different scanning points into a space three-dimensional coordinate system through scanning positions and scanning directions to obtain a three-dimensional voxel model of the throat of the patient comprises the following steps:
A translation vector t is obtained from the change in scanning position, and a rotation vector r from the change in scanning direction. The coordinates (u, v) of a pixel of the two-dimensional ultrasound image are mapped to the coordinates (u′, v′, w′) of a voxel in the spatial three-dimensional coordinate system as:
[u′, v′, w′, 1]ᵀ = T · [u, v, 0, 1]ᵀ;
where T is the transformation matrix
T = [ R  t ]
    [ 0  1 ],
with R the 3×3 rotation matrix corresponding to the rotation vector r and t the translation vector.
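The pixel-to-voxel mapping can be sketched with a homogeneous 4×4 transform. In this sketch the rotation is passed in as a ready-made 3×3 matrix and the in-plane pixel spacing is an assumed parameter:

```python
import numpy as np

def make_transform(rotation, translation):
    """4x4 homogeneous transform T = [R | t; 0 0 0 1] from a 3x3 rotation
    matrix (probe scan direction) and a 3-vector translation (scan position)."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def map_pixel(T, u, v, pixel_spacing=1.0):
    """Map image pixel (u, v), lying in the probe's image plane at w = 0,
    to voxel coordinates (u', v', w') in the fixed 3D frame."""
    p = T @ np.array([u * pixel_spacing, v * pixel_spacing, 0.0, 1.0])
    return p[:3]
```

With the identity rotation the mapping is a pure shift by the scan position; rotating the probe rotates the whole image plane before the shift is applied.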
Further, the method for interpolating voxel blanks in the three-dimensional voxel model comprises the following steps:
Obtaining a coordinate point P(u′, v′, w′) at a voxel blank position in the spatial three-dimensional coordinate system, and obtaining the voxel values adjacent to P(u′, v′, w′) in three mutually perpendicular directions; the interpolated voxel value V_P at P(u′, v′, w′) is:
V_P(u′, v′, w′) = V_000(1−u′)(1−v′)(1−w′) + V_001(1−u′)(1−v′)w′ + V_010(1−u′)v′(1−w′) + V_011(1−u′)v′w′ + V_100·u′(1−v′)(1−w′) + V_101·u′(1−v′)w′ + V_110·u′v′(1−w′) + V_111·u′v′w′;
where each V_xyz is the voxel value at a corner of the surrounding cell, with subscript 0 marking the minimum and subscript 1 the maximum along the x, y, and z axes: V_000 is the corner with minimum x, y, and z, V_001 the corner with minimum x and y and maximum z, V_010 the corner with minimum x and z and maximum y, V_011 the corner with minimum x and maximum y and z, V_100 the corner with maximum x and minimum y and z, V_101 the corner with maximum x and z and minimum y, V_110 the corner with maximum x and y and minimum z, and V_111 the corner with maximum x, y, and z.
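The trilinear formula above translates line by line into code; `corners[x][y][z]` plays the role of V_xyz, and (u, v, w) are the fractional coordinates of P inside the cell:

```python
def trilinear(corners, u, v, w):
    """Trilinear interpolation inside a unit voxel cell.
    corners[x][y][z] holds V_xyz for x, y, z in {0, 1}; (u, v, w) are the
    fractional coordinates of the blank point P within the cell."""
    V = corners
    return (V[0][0][0] * (1-u) * (1-v) * (1-w) +
            V[0][0][1] * (1-u) * (1-v) * w +
            V[0][1][0] * (1-u) * v * (1-w) +
            V[0][1][1] * (1-u) * v * w +
            V[1][0][0] * u * (1-v) * (1-w) +
            V[1][0][1] * u * (1-v) * w +
            V[1][1][0] * u * v * (1-w) +
            V[1][1][1] * u * v * w)
```

At a corner the formula returns that corner's value exactly, and a cell whose eight corners are equal interpolates to that constant, so filled blanks blend smoothly into the surrounding voxels.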
Further, the method for obtaining the mass correction center coordinate position through the first path and the second path comprises the following steps:
Acquiring the incident point positions L1(u′, v′, w′) and L2(u′, v′, w′) of the ultrasonic waves corresponding to the first path and the second path, and recording the direction vectors of the ultrasonic waves along the first and second paths as e1 and e2. The mass center coordinate positions corresponding to the first and second paths are recorded as P1(u′, v′, w′) and P2(u′, v′, w′), the mass correction center coordinate position as P(u′, v′, w′), and the refraction point positions where the ultrasonic waves penetrate the cyst on the first and second paths as R1(u′, v′, w′) and R2(u′, v′, w′). The ultrasonic refraction equation set is established as:
P = R1 + d1·e1′,
P = R2 + d2·e2′;
where e1′ and e2′ are the refracted direction vectors of e1 and e2 at R1 and R2, d1 is the distance from R1(u′, v′, w′) to P1(u′, v′, w′), and d2 is the distance from R2(u′, v′, w′) to P2(u′, v′, w′); solving the ultrasonic refraction equation set yields the mass correction center coordinate position P(u′, v′, w′).
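The equation set is solved for the single point P consistent with both scans. As an illustrative stand-in (an assumption, not the patented solution), one can treat each scan as defining a ray from its refraction point along a known refracted direction and take the least-squares point closest to both rays:

```python
import numpy as np

def corrected_center(R1, e1, R2, e2):
    """Least-squares intersection of two 3D rays: ray i starts at refraction
    point R_i and runs along (refracted) direction e_i; returns the point P
    minimizing the summed squared distance to both rays."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for R, e in ((R1, e1), (R2, e2)):
        e = np.asarray(e, dtype=float)
        e /= np.linalg.norm(e)
        M = np.eye(3) - np.outer(e, e)  # projector onto the ray's normal plane
        A += M
        b += M @ np.asarray(R, dtype=float)
    return np.linalg.solve(A, b)
```

With exact, noise-free rays the result is their intersection; with measurement noise it is the compromise point between the two nearly intersecting rays.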
The invention also provides an identification system of throat space occupying lesion based on image identification, which comprises:
The image acquisition module is used for moving the ultrasonic probe over the patient's throat along a preset path and acquiring two-dimensional ultrasound images, wherein the preset path includes but is not limited to linear, rotational or free movement;
The three-dimensional reconstruction module is used for filtering and enhancing the two-dimensional ultrasound images of the patient's throat and then mapping the pixels of the images acquired at different scan points into a spatial three-dimensional coordinate system using the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat;
The throat space-occupying lesion recognition module is used for obtaining the three-dimensional gradient of the three-dimensional image of the patient's throat and detecting the edges of the image from that gradient; the edges are input into a cyst detection network to obtain the cyst form (the cyst center coordinate position and the cyst boundary) and into a mass detection network to obtain the mass form (the mass center coordinate position and the mass boundary). When a cyst form and a mass form exist simultaneously, the cyst center coordinate position and the mass center coordinate position are connected to obtain a first path; the ultrasonic probe then changes the ultrasonic frequency and moves over the patient's throat along the preset path to construct a new three-dimensional image of the throat and identify the cyst form and mass form again, and the cyst center coordinate position and the mass center coordinate position are connected again to obtain a second path. The mass correction center coordinate position is obtained from the first path and the second path, and the throat space-occupying lesion parameters are obtained from the mass correction center coordinate position and the mass boundary. The cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the mass detection network is a mass edge feature classifier established through a neural network algorithm.
Further, the system further comprises:
The image filtering module is used for presetting a Gaussian kernel of size n×n with standard deviation σ; the kernel value G(i, j) at kernel position (i, j) of the Gaussian kernel is:
G(i, j) = (1/(2πσ²))·exp(−(i² + j²)/(2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and a convolution operation is performed between the normalized kernel and each pixel of the two-dimensional ultrasound image: the kernel is aligned in turn with each pixel of the image, and the pixel value I(u, v) at coordinate (u, v) is denoised and enhanced to obtain the corresponding pixel value I′(u, v):
I′(u, v) = Σ_{i=−k..k} Σ_{j=−k..k} G(i, j)·I(u+i, v+j);
where k is the radius of the Gaussian kernel (k = (n−1)/2) and I(u+i, v+j) is the pixel value at the position (u+i, v+j) offset from the current coordinate (u, v); the convolution operation is repeated until every pixel value I(u, v) of the two-dimensional ultrasound image has been denoised and enhanced into a pixel value I′(u, v).
Further, the system further comprises:
The coordinate transformation module is used for obtaining a translation vector t from the change in scanning position and a rotation vector r from the change in scanning direction; the coordinates (u, v) of a pixel of the two-dimensional ultrasound image are mapped to the coordinates (u′, v′, w′) of a voxel in the spatial three-dimensional coordinate system as:
[u′, v′, w′, 1]ᵀ = T · [u, v, 0, 1]ᵀ;
where T is the transformation matrix
T = [ R  t ]
    [ 0  1 ],
with R the 3×3 rotation matrix corresponding to the rotation vector r and t the translation vector.
Further, the system further comprises:
The interpolation module is used for obtaining a coordinate point P(u′, v′, w′) at a voxel blank position in the spatial three-dimensional coordinate system and obtaining the voxel values adjacent to P(u′, v′, w′) in three mutually perpendicular directions; the interpolated voxel value V_P at P(u′, v′, w′) is:
V_P(u′, v′, w′) = V_000(1−u′)(1−v′)(1−w′) + V_001(1−u′)(1−v′)w′ + V_010(1−u′)v′(1−w′) + V_011(1−u′)v′w′ + V_100·u′(1−v′)(1−w′) + V_101·u′(1−v′)w′ + V_110·u′v′(1−w′) + V_111·u′v′w′;
where each V_xyz is the voxel value at a corner of the surrounding cell, with subscript 0 marking the minimum and subscript 1 the maximum along the x, y, and z axes: V_000 is the corner with minimum x, y, and z, V_001 the corner with minimum x and y and maximum z, V_010 the corner with minimum x and z and maximum y, V_011 the corner with minimum x and maximum y and z, V_100 the corner with maximum x and minimum y and z, V_101 the corner with maximum x and z and minimum y, V_110 the corner with maximum x and y and minimum z, and V_111 the corner with maximum x, y, and z.
Further, the system further comprises:
The artifact correction module is configured to obtain the incident point positions L1(u′, v′, w′) and L2(u′, v′, w′) of the ultrasonic waves corresponding to the first path and the second path, and to record the direction vectors of the ultrasonic waves along the first and second paths as e1 and e2. The mass center coordinate positions corresponding to the first and second paths are recorded as P1(u′, v′, w′) and P2(u′, v′, w′), the mass correction center coordinate position as P(u′, v′, w′), and the refraction point positions where the ultrasonic waves penetrate the cyst on the first and second paths as R1(u′, v′, w′) and R2(u′, v′, w′). The ultrasonic refraction equation set is established as:
P = R1 + d1·e1′,
P = R2 + d2·e2′;
where e1′ and e2′ are the refracted direction vectors of e1 and e2 at R1 and R2, d1 is the distance from R1(u′, v′, w′) to P1(u′, v′, w′), and d2 is the distance from R2(u′, v′, w′) to P2(u′, v′, w′); solving the ultrasonic refraction equation set yields the mass correction center coordinate position P(u′, v′, w′).
Compared with the prior art, the invention has the beneficial effects that:
(1) The two-dimensional ultrasound image is filtered and enhanced before interpolation, so the data are smoother and more accurate.
(2) A convolution operation is performed on each pixel of the two-dimensional ultrasound image, suppressing image aliasing, so the quality of the processed image is better than that of an image reconstructed from the original signal.
(3) The invention reconstructs a three-dimensional image from two-dimensional ultrasound images; when a mass artifact exists in the throat, the ultrasonic frequency is adjusted to form a new artifact, and the actual coordinate parameters of the mass are obtained by solving over multiple measurements.
In summary, the invention reconstructs a three-dimensional image from two-dimensional ultrasound images, adjusts the ultrasonic frequency to form a new artifact when a mass artifact is found in the throat, and obtains the actual coordinate parameters of the mass by measuring and solving several times, thereby solving the problem that throat space-occupying lesions produce refraction artifacts in ultrasonic image reconstruction.
Drawings
Fig. 1 is a block diagram of the method for identifying laryngeal space-occupying lesions based on image identification in embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the ultrasound refraction artifact generation according to embodiment 1 of the present invention.
Fig. 3 is a block diagram of a system for recognizing laryngeal occupancy lesions based on image recognition in accordance with embodiment 2 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Before describing the embodiments, the application scenario of the invention should be explained. The invention addresses refraction artifacts in ultrasonic medical imaging. For example, a cyst forms in the throat of a patient with a laryngeal space-occupying lesion; the core of the cyst is effusion, and refraction occurs when ultrasonic waves penetrate the cyst. As shown in fig. 2, part of the ultrasonic wave continues forward after passing through the cyst fluid until it strikes the hard-cored mass and is reflected; the reflected wave then passes back through the cyst fluid along its original path to the ultrasonic receiving device. After the reflected wave is collected, a laryngeal image is drawn, but the mass appears in that image at the first position 1, while the actual mass lies at the second position 2.
As shown in fig. 1, the present embodiment provides a method for identifying laryngeal space-occupying lesions based on image identification, the method including:
Moving an ultrasonic probe in a patient's throat in a preset path including but not limited to linear, rotational or free movement and acquiring a two-dimensional ultrasonic image;
After filtering and enhancing a two-dimensional ultrasonic image of the throat of a patient, mapping pixels of the two-dimensional ultrasonic image corresponding to different scanning points into a space three-dimensional coordinate system through scanning positions and scanning directions to obtain a three-dimensional voxel model of the throat of the patient;
Obtaining the three-dimensional gradient of the three-dimensional image of the patient's throat and detecting the edges of the image from that gradient; inputting the edges into a cyst detection network to obtain the cyst form (the cyst center coordinate position and the cyst boundary) and into a mass detection network to obtain the mass form (the mass center coordinate position and the mass boundary). When a cyst form and a mass form exist simultaneously, the cyst center coordinate position and the mass center coordinate position are connected to obtain a first path; the ultrasonic probe then changes the ultrasonic frequency and moves over the patient's throat along the preset path to construct a new three-dimensional image of the throat and identify the cyst form and mass form again, and the cyst center coordinate position and the mass center coordinate position are connected again to obtain a second path. The mass correction center coordinate position is obtained from the first path and the second path, and the throat space-occupying lesion parameters are obtained from the mass correction center coordinate position and the mass boundary. The cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the mass detection network is a mass edge feature classifier established through a neural network algorithm.
Illustratively, a patient visits a hospital for laryngeal discomfort, and the physician examines the larynx with ultrasound to identify possible space-occupying lesions. When a cyst exists in the patient's throat, however, the ultrasonic waves are refracted as they penetrate the cyst effusion, so the imaged position of the mass is an artifact that interferes with diagnosis. The ultrasonic probe is first moved along preset paths over the patient's throat, including linear, rotational, and free movement, to acquire two-dimensional ultrasound images, and the position and orientation of the probe are recorded at each scan point. The acquired images are enhanced with Gaussian filtering to remove noise and improve image quality. Using the enhanced two-dimensional images together with the scanning position and direction information, the data are reconstructed by pixel mapping into a three-dimensional voxel model of the patient's throat; voxel blank positions in the model are filled, and the color and transparency of each voxel are calculated. Edges are detected from the three-dimensional gradient information and fed into specially designed neural networks to detect the cyst form and the mass form respectively; these networks distinguish the specific locations and boundaries of cysts and masses from the features of the ultrasound image. After the cyst and mass are identified, a path analysis is performed on their center coordinates: the path by which the ultrasonic wave penetrates the cyst and reaches the mass is first determined, then the ultrasonic frequency is changed and data are collected again, and the true position of the mass is calculated by comparing the change in the mass center coordinates between the two scan results.
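The edge extraction used above, thresholding the magnitude of the three-dimensional gradient, can be sketched with central differences; the gradient operator and the threshold value are assumed choices for illustration:

```python
import numpy as np

def edge_voxels(volume, threshold):
    """Mark edge voxels of a 3D volume: compute the central-difference
    gradient along each axis and threshold its magnitude."""
    gx, gy, gz = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)  # 3D gradient magnitude per voxel
    return mag > threshold
```

The resulting boolean mask outlines tissue boundaries such as cyst and mass surfaces, and is what would be passed to the edge feature classifiers.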
Further, the method for filtering and enhancing the two-dimensional ultrasonic image of the throat of the patient comprises the following steps:
A Gaussian kernel of size n×n and a standard deviation σ are preset. With (i, j) denoting the offset from the kernel center, the kernel value G(i, j) at kernel position (i, j) is:
G(i, j) = (1 / (2πσ²)) · exp(-(i² + j²) / (2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and the normalized Gaussian kernel is convolved with each pixel of the two-dimensional ultrasonic image. The convolution operation aligns the Gaussian kernel with the pixels of the two-dimensional ultrasonic image in turn, and the pixel value I(u, v) at coordinates (u, v) is denoised and enhanced into the corresponding pixel value I'(u, v):
I'(u, v) = Σ_{i=-k}^{k} Σ_{j=-k}^{k} G(i, j) · I(u+i, v+j);
where k is the radius of the Gaussian kernel and I(u+i, v+j) is the pixel value at the offset position (u+i, v+j) relative to the current coordinates (u, v). The convolution operation is repeated until all pixel values I(u, v) of the two-dimensional ultrasonic image have been denoised and enhanced into pixel values I'(u, v).
Illustratively, a two-dimensional image of the patient's throat is acquired by the ultrasound probe, and the image is filter-enhanced using, for example, a 5×5 Gaussian kernel with a suitable standard deviation such as σ = 1.5, which effectively removes random noise while preserving important structural features such as the edges of a tumor or cyst. Specifically, each pixel value is replaced by a weighted average of its neighboring pixels, with the weights determined by the Gaussian kernel, smoothing the image while enhancing it.
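The kernel construction and convolution described above can be sketched in NumPy; the function names, the reflect-padding choice at image borders, and the default parameters are assumptions for illustration:

```python
import numpy as np

def gaussian_kernel(n=5, sigma=1.5):
    # Kernel value G(i, j) for offsets (i, j) from the center,
    # then normalized so the kernel values sum to 1.
    k = n // 2
    i, j = np.mgrid[-k:k + 1, -k:k + 1]
    g = np.exp(-(i**2 + j**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()

def gaussian_filter(image, n=5, sigma=1.5):
    # Replace every pixel I(u, v) by the kernel-weighted average of its
    # neighborhood (borders handled by reflection padding).
    g = gaussian_kernel(n, sigma)
    k = n // 2
    padded = np.pad(image, k, mode="reflect")
    out = np.zeros_like(image, dtype=float)
    for u in range(image.shape[0]):
        for v in range(image.shape[1]):
            out[u, v] = np.sum(g * padded[u:u + n, v:v + n])
    return out
```

Because the kernel is normalized, a constant image passes through unchanged, which is a quick sanity check for the implementation.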
Further, the method for mapping pixels of two-dimensional ultrasonic images corresponding to different scanning points into a space three-dimensional coordinate system through scanning positions and scanning directions to obtain a three-dimensional voxel model of the throat of the patient comprises the following steps:
A translation vector is obtained from the change in scanning position, and a rotation vector is obtained from the change in scanning direction. The coordinates (u, v) of a pixel of the two-dimensional ultrasound image are mapped to the coordinates (u', v', w') of a voxel in the spatial three-dimensional coordinate system, where (u', v', w') is calculated as:
Wherein T is a transformation matrix, and the transformation matrix T is:
Illustratively, a three-dimensional voxel model of the patient's throat is constructed from the enhanced two-dimensional images and the scanning position and direction information recorded for each image. Each pixel of a two-dimensional image is mapped onto a voxel in three-dimensional space; the mapping computes the exact position of each pixel in three-dimensional space by a geometric transformation using the pre-recorded probe position and orientation data. Blank regions in the three-dimensional model are filled by interpolation, estimating the color and transparency of each blank voxel from the colors and positions of the known voxels.
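The pixel-to-voxel mapping can be sketched as a rigid transform. Since the patent's transformation matrix T appears only as a figure, this sketch assumes the rotation is supplied as a 3×3 matrix R and the translation as a vector t, which is one common realization of such a mapping:

```python
import numpy as np

def pixel_to_voxel(u, v, R, t, spacing=1.0):
    # Map image-plane pixel (u, v) (taken as w = 0 in the probe frame)
    # into the spatial 3-D coordinate system by a rigid transform p' = R p + t.
    p = np.array([u * spacing, v * spacing, 0.0])
    return R @ p + t

def homogeneous_matrix(R, t):
    # 4x4 transformation matrix T combining rotation R (3x3) and translation t (3,),
    # so that [u', v', w', 1]^T = T [u, v, 0, 1]^T.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Applying T to the homogeneous pixel coordinate gives the same result as the direct rigid transform, which ties the two forms together.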
Further, the method for interpolating voxel blanks in the three-dimensional voxel model comprises the following steps:
The coordinate point P(u', v', w') at a voxel blank in the spatial three-dimensional coordinate system is obtained, and the voxel values adjacent to P(u', v', w') in three mutually perpendicular directions are obtained; the interpolated voxel value VP of P(u', v', w') is:
VP(u′,v′,w′)=V000(1-u′)(1-v′)(1-w′)+V001(1-u′)(1-v′)w′+V010(1-u′)v′(1-w′)+V011(1-u′)v′w′+V100u′(1-v′)(1-w′)+V101u′(1-v′)w′+V110u′v′(1-w′)+V111u′v′w′;
where V000 is the voxel value at the corner with the minimum x, y, and z coordinates; V001 at the maximum z and minimum x and y; V010 at the maximum y and minimum x and z; V011 at the maximum y and z and minimum x; V100 at the maximum x and minimum y and z; V101 at the maximum x and z and minimum y; V110 at the maximum x and y and minimum z; and V111 at the maximum x, y, and z.
Illustratively, the enhanced two-dimensional ultrasound images and their corresponding scan positions and orientations are used to map the two-dimensional images into three-dimensional space, organizing the pixels of each image into a continuous three-dimensional voxel model; the scanning position and direction information determines the exact location of each pixel in three-dimensional space. Because the scan points are discrete, blank regions may remain in the model. Interpolation is used to estimate and fill the voxel values of these blank regions: the values of the known voxels surrounding a blank voxel are obtained, and the value of the blank voxel is computed by interpolation.
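The corner-weighted formula above is standard trilinear interpolation and can be written directly; keying the corner values V000 … V111 by (x, y, z) bits is an illustrative convention, not from the patent:

```python
def trilinear(corners, u, v, w):
    # corners: dict keyed by (x, y, z) bits, e.g. corners[(0, 0, 0)] = V000.
    # Implements the patent's interpolation formula with (u, v, w) in [0, 1],
    # measured as fractional offsets inside the corner cell.
    V = corners
    return (V[(0, 0, 0)] * (1 - u) * (1 - v) * (1 - w)
          + V[(0, 0, 1)] * (1 - u) * (1 - v) * w
          + V[(0, 1, 0)] * (1 - u) * v * (1 - w)
          + V[(0, 1, 1)] * (1 - u) * v * w
          + V[(1, 0, 0)] * u * (1 - v) * (1 - w)
          + V[(1, 0, 1)] * u * (1 - v) * w
          + V[(1, 1, 0)] * u * v * (1 - w)
          + V[(1, 1, 1)] * u * v * w)
```

Since the eight weights sum to 1, a blank voxel surrounded by equal-valued neighbors takes exactly that value, and the interpolant reproduces each corner value at the corresponding corner.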
Further, the method for obtaining the tumor correction center coordinate position through the first path and the second path comprises the following steps:
The incident point positions L1(u', v', w') and L2(u', v', w') at which the ultrasonic waves enter the cyst along the first path and the second path are obtained, and the direction vectors of the ultrasonic waves along the first path and the second path are recorded. The mass center coordinate positions corresponding to the first path and the second path are denoted P1(u', v', w') and P2(u', v', w'), the corrected mass center coordinate position is denoted P(u', v', w'), and the refraction point positions at which the ultrasonic waves penetrate the cyst along the first path and the second path are denoted R1(u', v', w') and R2(u', v', w'). The system of ultrasonic refraction equations is established as:
Wherein d 1 is the distance from R 1 (u ', v ', w ') to P 1 (u ', v ', w '), d 2 is the distance from R 2 (u ', v ', w ') to P 2 (u ', v ', w '), and the mass correction center coordinate position is calculated by the ultrasonic refraction equation set and is recorded as P (u ', v ', w ').
Illustratively, as shown in FIG. 2, accurately locating a tumor requires correcting the artifacts caused by refraction of the ultrasound in the cyst fluid. The displayed tumor position is falsely shifted because the propagation direction of the ultrasonic waves changes as they pass through different media, such as cyst fluid and the surrounding tissue. The method re-collects data over the cyst and tumor region by changing the position of the ultrasonic probe and adjusting the scanning parameters, and records the specific path through the cyst to the tumor, including the positions of the incidence point 3 and the refraction point 4 of the ultrasonic waves. By comparing the scan results before and after the parameter adjustment, the positional deviation due to refraction can be identified.
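The patent's refraction equation system itself appears only as a figure. One hedged way to realize the final step, recovering the true mass center from the two refracted rays P = R_i + d_i·e_i, is a least-squares intersection of the rays; the assumption here is that the refracted direction vectors e_i are already known from the refraction geometry:

```python
import numpy as np

def ray_intersection(R1, e1, R2, e2):
    # Least-squares point closest to the two rays P = R_i + d_i * e_i.
    # Solves for (d1, d2) minimizing |(R1 + d1 e1) - (R2 + d2 e2)|^2,
    # then returns the midpoint of the two closest points as the
    # corrected center estimate.
    e1 = np.asarray(e1, float) / np.linalg.norm(e1)
    e2 = np.asarray(e2, float) / np.linalg.norm(e2)
    A = np.stack([e1, -e2], axis=1)                      # 3x2 system matrix
    b = np.asarray(R2, float) - np.asarray(R1, float)
    (d1, d2), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = np.asarray(R1, float) + d1 * e1
    p2 = np.asarray(R2, float) + d2 * e2
    return (p1 + p2) / 2.0
```

When the two refracted rays truly meet at the mass center, the least-squares solution returns that intersection exactly; with measurement noise it returns the midpoint of the rays' closest approach.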
Example 2:
Based on the same inventive concept, as shown in FIG. 3, this embodiment further provides a system for identifying laryngeal space-occupying lesions based on image recognition, the system comprising:
The image acquisition module is used for moving the ultrasonic probe at the throat of a patient in a preset path and acquiring a two-dimensional ultrasonic image, wherein the preset path comprises but is not limited to linear, rotary or free movement;
The three-dimensional reconstruction module is used for carrying out filtering enhancement on the two-dimensional ultrasonic image of the throat of the patient, and then mapping pixels of the two-dimensional ultrasonic image corresponding to different scanning points into a space three-dimensional coordinate system through scanning positions and scanning directions to obtain a three-dimensional voxel model of the throat of the patient;
The laryngeal space-occupying lesion recognition module is used for obtaining the three-dimensional gradient of the three-dimensional image of the patient's throat, detecting the edges of the three-dimensional image through the three-dimensional gradient, inputting the edges into the cyst detection network to obtain a cyst form and into the tumor detection network to obtain a tumor form, wherein the cyst form comprises a cyst center coordinate position and a cyst boundary, and the tumor form comprises a tumor center coordinate position and a tumor boundary; when the cyst form and the tumor form exist simultaneously, the cyst center coordinate position and the tumor center coordinate position are connected to obtain a first path; the ultrasonic probe changes the ultrasonic frequency and moves along the preset path in the patient's throat again to construct the three-dimensional image of the patient's throat and identify the cyst form and the tumor form; the cyst center coordinate position and the tumor center coordinate position are connected again to obtain a second path; the tumor-corrected center coordinate position is obtained through the first path and the second path, and the laryngeal space-occupying lesion parameters are obtained through the tumor-corrected center coordinate position and the tumor boundary; the cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the tumor detection network is a tumor edge feature classifier established through a neural network algorithm.
Further, the system further comprises:
The image filtering module is used for presetting a Gaussian kernel of size n×n and a standard deviation σ. With (i, j) denoting the offset from the kernel center, the kernel value G(i, j) at kernel position (i, j) is:
G(i, j) = (1 / (2πσ²)) · exp(-(i² + j²) / (2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and the normalized Gaussian kernel is convolved with each pixel of the two-dimensional ultrasonic image. The convolution operation aligns the Gaussian kernel with the pixels of the two-dimensional ultrasonic image in turn, and the pixel value I(u, v) at coordinates (u, v) is denoised and enhanced into the corresponding pixel value I'(u, v):
I'(u, v) = Σ_{i=-k}^{k} Σ_{j=-k}^{k} G(i, j) · I(u+i, v+j);
where k is the radius of the Gaussian kernel and I(u+i, v+j) is the pixel value at the offset position (u+i, v+j) relative to the current coordinates (u, v). The convolution operation is repeated until all pixel values I(u, v) of the two-dimensional ultrasonic image have been denoised and enhanced into pixel values I'(u, v).
Further, the system further comprises:
The coordinate transformation module is used for obtaining a translation vector from the change in scanning position and a rotation vector from the change in scanning direction; the coordinates (u, v) of a pixel of the two-dimensional ultrasound image are mapped to the coordinates (u', v', w') of a voxel in the spatial three-dimensional coordinate system, where (u', v', w') is calculated as:
Wherein T is a transformation matrix, and the transformation matrix T is:
further, the system further comprises:
The interpolation module is used for obtaining the coordinate point P(u', v', w') at a voxel blank in the spatial three-dimensional coordinate system and for obtaining the voxel values adjacent to P(u', v', w') in three mutually perpendicular directions; the interpolated voxel value VP of P(u', v', w') is:
VP(u′,v′,w′)=V000(1-u′)(1-v′)(1-w′)+V001(1-u′)(1-v′)w′+V010(1-u′)v′(1-w′)+V011(1-u′)v′w′+V100u′(1-v′)(1-w′)+V101u′(1-v′)w′+V110u′v′(1-w′)+V111u′v′w′;
where V000 is the voxel value at the corner with the minimum x, y, and z coordinates; V001 at the maximum z and minimum x and y; V010 at the maximum y and minimum x and z; V011 at the maximum y and z and minimum x; V100 at the maximum x and minimum y and z; V101 at the maximum x and z and minimum y; V110 at the maximum x and y and minimum z; and V111 at the maximum x, y, and z.
Further, the system further comprises:
The artifact correction module is configured to obtain the incident point positions L1(u', v', w') and L2(u', v', w') at which the ultrasonic waves enter the cyst along the first path and the second path, and to record the direction vectors of the ultrasonic waves along the first path and the second path. The mass center coordinate positions corresponding to the first path and the second path are denoted P1(u', v', w') and P2(u', v', w'), the corrected mass center coordinate position is denoted P(u', v', w'), and the refraction point positions at which the ultrasonic waves penetrate the cyst along the first path and the second path are denoted R1(u', v', w') and R2(u', v', w'). The system of ultrasonic refraction equations is established as:
Wherein d 1 is the distance from R 1 (u ', v ', w ') to P 1 (u ', v ', w '), d 2 is the distance from R 2 (u ', v ', w ') to P 2 (u ', v ', w '), and the mass correction center coordinate position is calculated by the ultrasonic refraction equation set and is recorded as P (u ', v ', w ').
It should be noted that, regarding the system in the above embodiment, the specific manner in which the respective modules perform the operations has been described in detail in the embodiment regarding the method, and will not be described in detail herein.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of their technical features; any modification, equivalent substitution, or improvement made without departing from the spirit and principle of the present invention shall fall within its protection scope.

Claims (4)

1.基于图像识别的喉占位性病变的识别方法,其特征在于,所述方法包括:1. A method for identifying laryngeal space-occupying lesions based on image recognition, characterized in that the method comprises: 将超声波探头以预设路径在患者喉部移动并采集二维超声波图像,所述预设路径包括线性、旋转或自由移动;记录所述超声波探头在不同扫描点采集二维超声波图像时的扫描位置和扫描方向;Move the ultrasonic probe in the patient's throat along a preset path and collect two-dimensional ultrasonic images, wherein the preset path includes linear, rotational or free movement; record the scanning position and scanning direction of the ultrasonic probe when collecting two-dimensional ultrasonic images at different scanning points; 将患者喉部的二维超声波图像进行滤波增强后,通过扫描位置和扫描方向将不同扫描点对应的二维超声波图像的像素映射到空间三维坐标系中得到患者喉部的三维体素模型;将三维体素模型中的体素空白处进行插值,通过三维体素模型计算每个体素的颜色和透明度,获取观察点位的光线穿过三维体素模型的穿透路径,通过穿透路径得到患者喉部三维图像中每个体素的累积透明度和累积颜色;After filtering and enhancing the two-dimensional ultrasonic image of the patient's larynx, the pixels of the two-dimensional ultrasonic image corresponding to different scanning points are mapped to a three-dimensional spatial coordinate system through the scanning position and scanning direction to obtain a three-dimensional voxel model of the patient's larynx; the blank spaces of the voxels in the three-dimensional voxel model are interpolated, the color and transparency of each voxel are calculated through the three-dimensional voxel model, the penetration path of the light at the observation point passing through the three-dimensional voxel model is obtained, and the cumulative transparency and cumulative color of each voxel in the three-dimensional image of the patient's larynx are obtained through the penetration path; 获取患者喉部三维图像的三维梯度,通过三维梯度检测患者喉部三维图像的边缘,将边缘输入囊肿检测网络得到囊肿形态,将边缘输入肿块检测网络得到肿块形态,囊肿形态包括囊肿中心坐标位置和囊肿边界,肿块形态包括肿块中心坐标位置和肿块边界;当囊肿形态和肿块形态同时存在时,连接囊肿中心坐标位置和肿块中心坐标位置得到第一路径,再次将超声波探头改变超声波频率以预设路径在患者喉部移动并构建患者喉部三维图像和识别囊肿形态和肿块形态,再次连接囊肿中心坐标位置和肿块中心坐标位置得到第二路径,通过第一路径和第二路径得到肿块修正中心坐标位置,通过肿块修正中心坐标位置和肿块边界得到喉占位性病变参数;所述囊肿检测网络是通过神经网络算法建立的囊肿边缘特征分类器,所述肿块检测网络是通过神经网络算法建立的肿块边缘特征分类器;The three-dimensional gradient of the 
three-dimensional image of the patient's larynx is obtained, and the edge of the three-dimensional image of the patient's larynx is detected by the three-dimensional gradient, and the edge is input into the cyst detection network to obtain the cyst morphology, and the edge is input into the mass detection network to obtain the mass morphology, wherein the cyst morphology includes the cyst center coordinate position and the cyst boundary, and the mass morphology includes the mass center coordinate position and the mass boundary; when the cyst morphology and the mass morphology exist at the same time, the cyst center coordinate position and the mass center coordinate position are connected to obtain a first path, and the ultrasonic probe is again changed in the ultrasonic frequency to move in the patient's larynx along a preset path to construct the patient's larynx three-dimensional image and identify the cyst morphology and the mass morphology, and the cyst center coordinate position and the mass center coordinate position are connected again to obtain a second path, and the mass correction center coordinate position is obtained through the first path and the second path, and the laryngeal space-occupying lesion parameters are obtained through the mass correction center coordinate position and the mass boundary; the cyst detection network is a cyst edge feature classifier established by a neural network algorithm, and the mass detection network is a mass edge feature classifier established by a neural network algorithm; 所述通过扫描位置和扫描方向将不同扫描点对应的二维超声波图像的像素映射到空间三维坐标系中得到患者喉部的三维体素模型的方法包括:The method for mapping pixels of a two-dimensional ultrasonic image corresponding to different scanning points to a three-dimensional spatial coordinate system by scanning position and scanning direction to obtain a three-dimensional voxel model of the patient's larynx comprises: 通过扫描位置的变动得到平移向量通过扫描方向的变动得到旋转向量二维超声波图像的像素的坐标(u,v)映射到空间三维坐标系中的体素的坐标(u',v',w'),其中坐标(u',v',w')The translation 
vector is obtained by scanning the change of position The rotation vector is obtained by changing the scanning direction The coordinates (u, v) of the pixels of the two-dimensional ultrasound image are mapped to the coordinates (u', v', w') of the voxels in the three-dimensional spatial coordinate system, where the coordinates (u', v', w') 计算公式为:The calculation formula is: 其中T为变换矩阵,变换矩阵T为:Where T is the transformation matrix, the transformation matrix T is: 所述将三维体素模型中的体素空白处进行插值的方法包括:The method for interpolating the blank voxel space in the three-dimensional voxel model comprises: 获取空间三维坐标系中体素空白处的坐标点P(u',v',w'),获取在空间三维坐标系中三个相互垂直的方向在坐标点P(u',v',w')相邻的体素值,坐标点P(u',v',w')的体素值插值VP为:Get the coordinate point P(u',v',w') of the blank voxel in the three-dimensional coordinate system of space, get the voxel values adjacent to the coordinate point P(u',v',w') in three mutually perpendicular directions in the three-dimensional coordinate system of space, and the voxel value interpolation V P of the coordinate point P(u',v',w') is: VP(u′,v′,w′)=V000(1-u′)(1-v′)(1-w′)+V001(1-u′)(1-v′)w′+V010(1-u′)v′(1-w′)+V011(1-u′)v′w′+V100u′(1-v′)(1-w′)+V101u′(1-v′)w′+V110u′v′(1-w′)+V111u′v′w′;V P (u′,v′,w′)=V 000 (1-u′)(1-v′)(1-w′)+V 001 (1-u′)(1-v′)w′ +V 010 (1-u′)v′(1-w′)+V 011 (1-u′)v′w′+V 100 u′(1-v′)(1-w′)+V 101 u′(1-v′)w′+V 110 u′v′(1-w′)+V 111 u′v′w′; 其中,V000为x、y和z轴方向体素值最小值,V001为z轴方向体素值最大值且为x和y轴方向体素值最小值,V010为y轴方向体素值最大值且为x和z轴方向体素值最小值,V011为z和y轴方向体素值最大值且为x轴方向体素值最小值,V100为x轴方向体素值最大值且为y和z轴方向体素值最小值,V101为x和z轴方向体素值最大值且为y轴方向体素值最小值,V110为x和y轴方向体素值最大值且为z轴方向体素值最小值,V111为x、y和z轴方向体素值最大值;Among them, V 000 is the minimum voxel value in the x, y and z axis directions, V 001 is the maximum voxel value in the z axis direction and the minimum voxel value in the x and y axis directions, V 010 is the maximum voxel value in the y axis direction and the minimum voxel value in the x and z axis directions, V 011 is the maximum voxel value in the z and y axis directions and the minimum voxel value in the x axis 
direction, V 100 is the maximum voxel value in the x axis direction and the minimum voxel value in the y and z axis directions, V 101 is the maximum voxel value in the x and z axis directions and the minimum voxel value in the y axis direction, V 110 is the maximum voxel value in the x and y axis directions and the minimum voxel value in the z axis direction, and V 111 is the maximum voxel value in the x, y and z axis directions; 所述通过第一路径和第二路径得到肿块修正中心坐标位置的方法包括:The method for obtaining the corrected center coordinate position of the mass through the first path and the second path includes: 获取第一路径和第二路径分别对应的超声波进入囊肿的入射点位置L1(u',v',w')和L2(u',v',w'),将超声波沿着第一路径和第二路径对应的方向向量分别记为将第一路径和第二路径对应的肿块中心坐标位置分别记为P1(u',v',w')和P2(u',v',w'),肿块修正中心坐标位置记为P(u',v',w'),将第一路径和第二路径分别对应的超声波穿透囊肿的折射点位置记为R1(u',v',w')和R2(u',v',w');建立超声波折射方程组为:Obtain the incident point positions L 1 (u', v', w') and L 2 (u', v', w') of the ultrasound entering the cyst corresponding to the first path and the second path, respectively, and record the direction vectors corresponding to the ultrasound along the first path and the second path as and The center coordinates of the mass corresponding to the first path and the second path are recorded as P 1 (u', v', w') and P 2 (u', v', w'), the corrected center coordinates of the mass are recorded as P(u', v', w'), and the refraction points of the ultrasound penetrating the cyst corresponding to the first path and the second path are recorded as R 1 (u', v', w') and R 2 (u', v', w'); the ultrasonic refraction equations are established as follows: 其中,d1是R1(u',v',w')到P1(u',v',w')的距离,d2是R2(u',v',w')到P2(u',v',w')的距离;t1是L1(u',v',w')到P1(u',v',w')的距离,t2是L2(u',v',w')到P1(u',v',w')的距离;通过超声波折射方程组求得肿块修正中心坐标位置记为P(u',v',w')。Wherein, d 1 is the distance from R 1 (u',v',w') to P 1 (u',v',w'), d 2 is the distance from R 2 (u',v',w') to P 2 (u',v',w'); t 1 is the distance from L 1 (u',v',w') to P 1 (u',v',w'), t 2 is the distance from L 2 (u',v',w') to P 1 (u',v',w'); the corrected center 
coordinate position of the mass is obtained by using the ultrasonic refraction equations and is recorded as P(u',v',w'). 2.根据权利要求1所述的基于图像识别的喉占位性病变的识别方法,其特征在于,所述将患者喉部的二维超声波图像进行滤波增强的方法包括:2. The method for identifying laryngeal space-occupying lesions based on image recognition according to claim 1, wherein the method for filtering and enhancing the two-dimensional ultrasonic image of the patient's larynx comprises: 预设n×n大小的高斯核和标准差σ,在高斯核的核位置(i,j)处的核值G(i,j)为:Preset the Gaussian kernel of size n×n and the standard deviation σ, and the kernel value G(i,j) at the kernel position (i,j) of the Gaussian kernel is: 将高斯核进行归一化处理使高斯核的核值总和等于1,将归一化的高斯核和二维超声波图像每个像素进行卷积操作;所述卷积操作包括,将高斯核依次对齐二维超声波图像的像素,将坐标(u,v)的像素值I(u,v)去噪增强后得到对应的像素值I'(u,v)为:The Gaussian kernel is normalized so that the sum of the kernel values of the Gaussian kernel is equal to 1, and the normalized Gaussian kernel is convolved with each pixel of the two-dimensional ultrasonic image; the convolution operation includes aligning the Gaussian kernel with the pixels of the two-dimensional ultrasonic image in sequence, and denoising and enhancing the pixel value I(u,v) of the coordinate (u,v) to obtain the corresponding pixel value I'(u,v): 其中,k是高斯核的半径,I(u+i,v+j)是相对于当前坐标(u,v)的偏移位置(u+i,v+j)的像素值;重复卷积操作直到二维超声波图像的所有像素值I(u,v)去噪增强后得到像素值I′(u,v)。Where k is the radius of the Gaussian kernel, and I(u+i,v+j) is the pixel value at the offset position (u+i,v+j) relative to the current coordinate (u,v); the convolution operation is repeated until all pixel values I(u,v) of the two-dimensional ultrasonic image are denoised and enhanced to obtain the pixel value I′(u,v). 3.基于图像识别的喉占位性病变的识别系统,其特征在于,所述系统包括:3. 
A system for identifying laryngeal space-occupying lesions based on image recognition, characterized in that the system comprises: 图像采集模块,用于将超声波探头以预设路径在患者喉部移动并采集二维超声波图像,所述预设路径包括线性、旋转或自由移动;记录所述超声波探头在不同扫描点采集二维超声波图像时的扫描位置和扫描方向;An image acquisition module is used to move the ultrasonic probe in the patient's throat along a preset path and acquire a two-dimensional ultrasonic image, wherein the preset path includes linear, rotational or free movement; and record the scanning position and scanning direction of the ultrasonic probe when acquiring a two-dimensional ultrasonic image at different scanning points; 三维重构模块,用于将患者喉部的二维超声波图像进行滤波增强后,通过扫描位置和扫描方向将不同扫描点对应的二维超声波图像的像素映射到空间三维坐标系中得到患者喉部的三维体素模型;将三维体素模型中的体素空白处进行插值,通过三维体素模型计算每个体素的颜The three-dimensional reconstruction module is used to filter and enhance the two-dimensional ultrasonic image of the patient's larynx, and then map the pixels of the two-dimensional ultrasonic image corresponding to different scanning points to the three-dimensional coordinate system of the space to obtain a three-dimensional voxel model of the patient's larynx through the scanning position and scanning direction; interpolate the blank voxels in the three-dimensional voxel model, and calculate the color of each voxel through the three-dimensional voxel model. 
色和透明度,获取观察点位的光线穿过三维体素模型的穿透路径,通过穿透路径得到患者喉部三维图像中每个体素的累积透明度和累积颜色;Color and transparency, obtain the penetration path of the light at the observation point through the three-dimensional voxel model, and obtain the cumulative transparency and cumulative color of each voxel in the three-dimensional image of the patient's larynx through the penetration path; 喉占位性病变识别模块,用于获取患者喉部三维图像的三维梯度,通过三维梯度检测患者喉部三维图像的边缘,将边缘输入囊肿检测网络得到囊肿形态,将边缘输入肿块检测网络得到肿块形态,囊肿形态包括囊肿中心坐标位置和囊肿边界,肿块形态包括肿块中心坐标位置和肿块边界;当囊肿形态和肿块形态同时存在时,连接囊肿中心坐标位置和肿块中心坐标位置得到第一路径,再次将超声波探头改变超声波频率以预设路径在患者喉部移动并构建患者喉部三维图像和识别囊肿形态和肿块形态,再次连接囊肿中心坐标位置和肿块中心坐标位置得到第二路径,通过第一路径和第二路径得到肿块修正中心坐标位置,通过肿块修正中心坐标位置和肿块边界得到喉占位性病变参数;所述囊肿检测网络是通过神经网络算法建立的囊肿边缘特征分类器,所述肿块检测网络是通过神经网络算法建立的肿块边缘特征分类器。The laryngeal space-occupying lesion recognition module is used to obtain the three-dimensional gradient of the three-dimensional image of the patient's larynx, detect the edge of the three-dimensional image of the patient's larynx through the three-dimensional gradient, input the edge into the cyst detection network to obtain the cyst morphology, and input the edge into the mass detection network to obtain the mass morphology. The cyst morphology includes the cyst center coordinate position and the cyst boundary, and the mass morphology includes the mass center coordinate position and the mass boundary. When the cyst morphology and the mass morphology exist at the same time, the cyst center coordinate position and the mass center coordinate position are connected to obtain a first path, and the ultrasonic probe is changed again to move in the patient's larynx along a preset path to construct the patient's larynx three-dimensional image and identify the cyst morphology and the mass morphology. 
The cyst center coordinate position and the mass center coordinate position are connected again to obtain a second path, and the mass correction center coordinate position is obtained through the first path and the second path, and the laryngeal space-occupying lesion parameters are obtained through the mass correction center coordinate position and the mass boundary. The cyst detection network is a cyst edge feature classifier established by a neural network algorithm, and the mass detection network is a mass edge feature classifier established by a neural network algorithm. 所述系统还包括坐标变换模块,用于通过扫描位置的变动得到平移向量通过扫描方向的变动得到旋转向量二维超声波图像的像素的坐标(u,v)映射到空间三维坐标系中的体素的坐标(u',v',w'),其中坐标(u',v',w')计算公式为:The system also includes a coordinate transformation module for obtaining a translation vector by changing the scanning position. The rotation vector is obtained by changing the scanning direction The coordinates (u, v) of the pixels of the two-dimensional ultrasound image are mapped to the coordinates (u', v', w') of the voxels in the three-dimensional spatial coordinate system, where the coordinates (u', v', w') are calculated as follows: 其中T为变换矩阵,变换矩阵T为:Where T is the transformation matrix, the transformation matrix T is: 所述系统还包括插值模块,用于获取空间三维坐标系中体素空白处的坐标点P(u',v',w'),获取在空间三维坐标系中三个相互垂直方向在坐标点P(u',v',w')相邻的体素值,坐标点P(u',v',w')的体素值插值VP为:The system further includes an interpolation module for obtaining a coordinate point P (u', v', w') at a blank voxel position in a three-dimensional spatial coordinate system, and obtaining voxel values adjacent to the coordinate point P (u', v', w') in three mutually perpendicular directions in the three-dimensional spatial coordinate system. 
The voxel value interpolation V P of the coordinate point P(u',v',w') is:

V P (u′,v′,w′) = V 000 (1-u′)(1-v′)(1-w′) + V 001 (1-u′)(1-v′)w′ + V 010 (1-u′)v′(1-w′) + V 011 (1-u′)v′w′ + V 100 u′(1-v′)(1-w′) + V 101 u′(1-v′)w′ + V 110 u′v′(1-w′) + V 111 u′v′w′;

where V 000 is the voxel value at the corner with the minimum x, y and z coordinates; V 001 is the voxel value at the corner with the maximum z coordinate and the minimum x and y coordinates; V 010 is the voxel value at the corner with the maximum y coordinate and the minimum x and z coordinates; V 011 is the voxel value at the corner with the maximum y and z coordinates and the minimum x coordinate; V 100 is the voxel value at the corner with the maximum x coordinate and the minimum y and z coordinates; V 101 is the voxel value at the corner with the maximum x and z coordinates and the minimum y coordinate; V 110 is the voxel value at the corner with the maximum x and y coordinates and the minimum z coordinate; and V 111 is the voxel value at the corner with the maximum x, y and z coordinates.

The system further comprises an artifact correction module, configured to obtain the incident point positions L 1 (u',v',w') and L 2 (u',v',w') at which the ultrasound enters the cyst along the first path and the second path, respectively; to record the direction vectors of the ultrasound along the first path and the second path [vector symbols given as images in the original]; to record the mass center coordinate positions corresponding to the first path and the second path as P 1 (u',v',w') and P 2 (u',v',w'), and the corrected mass center coordinate position as P(u',v',w'); and to record the refraction point positions at which the ultrasound penetrates the cyst along the first path and the second path as R 1 (u',v',w') and R 2 (u',v',w'). A system of ultrasonic refraction equations is established [equation system given as an image in the original], in which d 1 is the distance from R 1 (u',v',w') to P 1 (u',v',w') and d 2 is the distance from R 2 (u',v',w') to P 2 (u',v',w'); t 1 is the distance from L 1 (u',v',w') to P 1 (u',v',w') and t 2 is the distance from L 2 (u',v',w') to P 2 (u',v',w'). Solving the system of ultrasonic refraction equations yields the corrected mass center coordinate position, recorded as P(u',v',w').

4. The laryngeal space-occupying lesion recognition system based on image recognition according to claim 3, characterized in that the system further comprises:

an image filtering module, configured to preset a Gaussian kernel of size n×n and a standard deviation σ, the kernel value G(i,j) at kernel position (i,j) of the Gaussian kernel being:

G(i,j) = (1/(2πσ²))·exp(−(i² + j²)/(2σ²));

the Gaussian kernel is normalized so that the sum of its kernel values equals 1, and the normalized Gaussian kernel is convolved with each pixel of the two-dimensional ultrasonic image; the convolution operation comprises aligning the Gaussian kernel in turn with the pixels of the two-dimensional ultrasonic image, and denoising and enhancing the pixel value I(u,v) at coordinate (u,v) to obtain the corresponding pixel value I'(u,v):

I'(u,v) = Σ_{i=−k..k} Σ_{j=−k..k} G(i,j)·I(u+i, v+j);

where k is the radius of the Gaussian kernel and I(u+i,v+j) is the pixel value at the offset position (u+i,v+j) relative to the current coordinate (u,v); the convolution operation is repeated until all pixel values I(u,v) of the two-dimensional ultrasonic image have been denoised and enhanced into pixel values I'(u,v).
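The Gaussian filtering step of claim 4 can be sketched as follows. This is an illustrative implementation only, not part of the claimed system; the function names are chosen for the example, and the edge-replication padding is an assumption, since the claim does not specify how image borders are handled.

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Build an n x n Gaussian kernel (n odd), normalized so its values sum to 1."""
    k = n // 2  # kernel radius, as in the claim
    i, j = np.mgrid[-k:k + 1, -k:k + 1]
    g = np.exp(-(i**2 + j**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalization step: kernel values sum to 1

def gaussian_filter(image, n=5, sigma=1.0):
    """Denoise a 2-D image by convolving every pixel with the normalized kernel."""
    g = gaussian_kernel(n, sigma)
    k = n // 2
    # Border treatment is an assumption: replicate edge pixels so the
    # window I(u+i, v+j) is defined for every coordinate (u, v).
    padded = np.pad(image, k, mode="edge")
    out = np.empty(image.shape, dtype=float)
    rows, cols = image.shape
    for u in range(rows):
        for v in range(cols):
            # I'(u,v) = sum over i,j in [-k, k] of G(i,j) * I(u+i, v+j)
            out[u, v] = np.sum(g * padded[u:u + n, v:v + n])
    return out
```

Because the kernel is normalized, filtering a constant image returns the same constant; on a noisy image the weighted average suppresses speckle while the Gaussian weighting preserves more of the local structure than a flat mean filter.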
CN202410522926.0A 2024-04-28 2024-04-28 Method and system for identifying throat space occupying lesions based on image identification Active CN118383798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410522926.0A CN118383798B (en) 2024-04-28 2024-04-28 Method and system for identifying throat space occupying lesions based on image identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410522926.0A CN118383798B (en) 2024-04-28 2024-04-28 Method and system for identifying throat space occupying lesions based on image identification

Publications (2)

Publication Number Publication Date
CN118383798A CN118383798A (en) 2024-07-26
CN118383798B true CN118383798B (en) 2024-12-13

Family

ID=91998882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410522926.0A Active CN118383798B (en) 2024-04-28 2024-04-28 Method and system for identifying throat space occupying lesions based on image identification

Country Status (1)

Country Link
CN (1) CN118383798B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102844761A (en) * 2010-04-19 2012-12-26 皇家飞利浦电子股份有限公司 Report viewer using radiological descriptors
CN107492097A (en) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 A kind of method and device for identifying MRI image area-of-interest

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080208061A1 (en) * 2007-02-23 2008-08-28 General Electric Company Methods and systems for spatial compounding in a handheld ultrasound device
US20120095341A1 (en) * 2010-10-19 2012-04-19 Toshiba Medical Systems Corporation Ultrasonic image processing apparatus and ultrasonic image processing method
US9336592B2 (en) * 2012-02-03 2016-05-10 The Trustees Of Dartmouth College Method and apparatus for determining tumor shift during surgery using a stereo-optical three-dimensional surface-mapping system
US20160242733A1 (en) * 2015-02-20 2016-08-25 QT Ultrasound LLC Tissue lesion detection and determination using quantitative transmission ultrasound
CN108324324A (en) * 2018-03-12 2018-07-27 西安交通大学 It is a kind of ultrasound low frequency through cranial capacity super-resolution three-dimensional contrast imaging method and system
CN109741441A (en) * 2018-12-19 2019-05-10 中惠医疗科技(上海)有限公司 Fibroid method for reconstructing three-dimensional model and system
CN111178449B (en) * 2019-12-31 2021-11-05 浙江大学 A liver cancer image classification method combining computer vision features and radiomics features

Also Published As

Publication number Publication date
CN118383798A (en) 2024-07-26

Similar Documents

Publication Publication Date Title
US10561403B2 (en) Sensor coordinate calibration in an ultrasound system
CN111095349B (en) reduce noise in images
Rohling et al. Automatic registration of 3-D ultrasound images
US8630492B2 (en) System and method for identifying a vascular border
Loizou et al. Despeckle filtering for ultrasound imaging and video, volume I: Algorithms and software
JP4899837B2 (en) Ultrasound imaging system and method
US8246543B2 (en) Imaging method utilizing attenuation and speed parameters in inverse scattering techniques
KR101932721B1 (en) Method and Appartus of maching medical images
CN109767400B (en) Ultrasonic image speckle noise removing method for guiding trilateral filtering
CN109961411B (en) Non-subsampled shearlet transform medical CT image denoising method
Adam et al. Semiautomated Border Tracking of Cine Echocardiographic Ventricular Images
EP1030191A2 (en) Semi-automated segmentation method for 3-dimensional ultrasound
JP4481824B2 (en) System and method for identifying vascular borders
WO2021212693A1 (en) Gabor wavelet-fused multi-scale local level set ultrasonic image segmentation method
CN111354006A (en) Method and device for tracing target tissue in ultrasonic image
CN117391955A (en) Convex set projection super-resolution reconstruction method based on multi-frame optical coherence tomography images
CN111507979A (en) Computer-aided analysis method for medical image
Huang et al. A new adaptive interpolation algorithm for 3D ultrasound imaging with speckle reduction and edge preservation
CN103761767A (en) Quick three-dimensional ultrasound image reconstruction method based on sparse data
WO2010066007A1 (en) Medical diagnostic method and apparatus
CN116228794A (en) Location and segmentation method of tooth lesion area based on PS-OCT
CN107169978B (en) Ultrasound image edge detection method and system
CN118383798B (en) Method and system for identifying throat space occupying lesions based on image identification
CN118710687A (en) Ultrasonic image adaptive spatial compounding method and system
Cheng et al. End-to-end algorithm research in PACT—from signal processing to reconstruction solution to image processing: A review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant