Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a method and a system for identifying throat space-occupying lesions based on image identification, which reconstruct a three-dimensional image from two-dimensional ultrasonic images, adjust the ultrasonic frequency to form a new artifact when a tumor artifact exists in the throat, and obtain the actual coordinate parameters of the tumor through repeated measurement and solving, so as to solve the problem that throat space-occupying lesions produce refraction artifacts during ultrasonic image reconstruction.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
The method for identifying a throat space-occupying lesion based on image identification comprises the following steps:
(1) Moving an ultrasonic probe along a preset path in the throat of a patient and acquiring two-dimensional ultrasonic images, wherein the preset path includes but is not limited to linear, rotational or free movement;
(2) Filtering and enhancing the two-dimensional ultrasonic images of the patient's throat, mapping the pixels of the two-dimensional ultrasonic images corresponding to different scanning points into a spatial three-dimensional coordinate system through the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat, interpolating the voxel blanks in the three-dimensional voxel model, calculating the color and transparency of each voxel from the three-dimensional voxel model, obtaining the path along which a ray from the observation point passes through the three-dimensional voxel model, and accumulating the transparency and color of the voxels along that path to obtain the three-dimensional image of the patient's throat (a minimal compositing sketch is given after this list);
(3) Obtaining the three-dimensional gradients of the three-dimensional image of the patient's throat, detecting the edges of the three-dimensional image through the three-dimensional gradients, inputting the edges into a cyst detection network to obtain a cyst form and into a tumor detection network to obtain a tumor form, wherein the cyst form comprises a cyst center coordinate position and a cyst boundary, and the tumor form comprises a tumor center coordinate position and a tumor boundary; when the cyst form and the tumor form exist simultaneously, connecting the cyst center coordinate position and the tumor center coordinate position to obtain a first path, changing the ultrasonic frequency of the ultrasonic probe and moving it along the preset path in the patient's throat to reconstruct the three-dimensional image of the throat and identify the cyst form and the tumor form again, connecting the new cyst center coordinate position and tumor center coordinate position to obtain a second path, obtaining a corrected tumor center coordinate position through the first path and the second path, and obtaining the throat space-occupying lesion parameters through the corrected tumor center coordinate position and the tumor boundary; the cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the tumor detection network is a tumor edge feature classifier established through a neural network algorithm.
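The accumulation in step (2) corresponds to standard front-to-back volume compositing. The following is a minimal sketch, assuming an opacity (alpha) and an RGB color have already been assigned to each sample along the viewing ray by some transfer function; the function name, input layout and early-termination threshold are illustrative choices, not taken from the source.

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back compositing of the voxel samples along one viewing ray.

    colors : (N, 3) array of per-sample RGB values, ordered from the
             observation point into the volume.
    alphas : (N,) array of per-sample opacities in [0, 1].
    Returns the accumulated color and the accumulated transparency
    (the fraction of light that passes through all N samples).
    """
    accumulated_color = np.zeros(3)
    transparency = 1.0                      # nothing occluded yet
    for color, alpha in zip(np.asarray(colors, float), np.asarray(alphas, float)):
        accumulated_color += transparency * alpha * color
        transparency *= (1.0 - alpha)
        if transparency < 1e-4:             # ray is effectively opaque; stop early
            break
    return accumulated_color, transparency
```

Running this for every viewing ray of the observation point yields one rendered value per ray, which together form the three-dimensional image of the patient's throat referred to in step (2).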
Further, the method for filtering and enhancing the two-dimensional ultrasonic image of the throat of the patient comprises the following steps:
A Gaussian kernel of size n×n and standard deviation σ is preset, and the kernel value G(i, j) at kernel position (i, j) of the Gaussian kernel is:
G(i, j) = (1/(2πσ²))·exp(−(i² + j²)/(2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and a convolution operation is carried out between the normalized Gaussian kernel and each pixel of the two-dimensional ultrasonic image. The convolution operation comprises aligning the Gaussian kernel with each pixel of the two-dimensional ultrasonic image in turn, and denoising and enhancing the pixel value I(u, v) at coordinates (u, v) to obtain the corresponding pixel value I′(u, v) as follows:
I′(u, v) = Σi Σj G(i, j)·I(u+i, v+j), with i and j each running from −k to k;
where k is the radius of the Gaussian kernel and I(u+i, v+j) is the pixel value at the position (u+i, v+j) offset from the current coordinates (u, v); the convolution operation is repeated until all pixel values I(u, v) of the two-dimensional ultrasonic image have been denoised and enhanced into pixel values I′(u, v).
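A minimal NumPy sketch of this filtering step, assuming reflective border padding and the function names used here (neither is specified in the source): the kernel G(i, j) is built, normalized to sum to 1, and convolved with every pixel I(u, v) to produce I′(u, v).

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """n x n Gaussian kernel G(i, j), normalized so that its values sum to 1."""
    k = n // 2                                    # kernel radius
    i, j = np.mgrid[-k:k + 1, -k:k + 1]
    g = np.exp(-(i ** 2 + j ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()                            # normalization step

def gaussian_smooth(image, n, sigma):
    """Convolve every pixel I(u, v) with the normalized kernel to obtain I'(u, v)."""
    kernel = gaussian_kernel(n, sigma)
    k = n // 2
    image = np.asarray(image, dtype=float)
    padded = np.pad(image, k, mode="reflect")
    out = np.empty_like(image)
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            out[u, v] = np.sum(kernel * padded[u:u + n, v:v + n])
    return out
```

In practice the double loop would be replaced by a library convolution, but the loop mirrors the per-pixel description above.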
Further, the method for mapping the pixels of the two-dimensional ultrasonic images corresponding to different scanning points into a spatial three-dimensional coordinate system through the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat comprises the following steps:
A translation vector is obtained from the change of the scanning position, and a rotation vector is obtained from the change of the scanning direction; the coordinates (u, v) of a pixel of the two-dimensional ultrasonic image are mapped to the coordinates (u′, v′, w′) of a voxel in the spatial three-dimensional coordinate system by applying a transformation matrix T to (u, v), wherein the transformation matrix T is composed of the rotation described by the rotation vector and the translation described by the translation vector.
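The transformation matrix itself is not reproduced above, so the sketch below assumes the usual homogeneous form: a 3×3 rotation matrix R derived from the recorded scan direction, a translation vector t from the scan position, and a pixel-spacing scale; the spacing values, the choice of the image plane as w = 0, and the function name are assumptions of this sketch.

```python
import numpy as np

def pixel_to_voxel(u, v, rotation, translation, spacing=(1.0, 1.0)):
    """Map pixel (u, v) of one scan into the spatial three-dimensional coordinate system.

    rotation    : 3x3 rotation matrix built from the probe's scanning direction.
    translation : length-3 translation vector from the probe's scanning position.
    spacing     : physical size of one pixel along u and v (assumed units).
    Returns the voxel coordinates (u', v', w').
    """
    # The scan plane is taken as w = 0 in the probe frame (assumption).
    p_image = np.array([u * spacing[0], v * spacing[1], 0.0, 1.0])
    T = np.eye(4)                       # homogeneous transformation matrix T
    T[:3, :3] = np.asarray(rotation, dtype=float)
    T[:3, 3] = np.asarray(translation, dtype=float)
    return (T @ p_image)[:3]
```

Each acquired frame contributes its pixels to the voxel grid through its own T, which is how the two-dimensional scans are fused into the three-dimensional voxel model.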
Further, the method for interpolating voxel blanks in the three-dimensional voxel model comprises the following steps:
A coordinate point P(u′, v′, w′) at a voxel blank position in the spatial three-dimensional coordinate system is obtained, and the voxel values adjacent to P(u′, v′, w′) in three mutually perpendicular directions of the spatial three-dimensional coordinate system are obtained; the interpolated voxel value VP at the coordinate point P(u′, v′, w′) is:
VP(u′,v′,w′)=V000(1-u′)(1-v′)(1-w′)+V001(1-u′)(1-v′)w′+V010(1-u′)v′(1-w′)+V011(1-u′)v′w′+V100u′(1-v′)(1-w′)+V101u′(1-v′)w′+V110u′v′(1-w′)+V111u′v′w′;
where V000 is the adjacent voxel value in the minimum x, y and z directions, V001 in the maximum z and minimum x and y directions, V010 in the maximum y and minimum x and z directions, V011 in the maximum y and z and minimum x directions, V100 in the maximum x and minimum y and z directions, V101 in the maximum x and z and minimum y directions, V110 in the maximum x and y and minimum z directions, and V111 in the maximum x, y and z directions.
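A direct transcription of the interpolation formula above into code, under the assumption that (u′, v′, w′) have already been reduced to the fractional position of P inside its surrounding voxel cell, i.e. each lies in [0, 1]:

```python
def trilinear(u, v, w, corners):
    """Interpolated voxel value VP at P from its eight neighbours.

    u, v, w : fractional coordinates of P inside the cell, each in [0, 1].
    corners : dict keyed by '000'..'111' holding V000..V111, where the three
              digits mark the minimum (0) or maximum (1) neighbour along x, y, z.
    """
    return (corners['000'] * (1 - u) * (1 - v) * (1 - w)
            + corners['001'] * (1 - u) * (1 - v) * w
            + corners['010'] * (1 - u) * v * (1 - w)
            + corners['011'] * (1 - u) * v * w
            + corners['100'] * u * (1 - v) * (1 - w)
            + corners['101'] * u * (1 - v) * w
            + corners['110'] * u * v * (1 - w)
            + corners['111'] * u * v * w)
```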
Further, the method for obtaining the corrected tumor center coordinate position through the first path and the second path comprises the following steps:
The incident point positions L1(u′, v′, w′) and L2(u′, v′, w′) of the ultrasonic waves corresponding to the first path and the second path are acquired, and the direction vectors of the ultrasonic waves corresponding to the first path and the second path are recorded; the tumor center coordinate positions corresponding to the first path and the second path are recorded as P1(u′, v′, w′) and P2(u′, v′, w′) respectively, the corrected tumor center coordinate position is recorded as P(u′, v′, w′), the refraction point positions where the ultrasonic waves penetrate the cyst on the first path and the second path are recorded as R1(u′, v′, w′) and R2(u′, v′, w′), and an ultrasonic refraction equation set is established over these quantities;
where d1 is the distance from R1(u′, v′, w′) to P1(u′, v′, w′) and d2 is the distance from R2(u′, v′, w′) to P2(u′, v′, w′); the corrected tumor center coordinate position P(u′, v′, w′) is calculated from the ultrasonic refraction equation set.
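The refraction equation set is not reproduced above, so the sketch below illustrates only the geometric part of the solution that the text fully determines: given the refraction points R1, R2 and the refracted ray directions inside the cyst on the two paths, the corrected centre P is taken as the point closest, in the least-squares sense, to both refracted rays, and d1, d2 follow as its distances to R1 and R2. Treating the problem as a two-ray intersection is an assumption of this sketch, not the patent's exact equation set.

```python
import numpy as np

def corrected_center(r1, dir1, r2, dir2):
    """Least-squares 'intersection' of the two refracted rays.

    r1, r2     : refraction points R1, R2 on the cyst wall.
    dir1, dir2 : direction vectors of the refracted rays inside the cyst.
    Returns the corrected centre P and the distances d1 = |P - R1|, d2 = |P - R2|.
    """
    r1 = np.asarray(r1, float)
    r2 = np.asarray(r2, float)

    def projector(d):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        return np.eye(3) - np.outer(d, d)    # projects onto the plane normal to d

    A1, A2 = projector(dir1), projector(dir2)
    # P minimizes |A1 (P - R1)|^2 + |A2 (P - R2)|^2, the squared distances to both rays.
    p = np.linalg.solve(A1 + A2, A1 @ r1 + A2 @ r2)
    return p, np.linalg.norm(p - r1), np.linalg.norm(p - r2)
```

The solve fails only when the two rays are parallel, in which case a further measurement at a different frequency or probe position would be needed anyway.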
The invention also provides an identification system for throat space-occupying lesions based on image identification, which comprises:
The image acquisition module is used for moving an ultrasonic probe along a preset path in the throat of a patient and acquiring two-dimensional ultrasonic images, wherein the preset path includes but is not limited to linear, rotational or free movement;
The three-dimensional reconstruction module is used for filtering and enhancing the two-dimensional ultrasonic images of the patient's throat, and then mapping the pixels of the two-dimensional ultrasonic images corresponding to different scanning points into a spatial three-dimensional coordinate system through the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat;
The throat space-occupying lesion recognition module is used for obtaining the three-dimensional gradients of the three-dimensional image of the patient's throat, detecting the edges of the three-dimensional image through the three-dimensional gradients, inputting the edges into a cyst detection network to obtain a cyst form and into a tumor detection network to obtain a tumor form, wherein the cyst form comprises a cyst center coordinate position and a cyst boundary, and the tumor form comprises a tumor center coordinate position and a tumor boundary; when the cyst form and the tumor form exist simultaneously, the cyst center coordinate position and the tumor center coordinate position are connected to obtain a first path, the ultrasonic probe changes the ultrasonic frequency and moves along the preset path in the patient's throat to reconstruct the three-dimensional image of the throat and identify the cyst form and the tumor form again, the new cyst center coordinate position and tumor center coordinate position are connected to obtain a second path, a corrected tumor center coordinate position is obtained through the first path and the second path, and the throat space-occupying lesion parameters are obtained through the corrected tumor center coordinate position and the tumor boundary; the cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the tumor detection network is a tumor edge feature classifier established through a neural network algorithm.
Further, the system further comprises:
The image filtering module is used for presetting a Gaussian kernel of size n×n and standard deviation σ, and the kernel value G(i, j) at kernel position (i, j) of the Gaussian kernel is:
G(i, j) = (1/(2πσ²))·exp(−(i² + j²)/(2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and a convolution operation is carried out between the normalized Gaussian kernel and each pixel of the two-dimensional ultrasonic image. The convolution operation comprises aligning the Gaussian kernel with each pixel of the two-dimensional ultrasonic image in turn, and denoising and enhancing the pixel value I(u, v) at coordinates (u, v) to obtain the corresponding pixel value I′(u, v) as follows:
I′(u, v) = Σi Σj G(i, j)·I(u+i, v+j), with i and j each running from −k to k;
where k is the radius of the Gaussian kernel and I(u+i, v+j) is the pixel value at the position (u+i, v+j) offset from the current coordinates (u, v); the convolution operation is repeated until all pixel values I(u, v) of the two-dimensional ultrasonic image have been denoised and enhanced into pixel values I′(u, v).
Further, the system further comprises:
The coordinate transformation module is used for obtaining a translation vector from the change of the scanning position and a rotation vector from the change of the scanning direction; the coordinates (u, v) of a pixel of the two-dimensional ultrasonic image are mapped to the coordinates (u′, v′, w′) of a voxel in the spatial three-dimensional coordinate system by applying a transformation matrix T to (u, v), wherein the transformation matrix T is composed of the rotation described by the rotation vector and the translation described by the translation vector.
Further, the system further comprises:
The interpolation module is used for obtaining a coordinate point P(u′, v′, w′) at a voxel blank position in the spatial three-dimensional coordinate system and obtaining the voxel values adjacent to P(u′, v′, w′) in three mutually perpendicular directions of the spatial three-dimensional coordinate system; the interpolated voxel value VP at the coordinate point P(u′, v′, w′) is:
VP(u′,v′,w′)=V000(1-u′)(1-v′)(1-w′)+V001(1-u′)(1-v′)w′+V010(1-u′)v′(1-w′)+V011(1-u′)v′w′+V100u′(1-v′)(1-w′)+V101u′(1-v′)w′+V110u′v′(1-w′)+V111u′v′w′;
where V000 is the adjacent voxel value in the minimum x, y and z directions, V001 in the maximum z and minimum x and y directions, V010 in the maximum y and minimum x and z directions, V011 in the maximum y and z and minimum x directions, V100 in the maximum x and minimum y and z directions, V101 in the maximum x and z and minimum y directions, V110 in the maximum x and y and minimum z directions, and V111 in the maximum x, y and z directions.
Further, the system further comprises:
The artifact correction module is configured to acquire the incident point positions L1(u′, v′, w′) and L2(u′, v′, w′) of the ultrasonic waves corresponding to the first path and the second path and to record the direction vectors of the ultrasonic waves corresponding to the first path and the second path; the tumor center coordinate positions corresponding to the first path and the second path are recorded as P1(u′, v′, w′) and P2(u′, v′, w′) respectively, the corrected tumor center coordinate position is recorded as P(u′, v′, w′), the refraction point positions where the ultrasonic waves penetrate the cyst on the first path and the second path are recorded as R1(u′, v′, w′) and R2(u′, v′, w′), and an ultrasonic refraction equation set is established over these quantities;
where d1 is the distance from R1(u′, v′, w′) to P1(u′, v′, w′) and d2 is the distance from R2(u′, v′, w′) to P2(u′, v′, w′); the corrected tumor center coordinate position P(u′, v′, w′) is calculated from the ultrasonic refraction equation set.
Compared with the prior art, the invention has the beneficial effects that:
(1) The two-dimensional ultrasonic image is subjected to filtering enhancement and then interpolation processing, so that the data are smoother and more accurate.
(2) The convolution operation carried out on each pixel of the two-dimensional ultrasonic image suppresses aliasing and noise in the image, so that the processed image is of better quality than an image reconstructed from the original signal.
(3) According to the invention, the three-dimensional image is reconstructed from the two-dimensional ultrasonic images; when a tumor artifact exists in the throat, the ultrasonic frequency is adjusted to form a new artifact, and the actual coordinate parameters of the tumor are obtained through repeated measurement and solving.
In summary, the three-dimensional image is reconstructed from the two-dimensional ultrasonic images; when a tumor artifact exists in the throat, the ultrasonic frequency is adjusted to form a new artifact, and the actual coordinate parameters of the tumor are obtained through repeated measurement and solving, thereby solving the problem that throat space-occupying lesions produce refraction artifacts during ultrasonic image reconstruction.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
Before the examples, the application scenario of the present invention needs to be described. The present invention addresses the problem of refraction artifacts in ultrasonic medical imaging. For example, a cyst forms in the throat of a patient with a throat space-occupying lesion, the core of the cyst is effusion, and refraction occurs when ultrasonic waves penetrate the cyst. As shown in fig. 2, part of the ultrasonic wave continues to advance after penetrating the cyst fluid until it strikes a hard-cored tumor and is reflected, and the reflected wave travels back along its original path through the cyst fluid to the ultrasonic receiving device. After the reflected ultrasonic wave is collected, a laryngeal image is drawn, but the tumor appears in the laryngeal image at the first position 1, while the actual position of the tumor is the second position 2.
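To make the refraction mechanism concrete, the vector form of Snell's law gives the transmitted direction at the cyst wall from the incident direction, the wall normal, and the ratio of sound speeds in the two media. The speed ratio, the variable names and the optics-style notation are illustrative assumptions; the source does not specify numeric values.

```python
import numpy as np

def refract(incident, normal, c_ratio):
    """Refracted unit direction at the cyst wall (vector form of Snell's law).

    incident : unit direction of the incoming ultrasound ray.
    normal   : unit normal of the cyst wall, pointing back toward the source.
    c_ratio  : sin(theta_t) / sin(theta_i), equal to c_cyst_fluid / c_tissue for a
               ray entering the cyst (an assumed, illustrative parameter).
    Returns None if total internal reflection occurs.
    """
    d = np.asarray(incident, float)
    n = np.asarray(normal, float)
    cos_i = -np.dot(n, d)
    sin2_t = c_ratio ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:                              # total internal reflection
        return None
    t = c_ratio * d + (c_ratio * cos_i - np.sqrt(1.0 - sin2_t)) * n
    return t / np.linalg.norm(t)
```

The bend computed here is exactly what displaces the tumor from its true position 2 to the imaged position 1 in fig. 2.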
As shown in fig. 1, the present embodiment provides a method for identifying a laryngeal space-occupying lesion based on image identification, the method comprising:
Moving an ultrasonic probe along a preset path in the patient's throat, the preset path including but not limited to linear, rotational or free movement, and acquiring two-dimensional ultrasonic images;
After filtering and enhancing the two-dimensional ultrasonic images of the patient's throat, mapping the pixels of the two-dimensional ultrasonic images corresponding to different scanning points into a spatial three-dimensional coordinate system through the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat;
Obtaining the three-dimensional gradients of the three-dimensional image of the patient's throat, detecting the edges of the three-dimensional image through the three-dimensional gradients, inputting the edges into a cyst detection network to obtain a cyst form and into a tumor detection network to obtain a tumor form, wherein the cyst form comprises a cyst center coordinate position and a cyst boundary, and the tumor form comprises a tumor center coordinate position and a tumor boundary; when the cyst form and the tumor form exist simultaneously, connecting the cyst center coordinate position and the tumor center coordinate position to obtain a first path, changing the ultrasonic frequency of the ultrasonic probe and moving it along the preset path in the patient's throat to reconstruct the three-dimensional image of the throat and identify the cyst form and the tumor form again, connecting the new cyst center coordinate position and tumor center coordinate position to obtain a second path, obtaining a corrected tumor center coordinate position through the first path and the second path, and obtaining the throat space-occupying lesion parameters through the corrected tumor center coordinate position and the tumor boundary; the cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the tumor detection network is a tumor edge feature classifier established through a neural network algorithm.
Illustratively, a patient visits a hospital for laryngeal discomfort, and the physician uses ultrasound to examine the larynx and identify possible space-occupying lesions. However, when a cyst exists in the throat of the patient, the ultrasonic waves are refracted when penetrating the cyst effusion, so that the imaged position of the tumor is an artifact, which interferes with the physician's diagnosis. The ultrasound probe is first moved along preset paths in the patient's throat, including linear, rotational and free movement, to acquire two-dimensional ultrasound images and to record the position and orientation of the probe at each scan point. The acquired images are enhanced by applying a Gaussian filtering technique to remove noise and improve image quality. Using the enhanced two-dimensional images and the scanning position and direction information, the images are reconstructed into a three-dimensional voxel model of the patient's throat through pixel mapping, the voxel blanks in the model are filled, and the color and transparency of each voxel are calculated. The edges of the image are detected from the three-dimensional gradient information, and the edges are then input into specially designed neural networks to detect the forms of cysts and tumors respectively. These neural networks can distinguish the specific locations and boundaries of cysts and tumors based on the characteristics of the ultrasound image. After the cyst and the tumor are identified, path analysis is performed using their center coordinates: the path along which the ultrasonic wave penetrates the cyst and reaches the tumor is first determined, and the true position of the tumor is then calculated by changing the ultrasonic frequency, collecting data again, and comparing the change of the tumor center coordinates between the two scanning results.
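The edge-detection step that feeds the two networks can be sketched as a gradient-magnitude threshold over the reconstructed volume; the use of central differences via np.gradient and the percentile threshold are choices of this sketch, and the cyst and tumor detection networks themselves are not reproduced here.

```python
import numpy as np

def edge_mask(volume, threshold=None):
    """Edge voxels of the reconstructed 3-D throat volume.

    volume    : 3-D array of voxel intensities.
    threshold : edges are voxels whose gradient magnitude exceeds this value;
                if None, the 95th percentile is used (an arbitrary default).
    """
    gx, gy, gz = np.gradient(np.asarray(volume, dtype=float))   # central differences
    magnitude = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    if threshold is None:
        threshold = np.percentile(magnitude, 95)
    return magnitude > threshold        # boolean mask handed to the detection networks
```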
Further, the method for filtering and enhancing the two-dimensional ultrasonic image of the throat of the patient comprises the following steps:
A Gaussian kernel of size n×n and standard deviation σ is preset, and the kernel value G(i, j) at kernel position (i, j) of the Gaussian kernel is:
G(i, j) = (1/(2πσ²))·exp(−(i² + j²)/(2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and a convolution operation is carried out between the normalized Gaussian kernel and each pixel of the two-dimensional ultrasonic image. The convolution operation comprises aligning the Gaussian kernel with each pixel of the two-dimensional ultrasonic image in turn, and denoising and enhancing the pixel value I(u, v) at coordinates (u, v) to obtain the corresponding pixel value I′(u, v) as follows:
I′(u, v) = Σi Σj G(i, j)·I(u+i, v+j), with i and j each running from −k to k;
where k is the radius of the Gaussian kernel and I(u+i, v+j) is the pixel value at the position (u+i, v+j) offset from the current coordinates (u, v); the convolution operation is repeated until all pixel values I(u, v) of the two-dimensional ultrasonic image have been denoised and enhanced into pixel values I′(u, v).
Illustratively, a two-dimensional image of the patient's throat is acquired by the ultrasonic probe, and the filtering enhancement of the image employs, for example, a Gaussian kernel of size 5×5 with a suitable standard deviation, e.g. σ = 1.5, which effectively removes random noise from the image while preserving important structural features, such as the edges of a throat tumor or cyst. Specifically, each pixel value is replaced with a weighted average of its neighboring pixels, the weights being determined by the Gaussian kernel, so that the image is smoothed and enhanced.
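For reference, the same 5×5, σ = 1.5 smoothing can be obtained with the gaussian_smooth sketch given earlier or with a library routine such as OpenCV's GaussianBlur; OpenCV and the file name below are not part of the source and are used only as a familiar stand-in.

```python
import cv2  # pip install opencv-python

# Hypothetical single frame; in practice the frame comes from the ultrasound probe.
image = cv2.imread("throat_scan.png", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(image, (5, 5), 1.5)   # 5x5 kernel, sigma = 1.5
```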
Further, the method for mapping the pixels of the two-dimensional ultrasonic images corresponding to different scanning points into a spatial three-dimensional coordinate system through the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat comprises the following steps:
A translation vector is obtained from the change of the scanning position, and a rotation vector is obtained from the change of the scanning direction; the coordinates (u, v) of a pixel of the two-dimensional ultrasonic image are mapped to the coordinates (u′, v′, w′) of a voxel in the spatial three-dimensional coordinate system by applying a transformation matrix T to (u, v), wherein the transformation matrix T is composed of the rotation described by the rotation vector and the translation described by the translation vector.
Illustratively, a three-dimensional voxel model of the patient's throat is constructed from the enhanced two-dimensional images and the scanning position and direction information recorded for each image. Each pixel of a two-dimensional image is mapped onto a voxel in three-dimensional space; the mapping calculates the exact position of each pixel in three-dimensional space by a geometric transformation using the pre-recorded probe position and direction data. Blank areas in the three-dimensional model are filled by interpolation, and the color and transparency of a blank voxel are estimated from the colors and positions of the known voxels.
Further, the method for interpolating voxel blanks in the three-dimensional voxel model comprises the following steps:
A coordinate point P(u′, v′, w′) at a voxel blank position in the spatial three-dimensional coordinate system is obtained, and the voxel values adjacent to P(u′, v′, w′) in three mutually perpendicular directions of the spatial three-dimensional coordinate system are obtained; the interpolated voxel value VP at the coordinate point P(u′, v′, w′) is:
VP(u′,v′,w′)=V000(1-u′)(1-v′)(1-w′)+V001(1-u′)(1-v′)w′+V010(1-u′)v′(1-w′)+V011(1-u′)v′w′+V100u′(1-v′)(1-w′)+V101u′(1-v′)w′+V110u′v′(1-w′)+V111u′v′w′;
where V000 is the adjacent voxel value in the minimum x, y and z directions, V001 in the maximum z and minimum x and y directions, V010 in the maximum y and minimum x and z directions, V011 in the maximum y and z and minimum x directions, V100 in the maximum x and minimum y and z directions, V101 in the maximum x and z and minimum y directions, V110 in the maximum x and y and minimum z directions, and V111 in the maximum x, y and z directions.
Illustratively, using the enhanced two-dimensional ultrasound images and their corresponding scanning positions and directions, the pixels of each two-dimensional image are mapped into three-dimensional space and organized into a continuous three-dimensional voxel model; the scanning position and direction information determines the exact location of each pixel in three-dimensional space. Because the scanning is discontinuous, blank areas may exist in the model. Interpolation is used to estimate and fill in the voxel values of these blank areas: the values of the known voxels around a blank voxel are obtained, and the value of the blank voxel is calculated by interpolation.
Further, the method for obtaining the corrected tumor center coordinate position through the first path and the second path comprises the following steps:
The incident point positions L1(u′, v′, w′) and L2(u′, v′, w′) of the ultrasonic waves corresponding to the first path and the second path are acquired, and the direction vectors of the ultrasonic waves corresponding to the first path and the second path are recorded; the tumor center coordinate positions corresponding to the first path and the second path are recorded as P1(u′, v′, w′) and P2(u′, v′, w′) respectively, the corrected tumor center coordinate position is recorded as P(u′, v′, w′), the refraction point positions where the ultrasonic waves penetrate the cyst on the first path and the second path are recorded as R1(u′, v′, w′) and R2(u′, v′, w′), and an ultrasonic refraction equation set is established over these quantities;
where d1 is the distance from R1(u′, v′, w′) to P1(u′, v′, w′) and d2 is the distance from R2(u′, v′, w′) to P2(u′, v′, w′); the corrected tumor center coordinate position P(u′, v′, w′) is calculated from the ultrasonic refraction equation set.
Illustratively, as shown in FIG. 2, accurately locating the tumor requires correcting the artifact caused by refraction of the ultrasonic waves in the cyst fluid. The displayed position of the tumor is an artifact because the propagation direction of the ultrasonic waves changes when they pass through different media, such as the cyst fluid and the surrounding tissue. The method re-collects data of the cyst and tumor area by changing the position of the ultrasonic probe and adjusting the scanning parameters, and records the specific path through the cyst to the tumor, including the incidence point 3 and the refraction point 4 of the ultrasonic wave. By comparing the scanning results of the initial scan and the scan with adjusted parameters, the positional deviation caused by refraction can be identified.
Example 2:
Based on the same inventive concept, as shown in fig. 3, the present embodiment further provides an identification system for laryngeal space-occupying lesions based on image identification, the system comprising:
The image acquisition module is used for moving an ultrasonic probe along a preset path in the throat of a patient and acquiring two-dimensional ultrasonic images, wherein the preset path includes but is not limited to linear, rotational or free movement;
The three-dimensional reconstruction module is used for filtering and enhancing the two-dimensional ultrasonic images of the patient's throat, and then mapping the pixels of the two-dimensional ultrasonic images corresponding to different scanning points into a spatial three-dimensional coordinate system through the scanning positions and scanning directions to obtain a three-dimensional voxel model of the patient's throat;
The throat space-occupying lesion recognition module is used for obtaining the three-dimensional gradients of the three-dimensional image of the patient's throat, detecting the edges of the three-dimensional image through the three-dimensional gradients, inputting the edges into a cyst detection network to obtain a cyst form and into a tumor detection network to obtain a tumor form, wherein the cyst form comprises a cyst center coordinate position and a cyst boundary, and the tumor form comprises a tumor center coordinate position and a tumor boundary; when the cyst form and the tumor form exist simultaneously, the cyst center coordinate position and the tumor center coordinate position are connected to obtain a first path, the ultrasonic probe changes the ultrasonic frequency and moves along the preset path in the patient's throat to reconstruct the three-dimensional image of the throat and identify the cyst form and the tumor form again, the new cyst center coordinate position and tumor center coordinate position are connected to obtain a second path, a corrected tumor center coordinate position is obtained through the first path and the second path, and the throat space-occupying lesion parameters are obtained through the corrected tumor center coordinate position and the tumor boundary; the cyst detection network is a cyst edge feature classifier established through a neural network algorithm, and the tumor detection network is a tumor edge feature classifier established through a neural network algorithm.
Further, the system further comprises:
The image filtering module is used for presetting a Gaussian kernel of size n×n and standard deviation σ, and the kernel value G(i, j) at kernel position (i, j) of the Gaussian kernel is:
G(i, j) = (1/(2πσ²))·exp(−(i² + j²)/(2σ²));
The Gaussian kernel is normalized so that the sum of its kernel values equals 1, and a convolution operation is carried out between the normalized Gaussian kernel and each pixel of the two-dimensional ultrasonic image. The convolution operation comprises aligning the Gaussian kernel with each pixel of the two-dimensional ultrasonic image in turn, and denoising and enhancing the pixel value I(u, v) at coordinates (u, v) to obtain the corresponding pixel value I′(u, v) as follows:
I′(u, v) = Σi Σj G(i, j)·I(u+i, v+j), with i and j each running from −k to k;
where k is the radius of the Gaussian kernel and I(u+i, v+j) is the pixel value at the position (u+i, v+j) offset from the current coordinates (u, v); the convolution operation is repeated until all pixel values I(u, v) of the two-dimensional ultrasonic image have been denoised and enhanced into pixel values I′(u, v).
Further, the system further comprises:
The coordinate transformation module is used for obtaining a translation vector from the change of the scanning position and a rotation vector from the change of the scanning direction; the coordinates (u, v) of a pixel of the two-dimensional ultrasonic image are mapped to the coordinates (u′, v′, w′) of a voxel in the spatial three-dimensional coordinate system by applying a transformation matrix T to (u, v), wherein the transformation matrix T is composed of the rotation described by the rotation vector and the translation described by the translation vector.
Further, the system further comprises:
The interpolation module is used for obtaining a coordinate point P(u′, v′, w′) at a voxel blank position in the spatial three-dimensional coordinate system and obtaining the voxel values adjacent to P(u′, v′, w′) in three mutually perpendicular directions of the spatial three-dimensional coordinate system; the interpolated voxel value VP at the coordinate point P(u′, v′, w′) is:
VP(u′,v′,w′)=V000(1-u′)(1-v′)(1-w′)+V001(1-u′)(1-v′)w′+V010(1-u′)v′(1-w′)+V011(1-u′)v′w′+V100u′(1-v′)(1-w′)+V101u′(1-v′)w′+V110u′v′(1-w′)+V111u′v′w′;
where V000 is the adjacent voxel value in the minimum x, y and z directions, V001 in the maximum z and minimum x and y directions, V010 in the maximum y and minimum x and z directions, V011 in the maximum y and z and minimum x directions, V100 in the maximum x and minimum y and z directions, V101 in the maximum x and z and minimum y directions, V110 in the maximum x and y and minimum z directions, and V111 in the maximum x, y and z directions.
Further, the system further comprises:
The artifact correction module is configured to acquire the incident point positions L1(u′, v′, w′) and L2(u′, v′, w′) of the ultrasonic waves corresponding to the first path and the second path and to record the direction vectors of the ultrasonic waves corresponding to the first path and the second path; the tumor center coordinate positions corresponding to the first path and the second path are recorded as P1(u′, v′, w′) and P2(u′, v′, w′) respectively, the corrected tumor center coordinate position is recorded as P(u′, v′, w′), the refraction point positions where the ultrasonic waves penetrate the cyst on the first path and the second path are recorded as R1(u′, v′, w′) and R2(u′, v′, w′), and an ultrasonic refraction equation set is established over these quantities;
where d1 is the distance from R1(u′, v′, w′) to P1(u′, v′, w′) and d2 is the distance from R2(u′, v′, w′) to P2(u′, v′, w′); the corrected tumor center coordinate position P(u′, v′, w′) is calculated from the ultrasonic refraction equation set.
It should be noted that, regarding the system in the above embodiment, the specific manner in which the respective modules perform the operations has been described in detail in the embodiment regarding the method, and will not be described in detail herein.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.