Disclosure of Invention
The invention aims to provide an intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images. A lung nodule positioning unit identifies and analyzes the B-mode ultrasound images to determine the accurate position of a lung nodule and a safe resection margin, and an AI display unit provides navigation and operation proposals for doctors, thereby helping to improve the surgical outcome and reduce the occurrence of complications, so as to solve the problems in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
an intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images, comprising:
the endoscope unit is used for integrating the B-mode ultrasound probe into the endoscope system, acquiring intraoperative lung B-mode ultrasound images in real time based on the B-mode ultrasound equipment, and preprocessing the acquired images to obtain dynamic image data of the patient's lung structure;
A lung nodule localization unit for:
determining the patient's respiratory motion characteristics based on the dynamic image data of the patient's lung structure, performing respiratory motion compensation on the B-mode ultrasound images based on those characteristics to obtain a compensated lung structure, and detecting and identifying lung nodules in the B-mode ultrasound images;
constructing a three-dimensional model of the patient's lung based on the identified lung nodule information, determining the position and size of the lung nodule and its relation to surrounding tissues through the three-dimensional model, and calculating a safe resection margin based on the position and size of the lung nodule;
AI display unit for:
displaying the patient's operation data, B-mode ultrasound images and analysis results in real time, generating a surgical plan according to the position and size of the lung nodule and the patient's lung structure, and determining the resection position and resection range;
meanwhile, monitoring the surgical progress in real time during the operation and providing real-time navigation instructions for the doctor based on the current position of the resection instrument.
Further, the endoscope unit includes:
the B-mode ultrasound imaging module is used for receiving and processing acoustic signals from the B-mode ultrasound probe in real time and converting them into a visual B-mode ultrasound image;
the image processing module is used for performing smoothing and image enhancement on the original B-mode ultrasound image acquired from the B-mode ultrasound imaging module, and performing image segmentation on the processed image;
the lung contour positioning module is used for extracting features from the segmented B-mode ultrasound image, identifying and marking the lung boundary in the image based on the extracted lung features to obtain the lung contour, and acquiring dynamic image data of the lung by monitoring changes of the lung contour in real time.
Further, the image processing module includes:
the first pixel block extraction module is used for extracting each pixel block of the original B-mode ultrasound image as a main pixel block;
the second pixel block extraction module is used for extracting the pixel blocks in contact with the main pixel block as auxiliary pixel blocks;
the pixel block unit acquisition module is used for treating each main pixel block and its corresponding auxiliary pixel blocks as one pixel block unit;
the first gray value extraction module is used for extracting the gray value of each main pixel block of the original B-mode ultrasound image;
the second gray value extraction module is used for extracting the gray values of the auxiliary pixel blocks in contact with each main pixel block;
the gray reference value acquisition module is used for obtaining the gray reference value corresponding to each pixel block from the gray value of each main pixel block of the original B-mode ultrasound image and the gray values of its auxiliary pixel blocks, wherein the gray reference value is obtained by the following formula:
where R_c denotes the gray reference value, n the number of auxiliary pixel blocks, R_i the gray value of the i-th auxiliary pixel block, R_z the gray value of the main pixel block, and R_01 and R_02 the first and second compensation adjustment coefficients, respectively;
where R_p denotes the average gray value of the auxiliary pixel blocks and R_max the maximum gray value among the auxiliary pixel blocks;
and the gray value adjusting module is used for adjusting the gray value of the main pixel block by utilizing the gray reference value.
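The pixel-block scheme above can be illustrated with a short sketch. The patent's gray reference formula appears only as a figure in the source and is not reproduced in the text, so the combination rule below (blending the main block's gray value with the auxiliary mean R_p, damped by the ratio R_p/R_max, with assumed default coefficients R_01 = R_02 = 0.5) is purely a hypothetical placeholder, not the patented formula:

```python
import numpy as np

def gray_reference(main_gray, aux_grays, r01=0.5, r02=0.5):
    """Hypothetical gray reference value R_c for one pixel block unit.

    The patent's exact formula is not reproduced in the text; this
    placeholder blends the main block's gray value R_z with the mean of
    its touching auxiliary blocks R_p, damping the latter by its ratio
    to the auxiliary maximum R_max. R_01 and R_02 are the compensation
    adjustment coefficients (the default values are assumptions).
    """
    aux = np.asarray(aux_grays, dtype=float)
    r_p = aux.mean()          # average auxiliary gray value R_p
    r_max = aux.max()         # maximum auxiliary gray value R_max
    return r01 * main_gray + r02 * r_p * (r_p / r_max)

def adjust_main_block(main_gray, aux_grays):
    """Pull the main block's gray value toward its reference value."""
    r_c = gray_reference(main_gray, aux_grays)
    return 0.5 * (main_gray + r_c)   # simple averaging step (assumption)
```

With this placeholder, a block in a uniform neighborhood keeps its gray value unchanged, while a block that deviates from its neighbors is pulled toward the neighborhood reference.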
Further, the gray value adjusting module includes:
the gray reference value calling module is used for calling the gray reference value;
the first gray difference acquisition module is used for obtaining the gray difference between the gray reference value and the gray value of the main pixel block as a first gray difference;
the second gray difference acquisition module is used for obtaining the gray difference between the gray reference value and the gray value of each auxiliary pixel block as a second gray difference;
the target gray value acquisition module is used for acquiring the target gray value of the main pixel block by utilizing the first gray difference value and the second gray difference value, wherein the target gray value is acquired by the following formula:
where R_m denotes the target gray value of the main pixel block, R_si the second gray difference corresponding to the i-th auxiliary pixel block, and R_sz the first gray difference corresponding to the main pixel block;
and the adjustment execution module is used for adjusting the gray value of the main pixel block according to the target gray value of the main pixel block.
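The target-gray-value step can be sketched as follows. As with the reference value, the patent's formula is given only as a figure and is not reproduced in the text, so the rule below (offsetting the reference R_c by the first gray difference R_sz, softened by the mean of the second gray differences R_si) is a hypothetical placeholder:

```python
def target_gray_value(r_c, main_gray, aux_grays):
    """Hypothetical target gray value R_m for a main pixel block.

    The patent's formula is not reproduced in the text. This placeholder
    offsets the reference value R_c by the first gray difference R_sz
    (reference minus main block), softened by the mean of the second
    gray differences R_si (reference minus each auxiliary block).
    """
    r_sz = r_c - main_gray                 # first gray difference
    r_si = [r_c - a for a in aux_grays]    # second gray differences
    mean_si = sum(r_si) / len(r_si)
    return r_c - 0.5 * (r_sz - mean_si)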
Further, the image processing module further includes:
the gray value difference acquisition module is used for extracting, after gray value adjustment of all pixel blocks of the original B-mode ultrasound image is completed, the gray value difference between each main pixel block and each of its auxiliary pixel blocks;
the gray value difference comparison module is used for comparing the gray value difference between each adjusted main pixel block and each auxiliary pixel block with a preset difference threshold;
the gray value data information extraction module is used for extracting the gray values of the auxiliary pixel blocks whose gray value differences exceed the preset difference threshold;
the gray value compensation coefficient acquisition module is used for obtaining a gray value compensation coefficient for those auxiliary pixel blocks from their gray value data, wherein the gray value compensation coefficient is obtained by the following formula:
where R_m denotes the gray value compensation coefficient, R_fx and R_fh the gray values of the auxiliary pixel block before and after gray value adjustment, respectively, R_e the gray value difference between the auxiliary pixel block and the main pixel block, and R_cy the preset difference threshold;
the gray value compensation adjustment module is used for carrying out gray value compensation adjustment on the gray value of the auxiliary pixel block by using a gray value compensation coefficient, wherein the gray value of the auxiliary pixel block after gray value compensation adjustment is obtained through the following formula:
where R_t denotes the gray value of the auxiliary pixel block after compensation adjustment, and R_zh the gray value of the main pixel block after gray value adjustment.
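The compensation step can be sketched as follows. The patent's two formulas are not reproduced in the text, and the real coefficient also involves the pre-adjustment value R_fx, which this placeholder omits; here the residual difference R_e between the adjusted auxiliary value R_fh and the adjusted main value R_zh is compared with the threshold R_cy, and out-of-threshold values are blended toward the main block:

```python
def compensate_aux_block(r_fh, r_zh, r_cy):
    """Hypothetical compensation of an auxiliary block's gray value.

    Placeholder only: if the residual gray difference R_e exceeds the
    preset threshold R_cy, the adjusted auxiliary value R_fh is blended
    toward the adjusted main value R_zh so the residual shrinks to R_cy.
    """
    r_e = abs(r_fh - r_zh)        # residual gray difference R_e
    if r_e <= r_cy:
        return r_fh               # within threshold: no compensation
    coeff = r_cy / r_e            # assumed compensation coefficient in (0, 1)
    return r_zh + coeff * (r_fh - r_zh)
```

Under this rule, compensating an auxiliary value of 150 toward a main value of 100 with threshold 20 yields 120, leaving a residual difference of exactly the threshold.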
Further, the lung contour positioning module specifically comprises:
determining the appearance features of the patient's lung based on the B-mode ultrasound image, and calculating the gradient strength and gradient direction of pixels in the image based on those features to determine discrete edge points along the lung boundary;
identifying the end points of the patient's lung contour line based on the discrete edge points, the end points including the starting point, end point and turning points of the contour;
searching the neighboring pixels of each end point and continuing the search along the direction of high gradient strength until returning to the starting point, thereby forming a complete lung contour line;
determining the distances between the end points of the contour line, extracting the contour line's feature parameters and smoothing them to obtain a complete and smooth lung contour line;
comparing the lung contour lines at different time points and analyzing the dynamic functional changes of the patient's lung based on the comparison results to obtain dynamic image data of the patient's lung structure.
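The gradient-based edge-point step above can be sketched in a few lines; the central-difference gradient estimate and the threshold value of 50 are assumptions, not taken from the source:

```python
import numpy as np

def lung_edge_points(image, threshold=50.0):
    """Discrete edge points from per-pixel gradient strength and direction.

    Minimal sketch of the contour step: estimate the gradient with
    central differences, keep pixels whose gradient magnitude exceeds a
    threshold (the value 50 is an assumption), and return the direction
    map for use when tracing along high-gradient neighbors.
    """
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)              # derivatives along rows, cols
    magnitude = np.hypot(gx, gy)           # gradient strength
    direction = np.arctan2(gy, gx)         # gradient direction (radians)
    ys, xs = np.nonzero(magnitude > threshold)
    return list(zip(ys.tolist(), xs.tolist())), direction
```

On a synthetic image with a vertical step edge, the returned points line up along the two columns straddling the step, which is the raw material for the end-point and tracing steps that follow.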
Further, the lung nodule positioning unit comprises:
the respiratory motion compensation module is used for determining the patient's respiratory motion characteristics based on the dynamic image data of the patient's lung structure, determining the patient's respiratory pattern from those characteristics, and correcting and compensating the B-mode ultrasound images based on that pattern;
the lung nodule recognition module is used for recognizing and extracting nodules in the image with image processing and analysis algorithms, analyzing the extracted pixel-value, texture and shape features, and marking the positions and extents of the lung nodules;
the lung nodule analysis module is used for converting consecutive two-dimensional B-mode ultrasound images into a three-dimensional model through three-dimensional reconstruction, the model containing the position and size of each lung nodule and its relation to surrounding tissues;
the lung nodule positioning module is used for calculating the coordinates and size of each lung nodule based on the three-dimensional model and the nodule analysis results, and for evaluating the important structures around the nodule and their importance grades based on the relation to surrounding tissues.
Further, the lung nodule recognition module specifically comprises:
determining a threshold for the B-mode ultrasound image from its gray histogram, selecting the peak position of the histogram as the preset threshold, dividing the image into foreground and background based on that threshold and the image's gray features, and preliminarily detecting possible nodule regions;
extracting the shape features of nodules within the possible nodule regions, analyzing their texture features, measuring their sizes and comparing them with a preset threshold;
classifying and integrating the extracted features to generate a lung nodule set, removing overlapping nodules from the set, merging adjacent nodules, and processing the result to separate true lung nodules from false positives;
outputting the position, size and shape of each detected true lung nodule in graphical and textual form.
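The histogram-peak thresholding step above can be sketched as follows; the bin count of 256 over an 8-bit gray range is an assumption:

```python
import numpy as np

def detect_candidate_regions(image):
    """Preliminary nodule detection by histogram-peak thresholding.

    Sketch of the described step: take the peak position of the gray
    histogram as the preset threshold and split the image into
    foreground (possible nodule pixels) and background.
    """
    img = np.asarray(image)
    hist, bin_edges = np.histogram(img, bins=256, range=(0, 256))
    peak_bin = int(np.argmax(hist))     # peak position of the histogram
    threshold = bin_edges[peak_bin]
    foreground = img > threshold        # possible nodule region mask
    return threshold, foreground
```

Since the histogram peak sits at the dominant background gray level, pixels brighter than the peak survive as candidate regions for the shape and texture analysis that follows.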
Further, the lung nodule positioning module specifically comprises:
determining a coordinate system based on the three-dimensional model constructed by the lung nodule analysis module, and calculating the coordinates of the lung nodule recognition module's results in the three-dimensional image within that coordinate system;
determining the major axis, secondary axis and minimum axis of each lung nodule in the recognition results, and constructing a local coordinate system for each nodule, with the nodule's center as the origin and the major, secondary and minimum axes corresponding to the x, y and z axes, respectively;
determining the size of each nodule from its local coordinate system, calculating the number of pixels the nodule occupies, and determining the nodule's volume by multiplying that count by the spatial resolution of each pixel.
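The volume calculation above reduces to counting occupied voxels and scaling by the per-voxel volume; a minimal sketch (the millimetre units are an assumption):

```python
def nodule_volume(voxel_mask, voxel_size_mm):
    """Volume of a lung nodule from its voxel count.

    Sketch of the described calculation: count the voxels the nodule
    occupies in the three-dimensional model and multiply by the volume
    of a single voxel (the spatial resolution per axis, here in mm).
    """
    dx, dy, dz = voxel_size_mm
    count = sum(1 for v in voxel_mask if v)   # occupied voxels (flattened mask)
    return count * dx * dy * dz               # volume in cubic millimetres
```

For example, 10 occupied voxels at 0.5 mm isotropic resolution give 10 × 0.125 = 1.25 mm³.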
Further, the AI display unit includes:
the scheme acquisition module is used for acquiring the patient's actively uploaded historical medical images and physiological data, and generating the patient's surgical plan based on them together with the important structures around the lung nodule and their importance grades;
the real-time positioning module is used for acquiring the coordinate data from the lung nodule positioning module, monitoring the position and state of the surgical equipment in real time, and transmitting the acquired positioning data to the real-time navigation module in real time;
the real-time navigation module is used for receiving the surgical equipment's positioning data from the real-time positioning module, calculating the coordinate difference between that positioning data and the lung nodule, and generating real-time navigation instructions;
the display module is used for displaying the patient's operation data and the real-time images acquired by B-mode ultrasound;
the intraoperative data recording and analysis module is used for recording detailed data of the surgical process in real time, including the position, size and shape of the lung nodule and the doctor's surgical operation data;
the remote collaboration module is used for transmitting the intraoperative B-mode ultrasound images and nodule positioning data in real time to the remote doctor's terminal over the Internet of Things, enabling real-time data interaction between the remote doctor and the on-site doctor.
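The real-time navigation module's coordinate-difference step can be sketched as a simple vector computation; the 2 mm arrival tolerance and the instruction names are assumptions, not taken from the source:

```python
import math

def navigation_instruction(tool_xyz, nodule_xyz, arrived_mm=2.0):
    """Navigation step from the tool/nodule coordinate difference.

    Sketch of the described step: subtract the surgical tool's position
    from the nodule's coordinates (both expressed in the 3D model's
    frame) and emit a unit direction vector plus the remaining distance.
    The 2 mm arrival tolerance is an assumption.
    """
    delta = [n - t for n, t in zip(nodule_xyz, tool_xyz)]
    distance = math.sqrt(sum(d * d for d in delta))
    if distance <= arrived_mm:
        return "arrived", (0.0, 0.0, 0.0), distance
    unit = tuple(d / distance for d in delta)
    return "advance", unit, distance
```

A tool at the origin with a nodule at (3, 4, 0) mm yields an "advance" instruction along (0.6, 0.8, 0.0) with 5 mm remaining.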
Compared with the prior art, the invention has the beneficial effects that:
The system acquires intraoperative lung B-mode ultrasound images through the endoscope unit and processes them. The lung nodule positioning unit identifies and analyzes the B-mode ultrasound images, monitors the patient's respiratory motion and dynamically compensates the images, thereby eliminating the influence of respiratory motion on lung nodule positioning and improving positioning accuracy and stability, and determines the accurate position of the lung nodule and a safe resection margin, enabling the doctor to perform more accurate nodule positioning and analysis in a three-dimensional model and obtain the patient's lung structure information. The AI display unit provides navigation and operation proposals for the doctor, offering real-time and accurate lung nodule positioning and surgical navigation, which helps to improve the surgical outcome and reduce the occurrence of complications.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to solve the technical problems of inaccurate positioning and frequent complications in the prior art, which restrict its effect in practical application, referring to fig. 1, the present embodiment provides the following technical solutions:
an intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images, comprising:
the endoscope unit is used for integrating the B-mode ultrasound probe into the endoscope system, acquiring intraoperative lung B-mode ultrasound images in real time based on the B-mode ultrasound equipment, and preprocessing the acquired images to obtain dynamic image data of the patient's lung structure;
A lung nodule localization unit for:
determining the patient's respiratory motion characteristics based on the dynamic image data of the patient's lung structure, performing respiratory motion compensation on the B-mode ultrasound images based on those characteristics to obtain a compensated lung structure, and detecting and identifying lung nodules in the B-mode ultrasound images;
constructing a three-dimensional model of the patient's lung based on the identified lung nodule information, determining the position and size of the lung nodule and its relation to surrounding tissues through the three-dimensional model, and calculating a safe resection margin based on the position and size of the lung nodule;
AI display unit for:
displaying the patient's operation data, B-mode ultrasound images and analysis results in real time, generating a surgical plan according to the position and size of the lung nodule and the patient's lung structure, and determining the resection position and resection range;
meanwhile, monitoring the surgical progress in real time during the operation and providing real-time navigation instructions for the doctor based on the current position of the resection instrument, so that the resection instrument operated by the doctor can accurately reach the lung nodule.
In this embodiment, the intraoperative B-mode ultrasound is radiation-free and is integrated into the endoscope system, so the position of a lung nodule can be located in real time and shown on the display. The intraoperative lung B-mode ultrasound images are acquired through the endoscope unit and processed; the lung nodule positioning unit identifies and analyzes the images, monitors the patient's respiratory motion and dynamically compensates the images, thereby eliminating the influence of respiratory motion on lung nodule positioning, improving positioning accuracy and stability, and determining the accurate position of the lung nodule and a safe resection margin. The doctor can thus perform more accurate nodule positioning and analysis in a three-dimensional model and obtain the patient's lung structure information, while the AI display unit provides navigation and operation proposals, offering real-time and accurate lung nodule positioning and surgical navigation, improving the accuracy and efficiency of the operation, improving the surgical outcome and reducing the occurrence of complications.
In this embodiment, the endoscope unit includes:
the B-mode ultrasound imaging module is used for receiving and processing acoustic signals from the B-mode ultrasound probe in real time and converting them into a visual B-mode ultrasound image;
the image processing module is used for performing smoothing and image enhancement on the original B-mode ultrasound image acquired from the B-mode ultrasound imaging module, and performing image segmentation on the processed image to separate different regions such as lung tissue, gas and liquid, using a segmentation algorithm based on gray value, texture or shape features suited to the characteristics of B-mode ultrasound images;
the lung contour positioning module is used for extracting features from the segmented B-mode ultrasound image, including the lung's edge, texture and shape features, identifying and marking the lung boundary in the image based on the extracted features to obtain the lung contour, and acquiring dynamic image data of the lung by monitoring changes of the lung contour in real time;
specifically, the image processing module includes:
the first pixel block extraction module is used for extracting each pixel block of the original B-mode ultrasound image as a main pixel block;
the second pixel block extraction module is used for extracting the pixel blocks in contact with the main pixel block as auxiliary pixel blocks;
the pixel block unit acquisition module is used for treating each main pixel block and its corresponding auxiliary pixel blocks as one pixel block unit;
the first gray value extraction module is used for extracting the gray value of each main pixel block of the original B-mode ultrasound image;
the second gray value extraction module is used for extracting the gray values of the auxiliary pixel blocks in contact with each main pixel block;
the gray reference value acquisition module is used for obtaining the gray reference value corresponding to each pixel block from the gray value of each main pixel block of the original B-mode ultrasound image and the gray values of its auxiliary pixel blocks, wherein the gray reference value is obtained by the following formula:
where R_c denotes the gray reference value, n the number of auxiliary pixel blocks, R_i the gray value of the i-th auxiliary pixel block, R_z the gray value of the main pixel block, and R_01 and R_02 the first and second compensation adjustment coefficients, respectively;
where R_p denotes the average gray value of the auxiliary pixel blocks and R_max the maximum gray value among the auxiliary pixel blocks;
and the gray value adjusting module is used for adjusting the gray value of the main pixel block by utilizing the gray reference value.
The technical effect of this scheme is that, by extracting each main pixel block together with its surrounding auxiliary pixel blocks and combining them into a pixel block unit, the image processing module can process local regions of the image more finely. This approach helps to enhance details in the image, especially where there are gray differences between the main pixel block and its auxiliary pixel blocks.
The gray value adjustment module can adjust the gray value of the main pixel block using the gray reference value calculated from the gray values of the main and auxiliary pixel blocks. Such adjustment makes the image more uniform, reducing gray value anomalies caused by noise or signal interference.
By introducing the first compensation adjustment coefficient (R_01) and the second compensation adjustment coefficient (R_02), the technical scheme allows a user or the system to adjust how the gray reference value is calculated according to specific requirements. This flexibility enables the image processing module to accommodate different B-mode images and processing requirements, thereby optimizing image quality.
The introduction of the average gray value (R_p) and the maximum gray value (R_max) of the auxiliary pixel blocks helps to reduce the impact of individual outlier pixels (e.g. noise points) on the gray reference value calculation. This makes the image processing module more robust and able to provide accurate gray reference values in the presence of noise.
The contrast of the B ultrasonic image after gray value adjustment may be enhanced. This is because the gray value adjustment module can perform targeted adjustment on the main pixel block according to the gray reference value, so as to highlight the key information in the image.
In summary, according to the technical scheme, the quality of the B-mode ultrasonic image can be effectively improved by finely processing the local area of the B-mode ultrasonic image, utilizing the gray reference value and the compensation adjustment coefficient to adjust the gray value and the like, including enhancing the image details, reducing the noise influence, improving the image contrast and the like. This is of great importance for medical diagnosis and disease analysis.
Specifically, the gray value adjustment module includes:
the gray reference value calling module is used for calling the gray reference value;
the first gray difference acquisition module is used for obtaining the gray difference between the gray reference value and the gray value of the main pixel block as a first gray difference;
the second gray difference acquisition module is used for obtaining the gray difference between the gray reference value and the gray value of each auxiliary pixel block as a second gray difference;
the target gray value acquisition module is used for acquiring the target gray value of the main pixel block by utilizing the first gray difference value and the second gray difference value, wherein the target gray value is acquired by the following formula:
where R_m denotes the target gray value of the main pixel block, R_si the second gray difference corresponding to the i-th auxiliary pixel block, and R_sz the first gray difference corresponding to the main pixel block;
and the adjustment execution module is used for adjusting the gray value of the main pixel block according to the target gray value of the main pixel block.
The technical effect of the technical scheme is that the module can accurately identify the gray level difference between each main pixel block and the surrounding environment of the main pixel block in the image by calculating the gray level difference between the gray level reference value and the main pixel block and the auxiliary pixel block. Then, the target gray value of the main pixel block is calculated by using the gray difference values, so that the accurate adjustment of the gray value of the main pixel block is realized. Such adjustment helps to improve the contrast of the image, reduce local brightness non-uniformity, and thereby improve the overall quality of the image.
Since the gray value adjustment is based on the gray differences between the main pixel block and its auxiliary pixel blocks, the module can enhance details in the image. Adjusting the main pixel block's gray value makes it more consistent with its surroundings and can highlight key information in the image such as lesions and blood vessels, thereby improving the accuracy of doctors' image-based diagnoses.
In calculating the gray difference value, the module takes into account the gray value of the secondary pixel block, which helps to reduce the effect of noise on the adjustment of the gray value of the primary pixel block. Since the secondary pixel blocks typically contain similar image information as the primary pixel blocks, their gray values can provide useful reference information, helping the module to more accurately identify noise points and reduce their impact.
The whole gray value adjustment process is automatic, and manual intervention is not needed. The efficiency and the accuracy of image processing are greatly improved, and the influence of human factors on the image processing result is reduced.
In summary, the gray value adjusting module calculates the gray difference values and adjusts the gray value of the main pixel block based on the difference values, so that the quality of the B-mode ultrasonic image can be remarkably improved, the image details can be enhanced, the noise influence can be reduced, and the flexibility and the automation degree are high. The technical effects have important application values in the aspects of medical diagnosis, image analysis, subsequent processing and the like.
Specifically, the image processing module further includes:
the gray value difference acquisition module is used for extracting, after gray value adjustment of all pixel blocks of the original B-mode ultrasound image is completed, the gray value difference between each main pixel block and each of its auxiliary pixel blocks;
the gray value difference comparison module is used for comparing the gray value difference between each adjusted main pixel block and each auxiliary pixel block with a preset difference threshold;
the gray value data information extraction module is used for extracting the gray values of the auxiliary pixel blocks whose gray value differences exceed the preset difference threshold;
the gray value compensation coefficient acquisition module is used for obtaining a gray value compensation coefficient for those auxiliary pixel blocks from their gray value data, wherein the gray value compensation coefficient is obtained by the following formula:
where R_m denotes the gray value compensation coefficient, R_fx and R_fh the gray values of the auxiliary pixel block before and after gray value adjustment, respectively, R_e the gray value difference between the auxiliary pixel block and the main pixel block, and R_cy the preset difference threshold;
the gray value compensation adjustment module is used for carrying out gray value compensation adjustment on the gray value of the auxiliary pixel block by using a gray value compensation coefficient, wherein the gray value of the auxiliary pixel block after gray value compensation adjustment is obtained through the following formula:
where R_t denotes the gray value of the auxiliary pixel block after compensation adjustment, and R_zh the gray value of the main pixel block after gray value adjustment.
The technical effect of this technical scheme is that, through the gray value difference acquisition module and the gray value difference comparison module, the system can identify significant gray-value differences between the main pixel block and the auxiliary pixel blocks. Such differences may arise from noise during image acquisition, device errors, or characteristics of the image itself; by reducing them in subsequent processing, the system enhances the gray-value consistency of the image. The gray value data information extraction module and the gray value compensation coefficient acquisition module work together to identify the auxiliary pixel blocks that need adjustment and to calculate appropriate gray value compensation coefficients for them. The purpose of this step is to improve the overall image quality by bringing the gray value of each auxiliary pixel block closer to that of the main pixel block.
The entire processing flow is automated and requires no manual intervention, which makes image processing more efficient and faster and reduces errors introduced by human factors. By setting different difference thresholds, a user can adjust how strongly gray-value differences are processed according to actual requirements; this flexibility and customizability enable the solution to adapt to different scenarios and application requirements. The gray value compensation adjustment module then adjusts the gray value of each auxiliary pixel block using the calculated compensation coefficient, optimizing the visual effect of the image. Such adjustment can reduce non-uniformities and artifacts, making the image clearer and easier to interpret.
In summary, through a series of modularized processing steps, this technical scheme adjusts the gray-value differences between the main pixel block and the auxiliary pixel blocks in the B-ultrasonic image, thereby enhancing gray-value consistency, improving image quality and optimizing the visual effect, while remaining automated, flexible and customizable.
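Since the scheme's exact compensation formulas are given only by the symbol definitions above, the block-wise flow (detect auxiliary blocks whose gray-value difference from the main block exceeds the preset threshold, then pull them toward the main block's gray level) can be sketched as follows. The blending factor `alpha` is an assumed illustrative parameter, not the patent's formula:

```python
import numpy as np

def compensate_blocks(image, main_block, aux_blocks, diff_threshold=20, alpha=0.5):
    """Illustrative block-wise gray-value compensation.

    `image` is a 2-D uint8 array; `main_block` and each entry of
    `aux_blocks` are (row_slice, col_slice) pairs.  The exact
    compensation coefficient formula is not reproduced in the text,
    so `alpha` (how far an auxiliary block is pulled toward the main
    block's mean) stands in as an assumed illustrative parameter.
    """
    out = image.astype(np.float64)
    main_mean = out[main_block].mean()
    for blk in aux_blocks:
        diff = out[blk].mean() - main_mean
        # only blocks whose difference exceeds the preset threshold are adjusted
        if abs(diff) > diff_threshold:
            out[blk] -= alpha * diff
    return np.clip(out, 0, 255).astype(np.uint8)
```

A block whose mean differs from the main block by less than the threshold is left untouched, matching the comparison step described above.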
In this embodiment, the lung contour positioning module specifically includes:
determining the appearance characteristics of the patient's lung based on the B-ultrasonic image, and calculating the gradient strength and gradient direction of the pixel points in the B-ultrasonic image based on these characteristics, so as to determine the discrete edge points of the edge of the patient's lung;
identifying each end point of the patient's lung contour line based on the discrete edge points, wherein the end points comprise the starting point, the end point and the turning points of the contour;
searching the adjacent pixel points of each end point and continuing the search along the direction of high gradient strength until returning to the starting point, so as to form a complete lung contour line;
determining the distances between the end points of the contour line, extracting the characteristic parameters of the contour line, and smoothing the extracted characteristic parameters to obtain a complete and smooth lung contour line;
and comparing the lung contour lines at different time points, and analyzing the dynamic functional changes of the patient's lung based on the comparison result to obtain the dynamic image data of the patient's lung structure.
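The first step above (per-pixel gradient strength and direction, thresholded to discrete edge points) can be sketched as follows; the magnitude threshold is an assumed illustrative value:

```python
import numpy as np

def edge_points(image, mag_threshold=30.0):
    """Compute per-pixel gradient strength and direction and keep the
    discrete edge points whose strength exceeds `mag_threshold`
    (an assumed illustrative cut-off)."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)            # finite-difference gradients
    strength = np.hypot(gx, gy)          # gradient magnitude
    direction = np.arctan2(gy, gx)       # gradient direction (radians)
    ys, xs = np.nonzero(strength > mag_threshold)
    return list(zip(ys.tolist(), xs.tolist())), strength, direction
```

The returned `direction` array is what the subsequent contour-tracing step would follow from end point to end point.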
In this embodiment, the acoustic wave signal from the B-ultrasonic probe is received in real time and converted into a visualized B-ultrasonic image, and the original B-ultrasonic image is smoothed and enhanced to improve its quality. A segmentation algorithm based on gray values, textures or shape characteristics separates the different areas in the image, so that the doctor can see the structure and boundary of the lung more clearly. By obtaining complete and smooth lung contour lines and comparing them at different time points, the dynamic functional changes of the lung can be monitored in real time, which helps the doctor avoid damaging important surrounding structures during the operation and thereby reduces the occurrence of complications.
In this embodiment, the lung nodule positioning unit comprises:
The respiratory motion compensation module is used for determining the respiratory motion characteristics of the patient based on the dynamic image data of the patient's lung structure, determining the patient's respiratory pattern, such as respiratory frequency and respiratory depth, based on these characteristics, and correcting and compensating the B-ultrasonic image based on the respiratory pattern, so as to eliminate the influence of respiratory motion on image quality and ensure the accuracy and reliability of subsequent image processing;
The lung nodule recognition module is used for recognizing and extracting nodules in the image by adopting image processing and analysis algorithms, analyzing the pixel-value, texture and shape features extracted from the image, and marking the positions and ranges of the lung nodules, and specifically comprises the following steps:
determining a threshold value for the B-ultrasonic image by using the gray histogram of the image, selecting the peak position of the gray histogram as the preset threshold, and dividing the image into foreground and background based on the preset threshold according to the gray characteristics of the image, wherein the foreground comprises possible nodules and the background is normal lung tissue; preliminarily detecting possible nodule areas, and removing small noise points by morphological operations while preserving the shape characteristics of the nodules;
extracting the shape characteristics of the nodules, such as circularity, compactness and edge smoothness; analyzing the texture characteristics of the nodules, such as the gray level co-occurrence matrix and local binary patterns; and measuring the size of the nodules, such as diameter and volume, and comparing it with a preset threshold value;
classifying and integrating the extracted features to generate a lung nodule set, removing overlapping nodules from the set and merging adjacent nodules, so as to distinguish true lung nodules from false-positive results;
outputting the position, size and shape information of the detected true lung nodules in the form of graphics and text;
the lung nodule analysis module is used for converting continuous two-dimensional B ultrasonic images into a three-dimensional model based on a three-dimensional reconstruction technology, and the three-dimensional model comprises the position, the size and the relation with surrounding tissues of a lung nodule;
the lung nodule positioning module is used for calculating the coordinates and sizes of the lung nodules based on the three-dimensional model and the analysis result, and evaluating the important structures surrounding each lung nodule, such as blood vessels and bronchi, and grading their importance based on the nodule's relation with the surrounding tissues, specifically comprising:
determining a coordinate system based on the three-dimensional model constructed by the lung nodule analysis module, and calculating the coordinates of the recognition results of the lung nodule recognition module in the three-dimensional image based on the determined coordinate system;
determining the major axis, minor axis and least axis of each lung nodule in the recognition result, and constructing a local coordinate system for each lung nodule, wherein the center of the lung nodule is the origin and the major axis, minor axis and least axis correspond to the x-axis, y-axis and z-axis respectively;
determining the size of each nodule based on its local coordinate system, calculating the number of pixels occupied by the lung nodule, and determining the volume of the lung nodule by multiplying the pixel count by the spatial volume represented by each pixel.
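One common way to realize the local coordinate system and volume computation described above is principal-axis analysis of the nodule's voxel cloud; the sketch below assumes a 3-D boolean mask per nodule and a known voxel size, both hypothetical inputs not specified in the text:

```python
import numpy as np

def nodule_axes_and_volume(mask, voxel_size=(1.0, 1.0, 1.0)):
    """Given a 3-D boolean mask of one lung nodule, derive a local
    coordinate system from the principal axes of its voxel cloud and
    its volume from the voxel count times the per-voxel volume.
    `voxel_size` (mm per voxel along each axis) is an assumed input."""
    coords = np.argwhere(mask).astype(np.float64)
    center = coords.mean(axis=0)               # nodule center = local origin
    cov = np.cov((coords - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues ascending
    # reorder columns largest-spread first: major, minor, least axes (x, y, z)
    axes = eigvecs[:, ::-1]
    volume = coords.shape[0] * float(np.prod(voxel_size))
    return center, axes, volume
```

The eigenvectors of the voxel covariance give mutually orthogonal axes, so they can serve directly as the x, y and z directions of the nodule's local frame.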
In this embodiment, respiratory motion compensation and advanced image processing algorithms allow lung nodules to be identified and positioned more accurately, reducing the likelihood of misjudgment and missed diagnosis. Accurately evaluating the relation between a lung nodule and its surrounding tissues lets the doctor take greater care to avoid damaging important structures during the operation, thereby reducing the risk of complications. By constructing a three-dimensional model and local coordinate systems, the lung nodule positioning module can calculate the coordinates and size of each lung nodule more accurately, improving positioning precision. Calculating the volume and pixel count of a lung nodule provides quantitative support for the doctor to evaluate its growth rate and the treatment effect, helping the doctor determine the optimal surgical path and strategy and formulate a more accurate surgical scheme.
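The recognition module's first steps (histogram-peak thresholding followed by morphological noise removal) can be sketched as below; the minimum region size is an assumed illustrative cut-off, and the sketch assumes candidate nodules are brighter than the background mode:

```python
import numpy as np
from scipy import ndimage

def detect_candidate_nodules(image, min_size=5):
    """Sketch of candidate nodule detection: pick the gray-histogram
    peak as the preset threshold, split the image into foreground
    (possible nodules) and background, then remove small noise specks
    with a morphological opening.  `min_size` (pixels) is an assumed
    illustrative parameter."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    threshold = int(np.argmax(hist))               # peak of the gray histogram
    foreground = image > threshold                 # candidate nodule pixels
    cleaned = ndimage.binary_opening(foreground, structure=np.ones((3, 3)))
    labels, n = ndimage.label(cleaned)             # connected candidate regions
    sizes = ndimage.sum(cleaned, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return labels, keep
```

The opening removes isolated noise pixels while a compact nodule-sized region survives, matching the "remove small noise points while preserving nodule shape" step.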
In the present embodiment, the AI display unit includes:
The scheme acquisition module is used for acquiring the patient's actively uploaded historical medical images and physiological data, and generating a surgical scheme for the patient based on the acquired historical medical images, the physiological data, and the important structures and importance grades of the tissues surrounding the lung nodules, thereby providing the doctor with accurate surgical guidance tailored to the patient's specific condition;
the real-time positioning module is used for acquiring the coordinate data of the lung nodule positioning module, monitoring the position and the state of the operation equipment in real time and transmitting the acquired positioning data to the real-time navigation module in real time;
the real-time navigation module is used for receiving the positioning data of the surgical equipment from the real-time positioning module, calculating the coordinate data difference value between the positioning data of the surgical equipment and the lung nodule, generating a real-time navigation instruction, and ensuring the accuracy and the safety in the surgical process;
The display module is used for displaying the operation data of the patient in real time, including various physiological parameters and operation progress information, and real-time images acquired through B ultrasonic, and providing a real-time view of the internal condition of the patient for doctors;
The intraoperative data recording and analyzing module is used for recording detailed data in the surgical process, including information of the position, the size and the shape of a lung nodule and surgical operation data of a doctor, and can provide a basis for postoperative evaluation and improvement of a surgical method for the doctor through analysis of the data;
the remote collaboration module is used for transmitting the intraoperative B-ultrasonic images and nodule positioning data in real time, based on the Internet of Things, to the terminal of a remote doctor, realizing real-time data interaction between the remote doctor and the on-site doctor, so that the remote doctor can provide remote guidance and suggestions to the on-site doctor, improving the safety and effect of the operation.
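The real-time navigation module's core computation (the coordinate difference between the tracked surgical tool and the nodule, turned into a guidance instruction) can be sketched as follows; the arrival tolerance is an assumed illustrative value:

```python
import numpy as np

def navigation_instruction(tool_pos, nodule_pos, arrival_tol=2.0):
    """Sketch of the real-time navigation step: compute the coordinate
    difference between the surgical tool and the nodule and convert it
    into a unit direction vector plus remaining distance.
    `arrival_tol` (mm) is an assumed illustrative threshold."""
    delta = np.asarray(nodule_pos, float) - np.asarray(tool_pos, float)
    distance = float(np.linalg.norm(delta))
    if distance <= arrival_tol:
        return {"status": "at_target", "distance": distance}
    return {"status": "advance",
            "distance": distance,
            "direction": (delta / distance).tolist()}
```

Called on each positioning update, this yields the per-frame instruction that the display module could render for the doctor.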
In this embodiment, by integrating the patient's historical data with the lung nodule characteristics, a personalized surgical scheme is generated, improving the accuracy and effect of the operation. Real-time monitoring of the surgical progress and the provision of navigation instructions allow the doctor to reach the lung nodule position more quickly, shortening the operation time and improving efficiency. Intraoperative data recording and analysis give the doctor a basis for postoperative evaluation and improvement of the surgical method, raising the doctor's professional level, particularly in the treatment of complex cases. The remote collaboration module enables real-time data interaction and remote guidance between the on-site doctor and the remote doctor, providing all-round support and guarantee for the surgical process, improving the safety and effect of the operation, and at the same time serving as an important tool for postoperative evaluation and improvement.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitutions and modifications made, according to the technical solution and inventive concept of the present invention, by a person skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present invention.