
CN118680677B - Intraoperative pulmonary nodule localization system based on laparoscopic B-ultrasound images - Google Patents


Info

Publication number: CN118680677B
Application number: CN202410696512.XA
Authority: CN (China)
Prior art keywords: lung, gray, pixel block, gray value, module
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN118680677A
Inventors: 许有涛, 吕萌萌, 张怡, 夏文佳, 黄倩, 陈炳
Current Assignee: Jiangsu Cancer Hospital
Original Assignee: Jiangsu Cancer Hospital
Application filed by: Jiangsu Cancer Hospital
Priority: CN202410696512.XA

Classifications

    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 8/085: Clinical applications involving detecting or locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • A61B 8/5207: Devices using data or image processing for ultrasonic diagnosis, involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • A61B 8/5215: Devices using data or image processing for ultrasonic diagnosis, involving processing of medical diagnostic data
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/108: Computer-aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2063: Acoustic tracking systems, e.g. using ultrasound
    • A61B 2034/2065: Tracking using image or pattern recognition
    • G06T 2207/10068: Endoscopic image
    • G06T 2207/10132: Ultrasound image
    • G06T 2207/10136: 3D ultrasound image

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Vascular Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The present invention discloses an intraoperative lung nodule localization system based on endoscopic B-mode ultrasound images, and relates to the technical field of lung nodule localization. It addresses the problems of the prior art, namely inaccurate localization and complications such as hemothorax and pneumothorax, which restrict its effectiveness in practical application. The system comprises an endoscope unit, a lung nodule localization unit and an AI display unit. The endoscope unit acquires intraoperative B-mode ultrasound images of the lung; the lung nodule localization unit identifies and analyzes the images, monitors the patient's respiratory motion and dynamically compensates the images, thereby eliminating the influence of respiratory motion on nodule localization, improving localization accuracy and stability, and determining the precise position of the lung nodule and a safe resection margin; the AI display unit provides the doctor with navigation and surgical plan suggestions. The system thus provides real-time, accurate lung nodule localization and surgical navigation, which helps improve surgical outcomes and reduce complications.

Description

Intraoperative pulmonary nodule positioning system based on endoscopic B-mode ultrasound images
Technical Field
The invention relates to the technical field of lung nodule positioning, and in particular to an intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images.
Background
Because thoracoscopic surgery has the advantages of a small wound and quick recovery, it has become an important means of diagnosing and treating pulmonary nodules. However, locating lung nodules quickly and accurately during surgery, resecting the tumor with maximum accuracy, and preserving lung function to the greatest extent possible have long been challenges for thoracic surgeons. Patent publication No. CN111821034B discloses a magnetically anchored lung nodule positioning device for thoracoscopic surgery. The device comprises: two target magnets used to clamp a target nodule from both sides; two hollow coaxial puncture needles, whose inner bores are respectively larger than the outer dimensions of the two target magnets so that the magnets can pass through them; a positioning plate with a plurality of holes, through which the two coaxial puncture needles pass to achieve preliminary positioning of the target nodule; and an anchoring magnet placed on the lung surface to attract the target magnets, the positioning range being confirmed according to the magnetic force, so that the nodule lesion is located by magnetic attraction.
Although the above patent can locate the nodule lesion, traditional auxiliary lung nodule localization techniques, such as CT-guided percutaneous puncture localization, bronchoscopic puncture localization and CT-based virtual 3D localization, are only partly effective: they still suffer from inaccurate localization and frequent complications. For example, these methods may cause hemothorax, pneumothorax and other complications, which restricts their effectiveness in practical application.
Disclosure of Invention
The invention aims to provide an intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images, in which a lung nodule localization unit identifies and analyzes the B-mode ultrasound image and determines the accurate position of a lung nodule and a safe resection margin, while an AI display unit provides navigation and surgical plan proposals for the doctor, thereby helping to improve the surgical outcome and reduce the occurrence of complications, so as to solve the problems described in the background above.
In order to achieve the above purpose, the present invention provides the following technical solutions:
an intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images, comprising:
the endoscope unit is used for integrating the B-mode ultrasound probe into the endoscope system, acquiring intraoperative B-mode ultrasound images of the lung in real time from the B-mode ultrasound equipment, and preprocessing the acquired images to obtain dynamic image data of the patient's lung structure;
A lung nodule localization unit for:
determining respiratory motion characteristics of a patient based on dynamic image data of a lung structure of the patient, performing respiratory motion compensation on the B-ultrasonic image based on the respiratory motion characteristics to obtain a compensated lung structure, and detecting and identifying lung nodules in the B-ultrasonic image;
constructing a three-dimensional model of the patient's lung based on the identified lung nodule information, determining the position and size of the lung nodule and its relation to surrounding tissues through the three-dimensional lung model, and calculating a safe resection margin based on the position and size of the lung nodule;
AI display unit for:
Displaying the patient's operation data, B-mode ultrasound images and analysis results in real time, generating a surgical plan according to the position and size of the lung nodule and the patient's lung structure, and determining the nodule position and the resection range;
Meanwhile, the operation progress is monitored in real time during surgery, and real-time navigation instructions are provided to the doctor based on the current position of the resection device.
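For orientation, the sketch below shows one plausible way the three claimed units could be wired together as a processing loop. All class and method names are hypothetical illustrations; the patent specifies responsibilities, not an API.

```python
class IntraopNoduleLocalizationSystem:
    """Hypothetical wiring of the three units named above.

    Every name and signature here is an illustrative assumption; the
    patent describes responsibilities, not a concrete interface.
    """

    def __init__(self, endoscope_unit, localization_unit, ai_display_unit):
        self.endoscope = endoscope_unit      # acquires and preprocesses B-mode frames
        self.localizer = localization_unit   # motion compensation, nodule detection, 3D model
        self.display = ai_display_unit       # plan generation and real-time navigation

    def step(self):
        # One pass of the intraoperative loop described above.
        frames = self.endoscope.acquire_dynamic_images()
        compensated = self.localizer.compensate_respiration(frames)
        nodules = self.localizer.detect_nodules(compensated)
        model = self.localizer.build_3d_model(nodules)
        plan = self.display.generate_plan(model)
        self.display.navigate(plan, self.localizer.tool_position())
```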
Further, the endoscope unit includes:
the B-ultrasonic imaging module is used for receiving and processing the sound wave signals from the B-ultrasonic probe in real time, and converting the sound wave signals to generate a visual B-ultrasonic image;
The image processing module is used for carrying out smoothing and image enhancement processing on the original B ultrasonic image acquired from the B ultrasonic imaging module and carrying out image segmentation operation on the processed B ultrasonic image;
and the lung contour positioning module is used for extracting the characteristics of the segmented B ultrasonic image, identifying and marking the boundary of the lung in the image based on the extracted lung characteristics to obtain the lung contour, and acquiring the dynamic image data of the lung by monitoring the change of the lung contour in real time.
Further, the image processing module includes:
the first pixel block extraction module is used for extracting each pixel block of the original B ultrasonic image to be used as a main pixel block;
a second pixel block extracting module for extracting a plurality of pixel blocks in contact with the main pixel block as auxiliary pixel blocks;
The pixel block unit acquisition module is used for taking each main pixel block and the corresponding auxiliary pixel block as a pixel block unit;
The first gray value extraction module is used for extracting the gray value of each main pixel block of the original B ultrasonic image;
The second gray value extraction module is used for extracting gray values of a plurality of auxiliary pixel blocks which correspond to each main pixel block and are in contact with each other;
The gray reference value acquisition module is used for obtaining a gray reference value for each pixel block unit from the gray value of each main pixel block of the original B-mode ultrasound image and the gray values of the auxiliary pixel blocks corresponding to each main pixel block, wherein the gray reference value R_c is computed from n, the number of auxiliary pixel blocks, R_i, the gray value of the i-th auxiliary pixel block, R_z, the gray value of the main pixel block, and R_01 and R_02, a first and a second compensation adjustment coefficient, respectively; the compensation adjustment coefficients are in turn derived from R_p, the average gray value of the auxiliary pixel blocks, and R_max, the maximum gray value among the auxiliary pixel blocks (a hedged sketch of one plausible form follows this list);
and the gray value adjusting module is used for adjusting the gray value of the main pixel block by utilizing the gray reference value.
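The formula for R_c is not reproduced in this text, only its variables, so the sketch below is a plausible reading rather than the patent's actual equation: it blends the auxiliary blocks' mean with the main block's value using coefficients built from R_p and R_max, one form consistent with the definitions above.

```python
import numpy as np

def gray_reference_value(r_z: float, r_aux: np.ndarray) -> float:
    """Plausible form of the gray reference value R_c (assumed, not quoted)."""
    n = len(r_aux)
    r_p = r_aux.mean()                     # R_p: average gray value of auxiliary blocks
    r_max = r_aux.max()                    # R_max: maximum gray value among them
    r01 = r_p / r_max if r_max else 1.0    # assumed first compensation coefficient
    r02 = 1.0 - r01                        # assumed second compensation coefficient
    # Assumed form: weighted blend of the auxiliary mean and the main block.
    return r01 * (r_aux.sum() / n) + r02 * r_z
```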
Further, the gray value adjusting module includes:
the gray reference value calling module is used for calling the gray reference value;
A first gray difference value obtaining module, configured to obtain a gray difference value between the gray reference value and the main pixel block by using the gray reference value and the gray value of the main pixel block, as a first gray difference value;
A second gray level difference value obtaining module, configured to obtain a gray level difference value between the gray level reference value and each auxiliary pixel block by using the gray level reference value and the gray level value of each auxiliary pixel block, as a second gray level difference value;
the target gray value acquisition module is used for obtaining the target gray value of the main pixel block from the first gray difference value and the second gray difference values, wherein the target gray value R_m is computed from R_si, the second gray difference value corresponding to the i-th auxiliary pixel block, and R_sz, the first gray difference value corresponding to the main pixel block (a hedged sketch follows this list);
and the adjustment execution module is used for adjusting the gray value of the main pixel block according to the target gray value of the main pixel block.
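The target gray value equation is likewise given only through its variables; the following is a hedged sketch of one consistent form, in which the main block is pulled toward the gray reference by its own difference R_sz and corrected by the mean of the auxiliary differences R_si.

```python
import numpy as np

def target_gray_value(r_c: float, r_z: float, r_aux: np.ndarray) -> float:
    """Plausible form of the target gray value R_m (assumed, not quoted)."""
    r_sz = r_c - r_z       # first gray difference: reference vs. main block
    r_si = r_c - r_aux     # second gray differences: reference vs. each auxiliary block
    # Assumed form: move the main block toward the reference, tempered by
    # how far the auxiliary blocks deviate from that same reference.
    return r_z + r_sz - r_si.mean()
```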
Further, the image processing module further includes:
The gray value difference acquisition module is used for extracting, after the gray value adjustment of all pixel blocks of the original B-mode ultrasound image has been completed, the gray value difference between each main pixel block and each of its auxiliary pixel blocks;
the gray value difference comparison module is used for comparing the gray value difference between each main pixel block and each auxiliary pixel block after the gray value adjustment is completed with a preset difference threshold;
The gray value data information extraction module is used for extracting gray values of the auxiliary pixel blocks corresponding to the gray value difference exceeding a preset difference threshold;
the gray value compensation coefficient acquisition module is used for obtaining a gray value compensation coefficient for the gray value of an auxiliary pixel block from the gray value data of those auxiliary pixel blocks whose difference exceeds the preset difference threshold, wherein the compensation coefficient R_m is computed from R_fx and R_fh, the gray values of the auxiliary pixel block before and after gray value adjustment, respectively, R_e, the gray value difference between the auxiliary pixel block and the main pixel block, and R_cy, the preset difference threshold;
the gray value compensation adjustment module is used for performing gray value compensation adjustment on the gray value of the auxiliary pixel block using the gray value compensation coefficient, wherein the compensated gray value R_t of the auxiliary pixel block is computed from the compensation coefficient and R_zh, the gray value of the main pixel block after gray value adjustment (a hedged sketch follows this list).
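Both compensation formulas are present only as variable lists, so this sketch is an assumed realisation: the coefficient grows with how far the observed difference R_e exceeds the threshold R_cy, and the compensated value moves the auxiliary block toward the adjusted main block.

```python
def compensate_auxiliary_block(r_fx: float, r_fh: float, r_e: float,
                               r_cy: float, r_zh: float) -> float:
    """Plausible form of the compensated gray value R_t (assumed, not quoted)."""
    if r_e == 0 or r_fx == 0:
        return r_fh  # nothing to compensate, and avoid division by zero
    # Assumed compensation coefficient R_m: excess of the difference over the
    # threshold, scaled by how much the block moved in the first adjustment.
    r_m = (r_e - r_cy) / r_e * (r_fh / r_fx)
    # Assumed compensated value: blend toward the adjusted main block R_zh.
    return r_fh + r_m * (r_zh - r_fh)
```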
Further, the lung contour positioning module specifically comprises:
determining the appearance characteristics of the patient's lung based on the B-mode ultrasound image, and calculating the gradient strength and gradient direction of the pixels in the image based on those characteristics, so as to determine discrete edge points along the edge of the patient's lung (a sketch of this step follows this list);
identifying each end point of the patient lung contour line based on the discrete edge points, wherein each end point comprises a starting point, an end point and a turning point of the contour;
Searching adjacent pixel points of each endpoint, and continuing searching along the direction of the high gradient until returning to the starting point, so as to form a complete lung contour line;
Determining the distance between the endpoints based on each endpoint of the contour line, extracting the characteristic parameters of the contour line, and performing smoothing treatment on the extracted characteristic parameters of the contour line to obtain a complete and smooth lung contour line;
and comparing the lung contour lines at different time points, and analyzing the dynamic function change of the lung of the patient based on the comparison result to obtain dynamic image data of the lung structure of the patient.
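The module above names gradient strength, gradient direction and discrete edge points without fixing an operator, so the sketch below stands in with Sobel filters and a percentile threshold; both choices are assumptions, not the patent's specification.

```python
import numpy as np
from scipy import ndimage

def lung_edge_points(bmode: np.ndarray, percentile: float = 95.0):
    """Gradient step of the lung contour positioning module (assumed operators)."""
    img = bmode.astype(float)
    gx = ndimage.sobel(img, axis=1)     # horizontal gradient component
    gy = ndimage.sobel(img, axis=0)     # vertical gradient component
    strength = np.hypot(gx, gy)         # gradient strength per pixel
    direction = np.arctan2(gy, gx)      # gradient direction per pixel
    # Keep only the strongest responses as discrete edge points.
    edges = strength > np.percentile(strength, percentile)
    return edges, strength, direction
```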
Further, the lung nodule positioning unit comprises:
The respiratory motion compensation module is used for determining respiratory motion characteristics of a patient based on dynamic image data of a lung structure of the patient, determining a respiratory mode of the patient based on the respiratory motion characteristics of the patient, and correcting and compensating the B ultrasonic image based on the respiratory mode of the patient;
The lung nodule recognition module is used for recognizing and extracting nodules in the image by adopting an image processing and analyzing algorithm, analyzing extracted features of pixel values, textures and shapes in the image and marking the positions and the ranges of the lung nodules;
the lung nodule analysis module is used for converting continuous two-dimensional B ultrasonic images into a three-dimensional model based on a three-dimensional reconstruction technology, and the three-dimensional model comprises the position, the size and the relation with surrounding tissues of a lung nodule;
And the lung nodule positioning module is used for calculating the coordinates and size of the lung nodule based on the three-dimensional model and the nodule analysis results, and for evaluating, from the nodule's relation to the surrounding tissues, which nearby structures are important and their importance grades.
Further, the lung nodule recognition module specifically comprises:
determining a threshold for the B-mode ultrasound image from its gray histogram, selecting the peak position of the histogram as the preset threshold, dividing the image into foreground and background based on this threshold according to the gray features of the image, and preliminarily detecting possible nodule regions (a sketch of this step follows this list);
Extracting shape features of the nodules in possible nodule areas, analyzing texture features of the nodules, measuring the size of the nodules, and comparing with a preset threshold;
Classifying and integrating the extracted features to generate a lung nodule set, removing overlapping nodules from the set and merging adjacent ones, thereby separating true lung nodules from false-positive results;
information on the position, size and shape of each detected true lung nodule is output in graphic and text form.
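A minimal sketch of the threshold step follows, using the gray-histogram peak as the preset threshold as described above; the bin count, the assumption that candidate nodules are brighter than the background, and the morphological cleanup parameters are all illustrative choices.

```python
import numpy as np
from scipy import ndimage

def candidate_nodule_regions(bmode: np.ndarray, min_pixels: int = 20):
    """Histogram-peak thresholding of the nodule recognition module (assumed details)."""
    hist, bin_edges = np.histogram(bmode, bins=256)
    peak = bin_edges[np.argmax(hist)]          # histogram-peak preset threshold
    foreground = bmode > peak                  # assumed polarity: nodules brighter
    # Morphological opening removes small noise while keeping nodule shape.
    cleaned = ndimage.binary_opening(foreground, structure=np.ones((3, 3)))
    labels, n = ndimage.label(cleaned)         # split mask into candidate regions
    sizes = ndimage.sum(cleaned, labels, index=np.arange(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_pixels}
    return labels, keep
```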
Further, the lung nodule positioning module specifically comprises:
Determining a coordinate system based on the three-dimensional model constructed by the lung nodule analysis module, and calculating coordinates of the recognition result of the lung nodule recognition module in the three-dimensional image based on the determined coordinate system;
Determining the major axis, minor axis and shortest axis of each lung nodule in the identification result, and constructing a local coordinate system for each nodule, with the nodule center as origin and the major, minor and shortest axes corresponding to the x, y and z axes respectively;
The size of each nodule is determined in its local coordinate system: the number of pixels occupied by the lung nodule is counted, and the nodule volume is obtained by multiplying this count by the spatial volume represented by each pixel (a hedged sketch follows this list).
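A hedged sketch of this step is given below; recovering the major, minor and shortest axes by principal component analysis of the voxel coordinates is an assumed concrete method, and the mask is taken to be a 3D binary segmentation of one nodule.

```python
import numpy as np

def nodule_axes_and_volume(mask: np.ndarray, spacing_mm: tuple):
    """Local coordinate frame and volume of one nodule (PCA is an assumption)."""
    coords = np.argwhere(mask).astype(float) * np.asarray(spacing_mm)
    center = coords.mean(axis=0)               # nodule center = local origin
    cov = np.cov((coords - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    axes = eigvecs[:, ::-1]                    # columns: major (x), minor (y), shortest (z)
    # Volume = voxel count times the spatial volume each voxel represents.
    volume_mm3 = float(mask.sum()) * float(np.prod(spacing_mm))
    return center, axes, volume_mm3
```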
Further, the AI display unit includes:
the scheme acquisition module is used for acquiring the patient's actively uploaded historical medical images and physiological data, and generating the patient's surgical plan based on these data and on the important structures around the lung nodule and their importance grades;
the real-time positioning module is used for acquiring the coordinate data of the lung nodule positioning module, monitoring the position and the state of the operation equipment in real time and transmitting the acquired positioning data to the real-time navigation module in real time;
the real-time navigation module is used for receiving the surgical equipment positioning data from the real-time positioning module, calculating the coordinate difference between the equipment position and the lung nodule, and generating real-time navigation instructions (a sketch of this computation follows this list);
The display module is used for displaying the operation data of the patient in real time and a real-time image acquired through B ultrasonic;
the intraoperative data recording and analyzing module is used for recording detailed data in the surgical process in real time, including information of the position, the size and the shape of a lung nodule and surgical operation data of a doctor;
And the remote collaboration module is used for transmitting the B-ultrasonic image and the nodule positioning data in operation to the terminal where the remote doctor is located based on the Internet of things in real time, so that the real-time interaction of the data of the remote doctor and the on-site doctor is realized.
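The navigation step above reduces to a coordinate difference between tool and nodule; the sketch below assumes millimetre coordinates in a shared frame and an arbitrary instruction format, none of which the patent fixes.

```python
import numpy as np

def navigation_instruction(tool_xyz, nodule_xyz, tol_mm: float = 2.0) -> str:
    """Real-time navigation difference computation (frame and format assumed)."""
    delta = np.asarray(nodule_xyz, dtype=float) - np.asarray(tool_xyz, dtype=float)
    if np.linalg.norm(delta) <= tol_mm:
        return "On target: hold position."
    dx, dy, dz = delta
    return f"Move {dx:+.1f} mm (x), {dy:+.1f} mm (y), {dz:+.1f} mm (z) toward the nodule."
```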
Compared with the prior art, the invention has the beneficial effects that:
The system acquires an intraoperative B-mode ultrasound image of the lung through the endoscope unit and processes the image. The lung nodule localization unit identifies and analyzes the B-mode ultrasound image, monitors the patient's respiratory motion, and dynamically compensates the image, thereby eliminating the influence of respiratory motion on lung nodule localization, improving localization accuracy and stability, and determining the accurate position of the lung nodule and a safe resection margin. The doctor can thus perform more accurate nodule localization and analysis in a three-dimensional model and obtain the patient's lung structure information. The AI display unit provides navigation and surgical plan proposals for the doctor, offering real-time, accurate lung nodule localization and surgical navigation, which helps improve the surgical outcome and reduce the occurrence of complications.
Drawings
FIG. 1 is a block diagram of an intraoperative pulmonary nodule localization system based on endoscopic B-ultrasound images of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to solve the technical problems of inaccurate positioning and numerous complications in the prior art, which restrict its effectiveness in practical application, and referring to fig. 1, the present embodiment provides the following technical solutions:
an intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images, comprising:
the endoscope unit is used for integrating the B-mode ultrasound probe into the endoscope system, acquiring intraoperative B-mode ultrasound images of the lung in real time from the B-mode ultrasound equipment, and preprocessing the acquired images to obtain dynamic image data of the patient's lung structure;
A lung nodule localization unit for:
determining respiratory motion characteristics of a patient based on dynamic image data of a lung structure of the patient, performing respiratory motion compensation on the B-ultrasonic image based on the respiratory motion characteristics to obtain a compensated lung structure, and detecting and identifying lung nodules in the B-ultrasonic image;
constructing a three-dimensional model of the patient's lung based on the identified lung nodule information, determining the position and size of the lung nodule and its relation to surrounding tissues through the three-dimensional lung model, and calculating a safe resection margin based on the position and size of the lung nodule;
AI display unit for:
Displaying the patient's operation data, B-mode ultrasound images and analysis results in real time, generating a surgical plan according to the position and size of the lung nodule and the patient's lung structure, and determining the nodule position and the resection range;
meanwhile, the operation progress is monitored in real time during surgery, and real-time navigation instructions are provided to the doctor based on the current position of the resection device, so that the resection device operated by the doctor can accurately reach the position of the lung nodule.
In this embodiment, the intraoperative B-mode ultrasound is non-radiative and is integrated into the endoscope system, so the position of a lung nodule can be located in real time and shown on the display. The intraoperative B-mode ultrasound image of the lung is acquired through the endoscope unit and processed; the lung nodule localization unit identifies and analyzes the image, monitors the patient's respiratory motion, and dynamically compensates the image, thereby eliminating the influence of respiratory motion on nodule localization and improving localization accuracy and stability; the accurate position of the lung nodule and a safe resection margin are determined, allowing the doctor to perform more precise nodule localization and analysis in a three-dimensional model and obtain the patient's lung structure information; and the AI display unit provides navigation and surgical plan proposals for the doctor. Real-time, accurate lung nodule localization and surgical navigation improve the accuracy and efficiency of the operation, improve the surgical outcome, and reduce the occurrence of complications.
In this embodiment, the endoscope unit includes:
the B-ultrasonic imaging module is used for receiving and processing the sound wave signals from the B-ultrasonic probe in real time, and converting the sound wave signals to generate a visual B-ultrasonic image;
The image processing module is used for smoothing and enhancing the original B-mode ultrasound image acquired from the B-mode imaging module and performing image segmentation on the processed image, separating different areas in the image such as lung tissue, gas and liquid; in view of the characteristics of B-mode ultrasound images, a segmentation algorithm based on gray value, texture or shape features is adopted;
The lung contour positioning module is used for extracting features from the segmented B-mode ultrasound image, including lung edge features, texture features and shape features, identifying and marking the lung boundary in the image based on the extracted features to obtain the lung contour, and acquiring dynamic image data of the lung by monitoring changes of the lung contour in real time;
specifically, the image processing module includes:
the first pixel block extraction module is used for extracting each pixel block of the original B ultrasonic image to be used as a main pixel block;
a second pixel block extracting module for extracting a plurality of pixel blocks in contact with the main pixel block as auxiliary pixel blocks;
The pixel block unit acquisition module is used for taking each main pixel block and the corresponding auxiliary pixel block as a pixel block unit;
The first gray value extraction module is used for extracting the gray value of each main pixel block of the original B ultrasonic image;
The second gray value extraction module is used for extracting gray values of a plurality of auxiliary pixel blocks which correspond to each main pixel block and are in contact with each other;
The gray reference value acquisition module is used for obtaining a gray reference value for each pixel block unit from the gray value of each main pixel block of the original B-mode ultrasound image and the gray values of the auxiliary pixel blocks corresponding to each main pixel block, wherein the gray reference value R_c is computed from n, the number of auxiliary pixel blocks, R_i, the gray value of the i-th auxiliary pixel block, R_z, the gray value of the main pixel block, and R_01 and R_02, a first and a second compensation adjustment coefficient, respectively; the compensation adjustment coefficients are in turn derived from R_p, the average gray value of the auxiliary pixel blocks, and R_max, the maximum gray value among the auxiliary pixel blocks;
and the gray value adjusting module is used for adjusting the gray value of the main pixel block by utilizing the gray reference value.
The technical effect of the technical scheme is that the image processing module can process the local area of the image more finely by extracting the main pixel block and the surrounding auxiliary pixel blocks and combining the main pixel block and the surrounding auxiliary pixel blocks into a pixel block unit. This approach helps to enhance details in the image, especially when there is a gray scale difference between the main pixel block and the auxiliary pixel block.
The gray value adjustment module may perform gray value adjustment on the main pixel block using the gray reference values calculated by the gray values of the main pixel block and the auxiliary pixel block. Such adjustment can make the image more uniform, reducing gray value anomalies due to noise or signal interference.
By introducing a first compensation adjustment coefficient (R_01) and a second compensation adjustment coefficient (R_02), the technical scheme allows a user or the system to adjust how the gray reference value is calculated according to specific requirements. This flexibility enables the image processing module to accommodate different B-mode ultrasound images and processing requirements, thereby optimizing image quality.
The introduction of the average gray value (R_p) and the maximum gray value (R_max) of the auxiliary pixel blocks helps to reduce the impact of individual outlier pixels (e.g. noise points) on the gray reference value calculation. This makes the image processing module more robust and able to provide accurate gray reference values even in the presence of noise.
The contrast of the B ultrasonic image after gray value adjustment may be enhanced. This is because the gray value adjustment module can perform targeted adjustment on the main pixel block according to the gray reference value, so as to highlight the key information in the image.
In summary, according to the technical scheme, the quality of the B-mode ultrasonic image can be effectively improved by finely processing the local area of the B-mode ultrasonic image, utilizing the gray reference value and the compensation adjustment coefficient to adjust the gray value and the like, including enhancing the image details, reducing the noise influence, improving the image contrast and the like. This is of great importance for medical diagnosis and disease analysis.
Specifically, the gray value adjustment module includes:
the gray reference value calling module is used for calling the gray reference value;
A first gray difference value obtaining module, configured to obtain a gray difference value between the gray reference value and the main pixel block by using the gray reference value and the gray value of the main pixel block, as a first gray difference value;
A second gray level difference value obtaining module, configured to obtain a gray level difference value between the gray level reference value and each auxiliary pixel block by using the gray level reference value and the gray level value of each auxiliary pixel block, as a second gray level difference value;
the target gray value acquisition module is used for obtaining the target gray value of the main pixel block from the first gray difference value and the second gray difference values, wherein the target gray value R_m is computed from R_si, the second gray difference value corresponding to the i-th auxiliary pixel block, and R_sz, the first gray difference value corresponding to the main pixel block;
and the adjustment execution module is used for adjusting the gray value of the main pixel block according to the target gray value of the main pixel block.
The technical effect of the technical scheme is that the module can accurately identify the gray level difference between each main pixel block and the surrounding environment of the main pixel block in the image by calculating the gray level difference between the gray level reference value and the main pixel block and the auxiliary pixel block. Then, the target gray value of the main pixel block is calculated by using the gray difference values, so that the accurate adjustment of the gray value of the main pixel block is realized. Such adjustment helps to improve the contrast of the image, reduce local brightness non-uniformity, and thereby improve the overall quality of the image.
Since the gray value adjustment is based on the gray difference of the main pixel block and its auxiliary pixel block, the module can enhance the details in the image. By adjusting the gray value of the main pixel block, the main pixel block is more coordinated with the surrounding environment, and key information in the image such as focus, blood vessel and the like can be highlighted, so that the diagnosis accuracy of doctors on the image is improved.
In calculating the gray difference value, the module takes into account the gray value of the secondary pixel block, which helps to reduce the effect of noise on the adjustment of the gray value of the primary pixel block. Since the secondary pixel blocks typically contain similar image information as the primary pixel blocks, their gray values can provide useful reference information, helping the module to more accurately identify noise points and reduce their impact.
The whole gray value adjustment process is automatic, and manual intervention is not needed. The efficiency and the accuracy of image processing are greatly improved, and the influence of human factors on the image processing result is reduced.
In summary, the gray value adjusting module calculates the gray difference values and adjusts the gray value of the main pixel block based on the difference values, so that the quality of the B-mode ultrasonic image can be remarkably improved, the image details can be enhanced, the noise influence can be reduced, and the flexibility and the automation degree are high. The technical effects have important application values in the aspects of medical diagnosis, image analysis, subsequent processing and the like.
Specifically, the image processing module further includes:
The gray value difference acquisition module is used for extracting, after the gray value adjustment of all pixel blocks of the original B-mode ultrasound image has been completed, the gray value difference between each main pixel block and each of its auxiliary pixel blocks;
the gray value difference comparison module is used for comparing the gray value difference between each main pixel block and each auxiliary pixel block after the gray value adjustment is completed with a preset difference threshold;
The gray value data information extraction module is used for extracting gray values of the auxiliary pixel blocks corresponding to the gray value difference exceeding a preset difference threshold;
the gray value compensation coefficient acquisition module is used for obtaining a gray value compensation coefficient for the gray value of an auxiliary pixel block from the gray value data of those auxiliary pixel blocks whose difference exceeds the preset difference threshold, wherein the compensation coefficient R_m is computed from R_fx and R_fh, the gray values of the auxiliary pixel block before and after gray value adjustment, respectively, R_e, the gray value difference between the auxiliary pixel block and the main pixel block, and R_cy, the preset difference threshold;
the gray value compensation adjustment module is used for performing gray value compensation adjustment on the gray value of the auxiliary pixel block using the gray value compensation coefficient, wherein the compensated gray value R_t of the auxiliary pixel block is computed from the compensation coefficient and R_zh, the gray value of the main pixel block after gray value adjustment.
The technical effect of the technical scheme is that the system can identify the significant difference of the gray values between the main pixel block and the auxiliary pixel block through the gray value difference acquisition module and the gray value difference comparison module. These differences may be due to noise during image acquisition, device errors, or characteristics of the image itself. Through subsequent processing, the system can reduce these differences, thereby enhancing the consistency of gray values in the image. The gray value data information extraction module and the gray value compensation coefficient acquisition module work together to identify auxiliary pixel blocks to be adjusted and calculate proper gray value compensation coefficients for the auxiliary pixel blocks. The purpose of this step is to improve the overall quality of the image by adjusting the gray value of the secondary pixel block closer to the gray value of the primary pixel block.
The entire process flow is automated and no manual intervention is required. This makes the image processing process more efficient, faster, and reduces errors introduced by human factors. By setting different difference thresholds, a user can adjust the processing degree of the gray value difference according to actual requirements. This provides flexibility and customizable properties enabling the solution to adapt to different scenarios and application requirements. The gray value compensation adjustment module adjusts the gray value of the auxiliary pixel block by using the calculated gray value compensation coefficient, so as to optimize the visual effect of the image. Such adjustment may reduce non-uniformities and artifacts in the image, making the image clearer and easier to interpret.
In summary, the technical scheme realizes the adjustment of the gray value difference between the main pixel block and the auxiliary pixel block in the B-ultrasonic image through a series of modularized processing flows, thereby enhancing the consistency of the gray value of the image, improving the image quality, optimizing the visual effect and having the characteristics of automation, flexibility and customization.
In this embodiment, the lung contour positioning module specifically includes:
determining the appearance characteristics of the lung of the patient based on the B ultrasonic image, and calculating the gradient strength and the gradient direction of pixel points in the B ultrasonic image based on the appearance characteristics so as to determine the discrete edge points of the edge of the lung of the patient;
identifying each end point of the patient lung contour line based on the discrete edge points, wherein each end point comprises a starting point, an end point and a turning point of the contour;
Searching adjacent pixel points of each endpoint, and continuing searching along the direction of the high gradient until returning to the starting point, so as to form a complete lung contour line;
Determining the distance between the endpoints based on each endpoint of the contour line, extracting the characteristic parameters of the contour line, and performing smoothing treatment on the extracted characteristic parameters of the contour line to obtain a complete and smooth lung contour line;
and comparing the lung contour lines at different time points, and analyzing the dynamic function change of the lung of the patient based on the comparison result to obtain dynamic image data of the lung structure of the patient.
In this embodiment, the acoustic signal from the B-mode ultrasound probe is received in real time and converted into a visualized B-mode ultrasound image, and the original image is smoothed and enhanced to improve image quality. Different areas in the image are separated using a segmentation algorithm based on gray value, texture or shape features, so the doctor can see the structure and boundary of the lung more clearly. By obtaining complete, smooth lung contour lines and comparing them at different time points, the dynamic functional changes of the lung can be monitored in real time, helping the doctor avoid damaging surrounding important structures during the operation and thereby reducing complications.
In this embodiment, the lung nodule positioning unit comprises:
The respiratory motion compensation module is used for determining the patient's respiratory motion characteristics based on the dynamic image data of the lung structure, determining the patient's respiratory mode, such as respiratory frequency and respiratory depth, from those characteristics, and correcting and compensating the B-mode ultrasound image based on the respiratory mode so as to eliminate the influence of respiratory motion on image quality and ensure the accuracy and reliability of subsequent image processing (a hedged sketch of one possible realisation follows this list);
The lung nodule recognition module is used for recognizing and extracting nodules in an image by adopting an image processing and analyzing algorithm, analyzing extracted features of pixel values, textures and shapes in the image, and marking the positions and the ranges of the lung nodules, and specifically comprises the following steps:
Determining a threshold value of the B ultrasonic image by using a gray histogram of the image, selecting the peak position of the gray histogram as a preset threshold value, dividing the image into a foreground and a background based on the preset threshold value according to the gray characteristic of the image, wherein the foreground is a possible nodule, the background is normal lung tissue, preliminarily detecting a possible nodule area, removing small noise points by morphological operation, and simultaneously keeping the shape characteristic of the nodule;
Extracting shape characteristics of the nodules, such as circularity, compactness, edge smoothness and the like, analyzing texture characteristics of the nodules, such as gray level co-occurrence matrix, local binary pattern and the like, measuring the sizes of the nodules, such as diameter, volume and the like, and comparing with a preset threshold value;
Classifying and integrating the extracted features to generate a lung nodule set, removing overlapped nodules from the lung nodule set, merging adjacent nodules, and processing to obtain a true lung nodule and a false positive result;
outputting information of the position, the size and the shape of the detected true lung nodule in the form of graphics and texts;
the lung nodule analysis module is used for converting continuous two-dimensional B ultrasonic images into a three-dimensional model based on a three-dimensional reconstruction technology, and the three-dimensional model comprises the position, the size and the relation with surrounding tissues of a lung nodule;
the lung nodule positioning module is used for calculating the coordinates and size of the lung nodule based on the three-dimensional model and the nodule analysis results, and for evaluating the important structures among the tissues surrounding the nodule, such as blood vessels and bronchi, and their importance grades, specifically comprising:
Determining a coordinate system based on the three-dimensional model constructed by the lung nodule analysis module, and calculating coordinates of the recognition result of the lung nodule recognition module in the three-dimensional image based on the determined coordinate system;
Determining the major axis, minor axis and shortest axis of each lung nodule in the identification result, and constructing a local coordinate system for each nodule, with the nodule center as origin and the major, minor and shortest axes corresponding to the x, y and z axes respectively;
The size of each nodule is determined in its local coordinate system: the number of pixels occupied by the lung nodule is counted, and the nodule volume is obtained by multiplying this count by the spatial volume represented by each pixel.
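The respiratory motion compensation module above derives breathing frequency and depth from the dynamic lung contours; estimating them from the contour-area signal with an FFT, as sketched here, is one assumed realisation.

```python
import numpy as np

def breathing_frequency_and_depth(contour_areas: np.ndarray, fps: float):
    """Breathing frequency (Hz) and depth from a lung contour-area series (assumed method)."""
    signal = contour_areas - contour_areas.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    dominant = freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
    depth = (contour_areas.max() - contour_areas.min()) / 2.0
    return dominant, depth
```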
In this embodiment, respiratory motion compensation and advanced image processing algorithms allow lung nodules to be identified and located more accurately, reducing the likelihood of misjudgment and missed diagnosis. Accurately assessing the relation between a lung nodule and the surrounding tissues lets the doctor take more care to avoid damaging important structures during the operation, reducing the risk of complications. By constructing the three-dimensional model and local coordinate systems, the lung nodule positioning module can compute nodule coordinates and size more precisely, improving localization accuracy. Calculating the nodule volume and pixel count provides quantitative support, so the doctor can better evaluate nodule growth rate and treatment effect, determine the optimal surgical path and strategy, and formulate a more precise surgical plan.
In the present embodiment, the AI display unit includes:
The scheme acquisition module is used for acquiring the patient's actively uploaded historical medical images and physiological data, and generating the patient's surgical plan based on these data and on the important structures around the lung nodule and their importance grades, providing the doctor with precise surgical guidance based on the patient's specific condition;
the real-time positioning module is used for acquiring the coordinate data of the lung nodule positioning module, monitoring the position and the state of the operation equipment in real time and transmitting the acquired positioning data to the real-time navigation module in real time;
the real-time navigation module is used for receiving the positioning data of the surgical equipment from the real-time positioning module, calculating the coordinate data difference value between the positioning data of the surgical equipment and the lung nodule, generating a real-time navigation instruction, and ensuring the accuracy and the safety in the surgical process;
The display module is used for displaying the operation data of the patient in real time, including various physiological parameters and operation progress information, and real-time images acquired through B ultrasonic, and providing a real-time view of the internal condition of the patient for doctors;
The intraoperative data recording and analyzing module is used for recording detailed data in the surgical process, including information of the position, the size and the shape of a lung nodule and surgical operation data of a doctor, and can provide a basis for postoperative evaluation and improvement of a surgical method for the doctor through analysis of the data;
the remote collaboration module is used for transmitting the B-ultrasonic image and the nodule positioning data in the operation to the terminal where the remote doctor is located based on the Internet of things in real time, so that the real-time interaction of the data of the remote doctor and the on-site doctor is realized, the remote doctor can provide remote guidance and suggestion for the on-site doctor, and the safety and effect of the operation are improved.
In this embodiment, a personalized surgical plan is generated by integrating the patient's historical data with the lung nodule characteristics, improving the accuracy and outcome of the operation. The operation progress is monitored in real time and navigation instructions are provided, so the doctor can reach the lung nodule more quickly, shortening the operation time and improving efficiency. Intraoperative data recording and analysis give the doctor a basis for postoperative evaluation and improvement of the surgical method, raising the doctor's professional level. In particular for complex cases, the remote collaboration module enables real-time data interaction and remote guidance between the on-site doctor and a remote doctor, improving the safety and effect of the operation. The system thus provides all-round support and assurance for the surgical process while also serving as an important tool for postoperative evaluation and improvement.
The foregoing is only a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto. Any equivalent substitution or modification of the technical solution and its inventive concept made by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (7)

1. An intraoperative lung nodule positioning system based on endoscopic B-mode ultrasound images, characterized by comprising:
the cavity mirror unit is used for integrating the B-ultrasonic probe on the cavity mirror system, acquiring an intra-operative lung B-ultrasonic image based on the B-ultrasonic equipment in real time, and preprocessing the acquired B-ultrasonic image to obtain dynamic image data of a lung structure of a patient;
A lung nodule localization unit for:
determining respiratory motion characteristics of a patient based on dynamic image data of a lung structure of the patient, performing respiratory motion compensation on the B-ultrasonic image based on the respiratory motion characteristics to obtain a compensated lung structure, and detecting and identifying lung nodules in the B-ultrasonic image;
constructing a three-dimensional model of the lung of the patient based on the identified lung nodule information, determining the position, the size and the relation with surrounding tissues of the lung nodule through the three-dimensional model of the lung, and calculating a safe cutting edge based on the position and the size of the lung nodule;
AI display unit for:
Displaying operation data, B ultrasonic images and analysis results of a patient in real time, generating an operation scheme according to the position, the size and the lung structure of the lung of the patient, and determining the position and the excision range of the lung;
meanwhile, the operation progress is monitored in real time in the operation, and a real-time navigation instruction is provided for a doctor based on the current position of the excision equipment;
wherein the endoscope unit comprises:
a B-ultrasound imaging module, used for receiving and processing the acoustic signals from the B-ultrasound probe in real time and converting them to generate a visual B-ultrasound image;
an image processing module, used for performing smoothing and image enhancement on the original B-ultrasound image acquired from the B-ultrasound imaging module and performing image segmentation on the processed B-ultrasound image;
a lung contour positioning module, used for performing feature extraction on the segmented B-ultrasound image, identifying and marking the lung boundary in the image based on the extracted lung features to obtain the lung contour, and acquiring dynamic image data of the lung by monitoring changes in the lung contour in real time;
wherein the image processing module comprises:
a first pixel block extraction module, used for extracting each pixel block of the original B-ultrasound image as a main pixel block;
a second pixel block extraction module, used for extracting the pixel blocks in contact with each main pixel block as its auxiliary pixel blocks;
a pixel block unit acquisition module, used for taking each main pixel block and its corresponding auxiliary pixel blocks as one pixel block unit;
a first gray value extraction module, used for extracting the gray value of each main pixel block of the original B-ultrasound image;
a second gray value extraction module, used for extracting the gray values of the auxiliary pixel blocks in contact with each main pixel block;
a gray reference value acquisition module, used for obtaining the gray reference value corresponding to each pixel block unit from the gray value of each main pixel block of the original B-ultrasound image and the gray values of its corresponding auxiliary pixel blocks, wherein the gray reference value is obtained by the following formula:
wherein R_c represents the gray reference value, n represents the number of auxiliary pixel blocks, R_i represents the gray value of the i-th auxiliary pixel block, R_z represents the gray value of the main pixel block, and R_01 and R_02 represent the first and second compensation adjustment coefficients, respectively;
wherein R_p represents the average gray value of the auxiliary pixel blocks and R_max represents the maximum gray value among the auxiliary pixel blocks;
a gray value adjustment module, used for adjusting the gray value of the main pixel block using the gray reference value;
wherein the gray value adjustment module comprises:
a gray reference value calling module, used for retrieving the gray reference value;
a first gray difference acquisition module, used for obtaining the difference between the gray reference value and the gray value of the main pixel block as the first gray difference;
a second gray difference acquisition module, used for obtaining the difference between the gray reference value and the gray value of each auxiliary pixel block as a second gray difference;
a target gray value acquisition module, used for obtaining the target gray value of the main pixel block from the first gray difference and the second gray differences, wherein the target gray value is obtained by the following formula:
wherein R_m represents the target gray value of the main pixel block, R_si represents the second gray difference corresponding to the i-th auxiliary pixel block, and R_sz represents the first gray difference corresponding to the main pixel block;
an adjustment execution module, used for adjusting the gray value of the main pixel block to its target gray value;
wherein the lung contour positioning module is specifically used for:
determining the appearance characteristics of the patient's lung from the B-ultrasound image, and calculating the gradient strength and gradient direction of the pixel points in the B-ultrasound image based on those characteristics, so as to determine the discrete edge points of the patient's lung edge;
identifying the endpoints of the patient's lung contour line from the discrete edge points, the endpoints including the starting point, end point and turning points of the contour;
searching the neighboring pixel points of each endpoint and continuing the search along the direction of high gradient until returning to the starting point, thereby forming a complete lung contour line;
determining the distances between the endpoints of the contour line, extracting the characteristic parameters of the contour line, and smoothing the extracted characteristic parameters to obtain a complete and smooth lung contour line;
and comparing the lung contour lines at different time points and analyzing the dynamic functional changes of the patient's lung from the comparison result to obtain the dynamic image data of the patient's lung structure.
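The gradient-based edge detection in the contour steps above can be sketched as follows. This is a minimal illustration, not the claimed method itself: the Sobel kernels and the fixed fraction-of-maximum threshold are assumed choices, and the endpoint identification, high-gradient contour walk and smoothing stages are omitted.

```python
import numpy as np

def sobel_gradients(img):
    """Gradient strength and direction via 3x3 Sobel kernels (numpy only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def discrete_edge_points(img, frac=0.25):
    """Keep pixels whose gradient strength exceeds a fraction of the maximum
    as the discrete edge points; frac=0.25 is an assumed heuristic."""
    strength, direction = sobel_gradients(img)
    ys, xs = np.nonzero(strength >= frac * strength.max())
    return list(zip(ys.tolist(), xs.tolist())), direction

# Example on a synthetic bright disc standing in for a lung cross-section
yy, xx = np.mgrid[0:64, 0:64]
phantom = (((yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2) * 200.0
points, _ = discrete_edge_points(phantom)
print(f"{len(points)} candidate edge points on the disc boundary")
```

The returned gradient direction is what the subsequent contour walk would follow from each endpoint until the trace closes on the starting point.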
2. The intraoperative lung nodule localization system based on endoscopic B-ultrasound images of claim 1, wherein the image processing module further comprises:
a gray value difference acquisition module, used for extracting, after the gray value adjustment of all pixel blocks of the original B-ultrasound image is completed, the gray value difference between each main pixel block and each of its auxiliary pixel blocks;
a gray value difference comparison module, used for comparing each such gray value difference with a preset difference threshold;
a gray value data information extraction module, used for extracting the gray values of the auxiliary pixel blocks whose gray value difference exceeds the preset difference threshold;
a gray value compensation coefficient acquisition module, used for obtaining a gray value compensation coefficient for the gray value of each such auxiliary pixel block from its gray value data information, wherein the gray value compensation coefficient is obtained by the following formula:
wherein R_m represents the gray value compensation coefficient, R_fx and R_fh represent the gray values of the auxiliary pixel block before and after gray value adjustment respectively, R_e represents the gray value difference between the auxiliary pixel block and the main pixel block, and R_cy represents the preset difference threshold;
and a gray value compensation adjustment module, used for performing gray value compensation adjustment on the gray value of the auxiliary pixel block using the gray value compensation coefficient, wherein the gray value of the auxiliary pixel block after compensation adjustment is obtained by the following formula:
wherein R_t represents the gray value of the auxiliary pixel block after gray value compensation adjustment and R_zh represents the gray value of the main pixel block after gray value adjustment.
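The closed-form expressions referenced in claims 1 and 2 (for R_c, R_m, the compensation coefficient and R_t) appear as formula images in the original publication and are not reproduced in this text, so they cannot be restated here. The sketch below therefore shows only the data flow of the block-wise adjustment and compensation: the block size, the plain neighbor average standing in for the reference value, the blend factor, and the threshold value are all placeholder assumptions, not the patented formulas.

```python
import numpy as np

BLOCK = 8            # pixel-block edge length (assumed; not stated in the text)
DIFF_THRESHOLD = 25  # stand-in value for the preset difference threshold R_cy

def block_means(img, block=BLOCK):
    """Mean gray value of each non-overlapping block (the 'pixel blocks')."""
    h, w = img.shape
    img = img[: h - h % block, : w - w % block].astype(float)
    return img.reshape(h // block, block, -1, block).mean(axis=(1, 3))

def neighbor_mean(blocks, y, x):
    """Average gray value of the auxiliary (touching) blocks around (y, x)."""
    H, W = blocks.shape
    vals = [blocks[yy, xx]
            for yy in range(max(0, y - 1), min(H, y + 2))
            for xx in range(max(0, x - 1), min(W, x + 2))
            if (yy, xx) != (y, x)]
    return float(np.mean(vals))

def adjust_and_flag(blocks, alpha=0.5):
    """First pass pulls each main block toward a neighborhood reference value;
    second pass flags blocks whose residual difference from their neighborhood
    still exceeds the threshold, mirroring the compensation trigger of claim 2.
    The averaging and the blend factor alpha replace the lost formulas."""
    H, W = blocks.shape
    adjusted = np.empty_like(blocks)
    for y in range(H):
        for x in range(W):
            ref = neighbor_mean(blocks, y, x)          # stand-in for R_c
            adjusted[y, x] = (1 - alpha) * blocks[y, x] + alpha * ref
    flagged = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            residual = abs(adjusted[y, x] - neighbor_mean(adjusted, y, x))
            flagged[y, x] = residual > DIFF_THRESHOLD
    return adjusted, flagged

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
adjusted, flagged = adjust_and_flag(block_means(img))
print(adjusted.shape, int(flagged.sum()), "blocks exceed the difference threshold")
```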
3. The intraoperative lung nodule localization system based on endoscopic B-ultrasound images of claim 2, wherein the lung nodule localization unit comprises:
a respiratory motion compensation module, used for determining the patient's respiratory motion characteristics from the dynamic image data of the lung structure, determining the patient's breathing pattern from those characteristics, and correcting and compensating the B-ultrasound images based on the patient's breathing pattern;
a lung nodule recognition module, used for recognizing and extracting nodules in the image using image processing and analysis algorithms, analyzing the extracted pixel-value, texture and shape features, and marking the positions and extents of the lung nodules;
a lung nodule analysis module, used for converting consecutive two-dimensional B-ultrasound images into a three-dimensional model by three-dimensional reconstruction, the three-dimensional model containing the position and size of the lung nodule and its relation with the surrounding tissues;
and a lung nodule positioning module, used for calculating the coordinates and size of the lung nodule from the three-dimensional model and the lung nodule analysis results, and evaluating the important structures and importance grades of the tissues surrounding the lung nodule based on its relation with those tissues.
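For the respiratory motion compensation module, one common and simple way to estimate the breathing displacement between frames is to cross-correlate their row-intensity profiles; this specific estimator is an assumption made for illustration, since the claim names respiratory motion characteristics without fixing a method.

```python
import numpy as np

def estimate_vertical_shift(ref_frame, frame):
    """Estimate the row-wise shift between two frames by cross-correlating
    their mean row-intensity profiles (a simple proxy for craniocaudal
    respiratory displacement; assumed, not specified by the claim)."""
    ref_profile = ref_frame.mean(axis=1) - ref_frame.mean()
    profile = frame.mean(axis=1) - frame.mean()
    corr = np.correlate(profile, ref_profile, mode="full")
    return int(np.argmax(corr)) - (len(ref_profile) - 1)

def compensate(frame, shift):
    """Shift the frame back by the estimated displacement (circularly,
    via np.roll, for brevity; edge handling is application-specific)."""
    return np.roll(frame, -shift, axis=0)

rng = np.random.default_rng(1)
base = rng.random((128, 128))
moved = np.roll(base, 5, axis=0)        # simulate a 5-row breathing shift
shift = estimate_vertical_shift(base, moved)
print("estimated shift:", shift)        # expected to be close to 5
restored = compensate(moved, shift)
```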
4. The intraoperative lung nodule localization system based on endoscopic B-ultrasound images of claim 3, wherein the lung nodule recognition module is specifically used for:
determining a threshold for the B-ultrasound image from its gray histogram, selecting the peak position of the gray histogram as the preset threshold, dividing the image into foreground and background based on the preset threshold according to the gray features of the image, and preliminarily detecting possible nodule regions;
extracting the shape features of the nodules in the possible nodule regions, analyzing their texture features, measuring the nodule sizes and comparing them with a preset threshold;
classifying and integrating the extracted features to generate a lung nodule set, removing overlapping nodules from the set, merging adjacent nodules, and distinguishing true lung nodules from false-positive results;
and outputting the position, size and shape information of the detected true lung nodules in graphic and text form.
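The histogram-peak thresholding and preliminary region detection of claim 4 can be sketched compactly. Whether candidate regions lie above or below the threshold depends on nodule echogenicity; taking the foreground as the brighter side here is an illustrative assumption, as is the small flood-fill used for connected components.

```python
import numpy as np

def histogram_peak_threshold(img, bins=256):
    """Select the gray-histogram peak as the threshold (as claim 4 states)
    and split the image into foreground and background."""
    hist, edges = np.histogram(img, bins=bins, range=(0, 256))
    peak = int(np.argmax(hist))
    threshold = 0.5 * (edges[peak] + edges[peak + 1])  # center of the peak bin
    return threshold, img > threshold

def candidate_regions(mask):
    """Label 4-connected foreground components as preliminary nodule regions."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        count += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if not (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]):
                continue
            if not mask[cy, cx] or labels[cy, cx]:
                continue
            labels[cy, cx] = count
            stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, count

img = np.random.default_rng(2).integers(0, 256, (64, 64))
t, mask = histogram_peak_threshold(img)
labels, n = candidate_regions(mask)
print(f"threshold = {t:.1f}, {n} preliminary candidate regions")
```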
5. The intraoperative lung nodule localization system based on endoscopic B-ultrasound images of claim 3, wherein the lung nodule positioning module is specifically used for:
determining a coordinate system based on the three-dimensional model constructed by the lung nodule analysis module, and calculating the coordinates of the recognition results of the lung nodule recognition module in the three-dimensional image within the determined coordinate system;
determining the major axis, minor axis and shortest axis of each lung nodule in the recognition results, and constructing a local coordinate system from these axes, with the center of the lung nodule as the origin and the major, minor and shortest axes corresponding to the x-, y- and z-axes respectively;
and determining the nodule size in the local coordinate system of each lung nodule, calculating the number of pixels occupied by the lung nodule, and determining the lung nodule volume by multiplying by the spatial resolution of each pixel.
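A worked sketch of the geometry in claim 5 follows: principal component analysis (covariance eigenvectors) is used to obtain the major, minor and shortest axes, which is a standard choice assumed here rather than specified by the claim, and the voxel size is illustrative.

```python
import numpy as np

def nodule_geometry(voxel_coords, voxel_size_mm=(0.5, 0.5, 0.5)):
    """Centroid, principal axes and volume of a nodule from the (z, y, x)
    indices of its voxels. Axes come from PCA on the voxel coordinates;
    volume is the voxel count times the volume of one voxel."""
    pts = np.asarray(voxel_coords, float) * np.asarray(voxel_size_mm)
    centroid = pts.mean(axis=0)              # origin of the local frame
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axes = eigvecs[:, ::-1].T                # rows: major, minor, shortest
    volume_mm3 = len(pts) * float(np.prod(voxel_size_mm))
    return centroid, axes, volume_mm3

# Example: an ellipsoidal voxel blob standing in for a segmented nodule
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
blob = ((zz - 20) / 12) ** 2 + ((yy - 20) / 8) ** 2 + ((xx - 20) / 5) ** 2 <= 1
centroid, axes, vol = nodule_geometry(np.argwhere(blob))
print(f"centroid = {centroid.round(1)}, volume = {vol:.0f} mm^3")
```

The rows of `axes` give the x-, y- and z-directions of the nodule's local coordinate system in the order the claim lists them (major, minor, shortest).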
6. The intraoperative lung nodule localization system based on endoscopic B-ultrasound images of claim 2, wherein the AI display unit comprises:
a scheme acquisition module, used for acquiring the patient's actively uploaded historical medical images and physiological data, and generating a surgical plan for the patient based on these data and the important structures and importance grades of the tissues surrounding the lung nodule;
a real-time positioning module, used for acquiring the coordinate data of the lung nodule positioning module, monitoring the position and state of the surgical equipment in real time, and transmitting the acquired positioning data to the real-time navigation module in real time;
a real-time navigation module, used for receiving the surgical equipment positioning data from the real-time positioning module, calculating the coordinate difference between the surgical equipment position and the lung nodule, and generating real-time navigation instructions;
and a display module, used for displaying the patient's operative data and the real-time images acquired by B-ultrasound in real time.
7. The intraoperative lung nodule localization system based on endoscopic B-ultrasound images of claim 6, wherein the AI display unit further comprises:
an intraoperative data recording and analysis module, used for recording detailed data during the operation in real time, including the position, size and shape of the lung nodule and the doctor's surgical operation data;
and a remote collaboration module, used for transmitting the intraoperative B-ultrasound images and nodule positioning data in real time, based on the Internet of Things, to the terminal of the remote doctor, realizing real-time data interaction between the remote doctor and the on-site doctor.