
CN119942033A - Bronchoscope automatic navigation method, device, electronic equipment and storage medium

Info

Publication number: CN119942033A
Application number: CN202411724705.8A
Authority: CN (China)
Prior art keywords: bronchoscope, point, real, projection, point cloud
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 宁国琛, 杨怡光, 廖洪恩
Assignee (original and current): Tsinghua University
Application filed by Tsinghua University; priority to CN202411724705.8A

Landscapes

  • Endoscopes (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention provides a bronchoscope automatic navigation method and apparatus, an electronic device and a storage medium, relating to the technical field of bronchoscopes. The bronchoscope automatic navigation method includes: in response to initiation of a bronchoscopy operation, acquiring an established virtual bronchoscope model of a target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest within the bronchi of the target patient according to the navigation path; in response to receiving a live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining the current pose of the bronchoscope according to the live-action depth map and the point cloud of the current position by using a projection function. The present invention can effectively improve the accuracy of bronchoscope automatic navigation.

Description

Bronchoscope automatic navigation method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of bronchoscopes, in particular to an automatic bronchoscope navigation method, an automatic bronchoscope navigation device, electronic equipment and a storage medium.
Background
Bronchoscopy, as an important endoscopic technique, plays a key role in clinical diagnosis and treatment. Traditionally, the technique relies on the experience and visual judgment of a physician, who manipulates the bronchoscope deep into the respiratory tract for observation and treatment. However, the complexity of the airway structure and the narrowness of the passages, particularly when lesions are located in the distal bronchioles, present significant challenges for manual operation: not only is a high degree of operating skill required of the clinician, but precision limitations may also affect the accuracy of diagnosis and the effectiveness of treatment.
To overcome these difficulties, bronchoscope automatic navigation techniques have been developed and have made significant progress. These techniques combine three-dimensional imaging, such as computed tomography (CT), with advanced image processing methods, such as convolutional neural networks (CNN), to realize three-dimensional reconstruction of the patient's respiratory tract. Through a computer screen, a doctor can intuitively see the three-dimensional structure of the tracheobronchial tree, the real-time image under the bronchoscope and the virtual bronchoscope view, which greatly improves the accuracy and visualization level of bronchoscope navigation. The application of this technology not only reduces operative risk and patient trauma, but also remarkably improves the observation and diagnosis of intrabronchial lesions, and has broad application prospects in fields such as early diagnosis of bronchopulmonary carcinoma, tumor resection and lesion biopsy. However, current bronchoscope automatic navigation techniques still face some challenges. Owing to the complexity of the respiratory system, particularly when dealing with abnormal bronchial morphology, structural deformation caused by respiration, or airway narrowing, existing bronchoscope navigation systems may be unable to adapt accurately to these changes, resulting in positioning errors and insufficient accuracy.
Therefore, how to effectively improve the accuracy of automatic bronchoscope navigation is a technical problem to be solved.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a method, an apparatus, an electronic device and a storage medium for automatically navigating a bronchoscope, so as to effectively improve accuracy of automatically navigating the bronchoscope.
The invention provides an automatic bronchoscope navigation method, which comprises the following steps.
The method comprises: in response to initiation of a bronchoscopy operation, acquiring an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest within the bronchi of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main trachea of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining the current pose of the bronchoscope according to the live-action depth map and the point cloud of the current position by using a projection function, so that the operator adjusts the movement path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
According to the bronchoscope automatic navigation method provided by the invention, extracting depth information from the live-action image comprises: acquiring a feature map of the live-action image; adding random noise obeying a normal distribution to the feature map by using a forward noising model to obtain a noise feature map; and obtaining the live-action depth map according to the feature map and the noise feature map by using a reverse denoising model, wherein the reverse denoising model is a neural network model trained with a training data set, the training data set comprises multiple groups of labeled sample data, the sample data are noise feature maps obtained by adding noise, via the forward noising model, to live-action images captured by the bronchoscope at different preset positions within a patient's bronchi, and the labels are the live-action depth maps corresponding to the sample data.
According to the bronchoscope automatic navigation method provided by the invention, acquiring the feature map of the live-action image comprises: processing the live-action image by using a multi-scale feature encoding network to obtain feature maps of the live-action image at multiple scales; fusing the feature maps of the multiple scales by using a weighted summation algorithm to obtain an aggregate feature map of the live-action image; and taking the aggregate feature map as the feature map of the live-action image.
According to the bronchoscope automatic navigation method provided by the invention, extracting point cloud information from the live-action depth map to obtain the point cloud of the current position comprises: for each pixel point on the live-action depth map, performing the following operations to obtain each data point of the point cloud: performing coordinate transformation, by using the imaging parameters of the bronchoscope, on the x-axis and y-axis coordinates of the pixel point in the live-action depth map to obtain the corresponding x-axis and y-axis coordinates of the pixel point in a point cloud coordinate system; and obtaining the data point of the point cloud corresponding to the pixel point according to the depth value of the pixel point and its corresponding x-axis and y-axis coordinates in the point cloud coordinate system.
According to the bronchoscope automatic navigation method provided by the invention, obtaining the current pose of the bronchoscope according to the live-action image and the point cloud of the current position by using the projection function comprises: projecting each data point of the point cloud onto a two-dimensional plane corresponding to the live-action image according to a first initial estimated pose of the bronchoscope by using the projection function, to obtain a first projection point corresponding to each data point; obtaining a first re-projection error according to a first distance between the first projection point corresponding to each data point of the point cloud and the pixel point of the live-action image corresponding to the data point, by using a loss function; iteratively adjusting the first initial estimated pose of the bronchoscope by using a preset algorithm to minimize the first re-projection error until an iteration termination condition is reached; and obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized.
According to the bronchoscope automatic navigation method provided by the invention, obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized comprises: determining a second initial estimated pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized; projecting each data point of the point cloud onto the two-dimensional plane corresponding to the live-action image according to the second initial estimated pose by using the projection function, to obtain a second projection point corresponding to each data point; selecting, from the projection data pairs corresponding to all data points of the point cloud, multiple groups of projection data pairs satisfying preset refinement estimation conditions, wherein one group of projection data pairs consists of the second projection point corresponding to one data point and the corresponding pixel point in the live-action image, and the preset refinement estimation conditions comprise: the pixel point of each projection data pair is located within a preset range of the second projection point of the projection data pair, and the distance between the second projection point of the projection data pair and the third projection point of the corresponding data point on the live-action image of the target patient's bronchi captured by the bronchoscope at the previous position is smaller than a preset distance threshold; obtaining a second re-projection error according to a second distance between the second projection point and the pixel point of each group of projection data pairs, by using the loss function; iteratively adjusting the second initial estimated pose of the bronchoscope by using a preset algorithm to minimize the second re-projection error until an iteration termination condition is reached; and obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the second re-projection error is minimized.
The invention further provides a bronchoscope automatic navigation apparatus, which comprises a first acquisition module, a second acquisition module, a third acquisition module, a fourth acquisition module and a fifth acquisition module. The first acquisition module is configured to, in response to initiation of a bronchoscopy operation, acquire an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient. The second acquisition module is configured to obtain a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest within the bronchi of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main trachea of the target patient as a starting point and the region of interest as a target point. The third acquisition module is configured to, in response to receiving a live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, extract depth information from the live-action image to obtain a live-action depth map corresponding to the current position. The fourth acquisition module is configured to extract point cloud information from the live-action depth map to obtain the point cloud of the current position. The fifth acquisition module is configured to obtain the current pose of the bronchoscope according to the live-action depth map and the point cloud of the current position by using a projection function, so that the operator adjusts the movement path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
The invention also provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the bronchoscope automatic navigation method as described in any one of the above.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a bronchoscope automatic navigation method as described in any of the above.
The invention also provides a computer program product comprising a computer program which when executed by a processor implements a bronchoscope automatic navigation method as described in any one of the above.
According to the bronchoscope automatic navigation method, apparatus, electronic device and storage medium provided by the invention, the live-action image captured by the bronchoscope at the current position is obtained, and depth information and point cloud information are extracted from it, so that the point cloud of the current position can be constructed. By using the live-action image and the point cloud of the current position in combination with a projection function, the current pose of the bronchoscope can be accurately calculated, and a clear virtual guidance interface with a sufficient field of view can be provided. The operator can promptly adjust the movement path of the bronchoscope based on accurate current pose information, thereby realizing on-track movement along the planned route and overcoming the positioning errors and insufficient navigation accuracy caused by the complexity of the respiratory system (such as abnormal bronchial morphology, structural deformation caused by respiration and airway stenosis). The accuracy of bronchoscope automatic navigation is thereby effectively improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of the bronchoscope automatic navigation method provided by the invention.
Fig. 2 is a flow chart of a method for obtaining a live-action depth map corresponding to a current position.
Fig. 3 is a flow chart of a method for obtaining the current pose of a bronchoscope provided by the invention.
Fig. 4 is a schematic structural diagram of the bronchoscope automatic navigation device provided by the invention.
Fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The bronchoscope automatic navigation method of the present invention is described below with reference to fig. 1-3.
Fig. 1 is a schematic flow chart of the automatic bronchoscope navigation method provided by the invention, as shown in fig. 1, the method comprises the following steps:
Step 101, in response to initiation of a bronchoscopy procedure, a virtual bronchoscope model of the established target patient is acquired.
The target patient is a patient on whom bronchoscopy is to be performed.
The virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of a target patient.
In a specific implementation process, the lung airway diagram of the target patient may be obtained in various ways; for example, it may be obtained by performing computed tomography (CT) on the target patient, and the invention is not limited in this respect.
In a specific implementation process, the lung airway diagram can be finely segmented to construct an accurate three-dimensional airway mesh model, and a highly realistic virtual bronchoscope model can then be established from this mesh model by using volume rendering techniques. The virtual bronchoscope model provides the doctor with a more intuitive and comprehensive visualization tool for the lung airway.
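As an illustrative sketch only (the patent does not specify an implementation), a binary airway segmentation can be converted into a triangular surface mesh suitable for rendering; the use of scikit-image's marching cubes and the 0.5 iso-level here are assumptions:

```python
import numpy as np
from skimage import measure

def build_airway_mesh(airway_mask: np.ndarray, voxel_spacing=(1.0, 1.0, 1.0)):
    """Build a triangular surface mesh of the bronchial tree from a binary
    airway segmentation (e.g., derived from a chest CT scan).

    airway_mask: 3-D array, 1 inside the airway lumen, 0 elsewhere.
    voxel_spacing: physical voxel size in mm, ordered (z, y, x).
    """
    # Marching cubes extracts the iso-surface separating lumen from wall.
    verts, faces, normals, _ = measure.marching_cubes(
        airway_mask.astype(np.float32), level=0.5, spacing=voxel_spacing
    )
    return verts, faces, normals
```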
Step 102, obtaining a navigation path of bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move towards the region of interest in the bronchus of the target patient according to the navigation path.
The navigation path takes the upper part of the main trachea of the target patient as a starting point, and the region of interest as a target point.
In bronchoscopy, the region of interest (ROI) may include a variety of regions where structures or abnormalities associated with the pulmonary airways and their related lesions are located, for example manually or automatically delineated nodular lesions.
In a specific implementation process, the centerline of the target patient's airway can be extracted based on the virtual bronchoscope model to obtain an optimal path to the target point, and this optimal path is used as the navigation path to guide the subsequent examination or treatment. When determining the navigation path, the viewpoint should always remain centered to obtain the most comprehensive view, should keep a safe distance from the airway wall to avoid collision or injury, and the entire path must lie strictly inside the airway, so as to ensure the accuracy and safety of the operation. A simplified path search illustrating the in-airway constraint is sketched below.
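The following is a minimal sketch of path planning inside the airway, assuming a binary airway mask and voxel start/target indices; it finds a shortest voxel path that stays inside the lumen via breadth-first search. The centering and safe-distance requirements described above would, in practice, be added on top, e.g., as distance-transform-based costs:

```python
import numpy as np
from collections import deque

def navigation_path(airway_mask, start, target):
    """Breadth-first shortest voxel path from the upper main trachea (start)
    to the region of interest (target), restricted to the airway lumen.

    start, target: (z, y, x) voxel indices inside the airway mask.
    Returns the list of voxels from start to target, or None if disconnected.
    """
    shape = airway_mask.shape
    prev = {start: None}            # also serves as the visited set
    queue = deque([start])
    # 6-connected neighborhood keeps every step inside the volume grid.
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        node = queue.popleft()
        if node == target:
            break
        for dz, dy, dx in steps:
            nxt = (node[0] + dz, node[1] + dy, node[2] + dx)
            if (all(0 <= nxt[i] < shape[i] for i in range(3))
                    and airway_mask[nxt] and nxt not in prev):
                prev[nxt] = node
                queue.append(nxt)
    if target not in prev:
        return None                 # no connected path inside the airway
    path, node = [], target
    while node is not None:         # walk the predecessor chain backwards
        path.append(node)
        node = prev[node]
    return path[::-1]
```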
In a specific implementation, the operator may take various forms, and the invention is not limited in this respect.
For example, the operator may be a doctor, who may drive the bronchoscope to move within the bronchi of the target patient toward the region of interest in accordance with the navigation path in the visualized virtual bronchoscope model displayed on the display.
For another example, the operator may also be an interventional robot that may utilize a robotic arm to drive a bronchoscope to move within the bronchi of the target patient toward the region of interest in accordance with a navigation path extracted from the virtual bronchoscope model.
Step 103, in response to receiving the live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, extracting depth information from the live-action image to obtain the live-action depth map corresponding to the current position.
The live-action image is an actual image taken by the bronchoscope in the bronchi of the target patient.
The live-action depth map is a gray-scale image in which the value of each pixel represents the distance of the corresponding point in the scene from the bronchoscope's optical center.
In a specific implementation process, while moving within the bronchi of the target patient, the bronchoscope can capture live-action images of the bronchi at preset time intervals or preset distance intervals, and send the captured live-action images to the processing equipment executing the method provided by the invention. After receiving a live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, the processing equipment can extract depth information from the live-action image in various ways to obtain the live-action depth map corresponding to the current position.
For an embodiment of extracting depth information from a live-action image to obtain a live-action depth map corresponding to the current position, refer to the relevant content in fig. 2, which is not described herein.
Step 104, extracting point cloud information from the live-action depth map to obtain the point cloud of the current position.
In some embodiments, for each pixel point on the live-action depth map, the following operations may be performed, resulting in each data point of the point cloud:
Using the imaging parameters of the bronchoscope, performing coordinate transformation on the x-axis and y-axis coordinates of the pixel point in the live-action depth map to obtain the corresponding x-axis and y-axis coordinates of the pixel point in the point cloud coordinate system; and obtaining the data point in the point cloud corresponding to the pixel point according to the depth value of the pixel point and its corresponding x-axis and y-axis coordinates in the point cloud coordinate system.
In a specific implementation, the imaging parameters of the bronchoscope may include the focal length of the bronchoscope on the x, y axes of the camera coordinate system.
By way of example, let $(x, y)$ be any pixel point in the live-action depth map and let $d(x, y)$ denote its pixel value, which represents the distance from the corresponding location point on the bronchial wall to the bronchoscope optical center, i.e., the z-axis coordinate of that location point in the camera coordinate system of the bronchoscope. The corresponding data point in the point cloud can be obtained from the depth value $d(x, y)$ according to the following coordinate-system transformation model:

$$X = \frac{x \cdot d(x, y)}{f_x}, \qquad Y = \frac{y \cdot d(x, y)}{f_y}, \qquad Z = d(x, y) \qquad (1)$$

where $f_x$ and $f_y$ respectively denote the focal lengths of the bronchoscope on the x and y axes of the camera coordinate system, which can be obtained from the parameters of the bronchoscope; $x$ and $y$ are respectively the x-axis and y-axis coordinates of the pixel point in the live-action depth map, taken relative to the optical center; and $X$, $Y$ and $Z$ are respectively the x-axis, y-axis and z-axis coordinates, in the point cloud coordinate system, of the data point in the point cloud corresponding to the pixel point.
For each pixel point in the live-action depth map, a coordinate system conversion model shown in formula (1) can be utilized to obtain a data point in a point cloud corresponding to the pixel point, so that the point cloud of the position of the bronchoscope when the live-action image is shot is obtained.
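A minimal sketch of the back-projection of formula (1), assuming pixel coordinates are measured relative to the image center (taken here as the principal point); function and parameter names are illustrative:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy):
    """Back-project a live-action depth map into a point cloud per formula (1).

    depth:  H x W array; each value is the distance d(x, y) along the z-axis
            from the bronchoscope optical center.
    fx, fy: focal lengths of the bronchoscope on the camera x and y axes.
    """
    h, w = depth.shape
    # Pixel grids re-centered on the principal point (assumed at image center).
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - w / 2.0
    ys = ys - h / 2.0
    X = xs * depth / fx   # x-axis coordinate in the point cloud frame
    Y = ys * depth / fy   # y-axis coordinate in the point cloud frame
    Z = depth             # z-axis coordinate equals the depth value
    return np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
```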
Step 105, obtaining the current pose of the bronchoscope according to the live-action image and the point cloud of the current position by using a projection function, so that the operator can adjust the movement path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
In a specific implementation process, the current pose of the bronchoscope can be obtained according to the live-action image and the point cloud of the current position by using the projection function in various ways, and the invention is not limited in this respect.
For an embodiment of obtaining the current pose of the bronchoscope according to the live-action image and the point cloud of the current position by using the projection function, refer to the relevant content of fig. 3, which is not repeated here.
In a specific implementation process, the operator can promptly adjust the movement path of the bronchoscope at the next moment based on the current pose of the bronchoscope, ensuring a safe distance between the bronchoscope and the airway wall of the bronchus.
Fig. 2 is a flow chart of a method for obtaining a live-action depth map corresponding to a current position, provided by the invention, as shown in fig. 2, the method includes the following steps:
step 201, obtaining a feature map of a live-action image.
In a specific implementation process, the feature map of the live-action image can be acquired in various ways, and the invention is not limited in this respect.
In some embodiments, a Swin Transformer structure may be used as the backbone feature extraction model to extract the feature map of the live-action image. The feature extraction model comprises a multi-scale feature encoding network and an aggregation module. The multi-scale feature encoding network partitions the input live-action image into patches, maps them into token variables containing position embedding information, and extracts visual features at different scales; the aggregation module aggregates the visual features at the different scales to obtain a feature map that contains both global and local information of the live-action image and retains more feature information of the original image.
In the implementation process, the live-action image can be processed by utilizing a multi-scale feature coding network to obtain a feature map of multiple scales of the live-action image.
By way of example only, one live-action image is represented at $k$ scales $i = 1, \dots, k$; the multi-scale feature encoding network then comprises $k$ sets of convolution and downsampling operations, predicting a feature map $F_i$ at each scale $i$.

The aggregation module fuses the feature maps of the multiple scales by using a weighted summation algorithm to obtain the aggregate feature map $G$ of the live-action image. The calculation formula is as follows:

$$G = \sum_{i=1}^{k} w_i \cdot \mathrm{Conv}_{r_i}\!\left(\mathrm{Up}(F_i)\right) \qquad (2)$$

where $w_i$ is the fusion weight of the $i$-th scale; $\mathrm{Conv}_{r_i}$ denotes a dilated convolution with dilation rate $r_i$, used to increase the receptive field of the convolution kernel; and $\mathrm{Up}(\cdot)$ adjusts each feature map $F_i$ to the same size by nearest interpolation.
The aggregate feature map obtained with formula (2) can be used as the visual condition for guiding the reverse denoising model in step 203 to accurately estimate, in the depth latent space, the live-action depth map corresponding to the live-action image.
And finally, taking the obtained aggregate feature map as a feature map of the live-action image.
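A possible PyTorch sketch of the aggregation module of formula (2); the number of scales, the dilation rates and the learnable scalar weights are assumptions, since the text specifies only weighted summation, dilated convolution and nearest interpolation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAggregator(nn.Module):
    """Weighted fusion of k multi-scale feature maps, as in formula (2):
    each map is resized to a common size by nearest interpolation, passed
    through a dilated convolution to enlarge the receptive field, and
    summed with learnable scalar weights w_i."""

    def __init__(self, channels: int, dilation_rates=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in dilation_rates
        )
        self.weights = nn.Parameter(torch.ones(len(dilation_rates)))

    def forward(self, feature_maps):
        # feature_maps: list of k tensors (B, C, H_i, W_i), coarse to fine.
        target_size = feature_maps[0].shape[-2:]
        g = 0
        for w, conv, f in zip(self.weights, self.convs, feature_maps):
            f = F.interpolate(f, size=target_size, mode="nearest")
            g = g + w * conv(f)   # weighted summation of formula (2)
        return g
```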
Step 202, adding random noise obeying a normal distribution to the feature map by using the forward noising model to obtain a noise feature map.
The forward noising model is a latent variable model that can be used for generative tasks.
In a specific implementation process, the forward noising model can be the following diffusion process:

$$q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1 - \bar{\alpha}_t)\,\mathbf{I}\right) \qquad (3)$$

where $q(x_t \mid x_0)$ is a conditional probability distribution representing the probability distribution of the noise feature map $x_t$ at time step $t$ given the feature map $x_0$ of the initial live-action image; $\bar{\alpha}_t$ is the cumulative noise-schedule coefficient at time step $t$; and $\mathbf{I}$ is the identity matrix.
Using formula (3), random noise following a normal distribution is added to the feature map of the live-action image, so that different live-action images are processed into Gaussian-noise-blurred maps; the reverse denoising model in the following step then accurately predicts, in the Gaussian distribution space, the live-action depth map from the live-action image according to the visual condition obtained in step 201.
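A short sketch of the forward noising step of formula (3), assuming the standard DDPM cumulative schedule notation (the text gives only the Gaussian form; schedule and shapes are assumptions):

```python
import torch

def forward_noising(x0, t, alpha_bar):
    """Sample x_t ~ q(x_t | x_0) per formula (3):
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I).

    x0:        feature map of the live-action image, shape (B, C, H, W).
    t:         integer time steps, LongTensor of shape (B,).
    alpha_bar: precomputed cumulative noise schedule, shape (T,).
    """
    a = alpha_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)                     # standard Gaussian noise
    return torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps
```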
Step 203, obtaining the live-action depth map according to the feature map and the noise feature map by using the reverse denoising model.
In this step, through a denoising process guided by the visual condition, a monocular depth estimation task $p_\theta(z \mid c)$ is realized from the feature map and the noise feature map, where $c$ is the noise feature map and $z$ is the finally obtained live-action depth map. The depth distribution is iteratively corrected by the following formula and converted into the final live-action depth map:

$$p_\theta(z_{t-1} \mid z_t, c) = \mathcal{N}\!\left(z_{t-1};\ \mu_\theta(z_t, t, c),\ \sigma_t^2\,\mathbf{I}\right) \qquad (4)$$

where $p_\theta(z_{t-1} \mid z_t, c)$ is a conditional probability distribution representing the probability distribution of the live-action depth map $z_{t-1}$ at the previous time step $t-1$ given the live-action depth map $z_t$ at the current time step $t$ and the noise feature map $c$; $\theta$ denotes the parameters of the model, which can be obtained by training; $\mathcal{N}$ is the usual notation for a Gaussian (normal) distribution; and $\sigma_t^2$ represents the noise variance at time step $t$.

Here $\mu_\theta$ is the reverse denoising model, which gradually removes noise from the noisy map $z_t$ to predict the original depth map. The input of the reverse denoising model comprises the live-action depth map $z_t$ of the current time step $t$, and its output is the probability distribution of the live-action depth map $z_{t-1}$ of the previous time step $t-1$.
In a specific implementation process, the reverse denoising model can be constructed based on a neural network in various ways. By way of example only, the reverse denoising model may include multiple convolution layers for capturing input image features and progressively removing noise; skip connections or residual blocks between the convolution layers may be used to enhance feature propagation and gradient flow; and the time step t may be encoded as an additional input feature or incorporated into the weighting or activation functions of the network in a particular manner.
The reverse denoising model is a neural network model trained with a training data set; the training data set comprises multiple groups of labeled sample data, the sample data are noise feature maps obtained by adding noise, via the forward noising model, to live-action images captured by the bronchoscope at different preset positions within a patient's bronchi, and the labels are the live-action depth maps corresponding to the sample data.
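A hedged sketch of one reverse denoising step corresponding to formula (4); the noise-prediction parameterization of the posterior mean is an assumption, since the text specifies only the Gaussian form of the transition:

```python
import torch

@torch.no_grad()
def reverse_denoise_step(model, z_t, t, c, alpha, alpha_bar, sigma):
    """Sample z_{t-1} ~ p_theta(z_{t-1} | z_t, c) per formula (4), assuming
    the common DDPM setup in which the network predicts the added noise.

    model: network taking (z_t, t, c) and returning the predicted noise.
    t:     current integer time step (scalar).
    c:     the noise feature map / visual condition guiding the denoising.
    alpha, alpha_bar, sigma: per-step schedule tensors of shape (T,).
    """
    eps_hat = model(z_t, t, c)
    a_t, ab_t = alpha[t], alpha_bar[t]
    # Posterior mean mu_theta(z_t, t, c) under the noise-prediction model.
    mean = (z_t - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps_hat) / torch.sqrt(a_t)
    if t == 0:
        return mean                       # final step yields the depth map
    return mean + sigma[t] * torch.randn_like(z_t)
```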
Fig. 3 is a flow chart of a method for obtaining the current pose of a bronchoscope, as shown in fig. 3, provided by the invention, the method comprises the following steps:
Step 301, using a projection function, according to a first initial estimated pose of the bronchoscope, projecting each data point of the point cloud onto a two-dimensional plane corresponding to the live-action image, and obtaining a first projection point corresponding to each data point.
In a specific implementation process, the first initial estimated pose may be set according to experience or experimental results, and the invention is not limited in this respect.
The projection function is used to project points in three-dimensional space onto a two-dimensional image plane. The implementation of the projection function depends on the camera's internal parameters (e.g., focal length, optical center position, radial and tangential distortion parameters, etc.) and the camera's external parameters (i.e., pose, including rotation and translation).
Step 302, obtaining a first re-projection error according to a first distance between the first projection point corresponding to each data point of the point cloud and the pixel point of the live-action image corresponding to the data point, by using the loss function.
The loss function may include, but is not limited to, a mean square error loss function, a Huber loss function, and the like.
In a specific implementation process, a first distance between a first projection point corresponding to each data point of the point cloud and a pixel point of a live-action image corresponding to the data point can be brought into a loss function, so that a first re-projection error is obtained.
Step 303, iteratively adjusting the first initial estimated pose of the bronchoscope by using a preset algorithm to minimize the first re-projection error until an iteration termination condition is reached.
In a specific implementation, the first initial estimated pose of the bronchoscope may be iteratively adjusted using a variety of preset algorithms, such as the PnP (Perspective-n-Point) algorithm or the ICP (Iterative Closest Point) algorithm.
The iteration termination condition may include the first re-projection error reaching a minimum, the number of iterations reaching a preset upper limit, and the like.
The above steps 301 to 303 can be expressed as follows:

$$P^{*} = \arg\min_{P} \sum_{j} L_{\delta}\!\left( \left\| p_j^{k} - \pi\!\left(P, X_j\right) \right\| \right) \qquad (5)$$

where $L_{\delta}$ represents the Huber loss function; $\left(p_j^{k}, \pi(P, X_j)\right)$ is a matching pair formed by the first projection point corresponding to the $j$-th data point of the point cloud and the pixel point of the live-action image corresponding to that data point; $p_j^{k}$ represents the $j$-th pixel point of the $k$-th frame live-action image; $\pi(P, X_j)$ is the projection function that, using the estimated camera pose $P$, projects the $j$-th data point $X_j$ of the point cloud onto the two-dimensional plane corresponding to the $k$-th frame live-action image; and $\arg\min_{P}$ indicates seeking the parameter $P$ that minimizes the function that follows it.
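A compact sketch of steps 301 to 303 / formula (5), using SciPy's robust least squares with a Huber loss; the axis-angle pose parameterization and all names are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pose, points, fx, fy):
    """Project 3-D points into the image; pose = (rx, ry, rz, tx, ty, tz),
    an axis-angle rotation followed by a translation (an assumed convention)."""
    rot = Rotation.from_rotvec(pose[:3])
    cam = rot.apply(points) + pose[3:]
    return np.stack([fx * cam[:, 0] / cam[:, 2],
                     fy * cam[:, 1] / cam[:, 2]], axis=-1)

def estimate_pose(points3d, pixels2d, pose0, fx, fy):
    """Minimize the Huber-robust re-projection error of formula (5),
    iterating from the first initial estimated pose pose0."""
    def residuals(pose):
        return (project(pose, points3d, fx, fy) - pixels2d).ravel()
    result = least_squares(residuals, pose0, loss="huber", f_scale=1.0)
    return result.x   # pose minimizing the first re-projection error
```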
Step 304, obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized.
In some embodiments, the pose of the bronchoscope when the first re-projection error is minimized may be taken as the current pose of the bronchoscope.
In some embodiments, in order to obtain a more accurate estimation result of the current pose of the bronchoscope, so that the navigation process is more suitable for a bronchoscope scene, on the basis of the obtained pose of the bronchoscope when the first re-projection error is minimized, only part of data points in the point cloud in front of the bronchoscope are selected to carry out refined estimation of the pose of the bronchoscope through preset refinement estimation conditions. The specific steps are as follows.
And determining a second initial estimated pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized.
In the implementation process, the pose of the bronchoscope when the first re-projection error is minimized can be used as the second initial estimated pose, and the estimated pose of the bronchoscope is adjusted from the second initial estimated pose.
And projecting each data point of the point cloud onto a two-dimensional plane corresponding to the live-action image according to the second initial estimated pose by using a projection function, and obtaining a second projection point corresponding to each data point.
And selecting a plurality of groups of projection data pairs meeting preset refinement estimation conditions for the projection data pairs corresponding to all the data points of the point cloud. The projection data pair consists of a second projection point corresponding to one data point and a pixel point in the live-action image corresponding to the second projection point.
The preset refinement estimation conditions include that the pixel point of each projection data pair is located within a preset range of the second projection point of that projection data pair. For example, for a data point $X_b$ of the point cloud, the corresponding point is selected only from the pixel points within a preset range $\epsilon$ of its second projection point on the two-dimensional plane corresponding to the live-action image, where $\epsilon$ is a preset distance threshold.
The preset refinement estimation conditions further include that the distance between the second projection point of the projection data pair and the third projection point of the corresponding data point on the live-action image of the target patient's bronchi captured by the bronchoscope at its previous position is smaller than a preset distance threshold. The constraint is formulated as follows:

$$\left\| \pi\!\left(P, X_b\right) - \pi\!\left(P_{k-1}, X_b\right) \right\| < \tau \qquad (6)$$

where $\tau$ is the preset distance threshold; $P_{k-1}$ is the estimated pose of the bronchoscope at its previous position; $\pi(P, X_b)$ is the second projection point corresponding to the data point $X_b$; and $\pi(P_{k-1}, X_b)$ is the third projection point corresponding to the data point $X_b$.
Through the formula (6), the data points involved in the refined estimation of the bronchoscope pose are constrained to meet the rule that the camera moves continuously and slowly between adjacent frames. Therefore, the deviation of the pose estimation of the bronchoscope caused by abnormal data points in the point cloud can be avoided, and the precision of the pose estimation of the bronchoscope in the bronchoscope scene with sparse textures is effectively improved.
And obtaining a second re-projection error according to the second distance between the second projection point and the pixel point of each group of projection data pairs, by using the loss function.
And iteratively adjusting the second initial estimated pose of the bronchoscope by using a preset algorithm to minimize the second re-projection error until an iteration termination condition is reached.
And obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the second re-projection error is minimized.
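A small sketch of the selection of projection data pairs under the preset refinement estimation conditions, including the constraint of formula (6); array shapes and names are illustrative:

```python
import numpy as np

def select_refinement_pairs(proj2, proj3, pixels, eps, tau):
    """Select projection data pairs satisfying the preset refinement
    estimation conditions: the matched pixel lies within radius eps of the
    second projection point, and formula (6) holds, i.e. the second
    projection point moved by less than tau relative to the third projection
    point obtained from the previous bronchoscope position.

    proj2:  (N, 2) second projection points under the second initial pose.
    proj3:  (N, 2) third projection points under the previous-position pose.
    pixels: (N, 2) candidate matched pixels in the live-action image.
    """
    near_pixel = np.linalg.norm(pixels - proj2, axis=1) < eps
    slow_motion = np.linalg.norm(proj2 - proj3, axis=1) < tau   # formula (6)
    keep = near_pixel & slow_motion
    return proj2[keep], pixels[keep]
```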
The bronchoscope automatic navigation device provided by the invention is described below, and the bronchoscope automatic navigation device described below and the bronchoscope automatic navigation method described above can be correspondingly referred to each other.
Fig. 4 is a schematic structural diagram of the bronchoscope automatic navigation device provided by the invention. As shown in fig. 4, the apparatus 400 includes the following modules.
A first obtaining module 410, configured to obtain, in response to initiation of a bronchoscopy operation, an established virtual bronchoscope model of a target patient, where the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient.
And a second obtaining module 420, configured to obtain a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest in a bronchus of the target patient according to the navigation path, where the navigation path uses an upper portion of a main bronchus of the target patient as a starting point and the region of interest is a target point.
And a third obtaining module 430, configured to, in response to receiving a live-action image of the bronchoscope in the bronchus of the target patient photographed at the current position, extract depth information from the live-action image, and obtain a live-action depth map corresponding to the current position.
And a fourth obtaining module 440, configured to extract point cloud information from the live-action depth map, and obtain the point cloud of the current position.
And a fifth obtaining module 450, configured to obtain, according to the live-action depth map and the point cloud of the current position, a current pose of the bronchoscope by using a projection function, so that the operator adjusts a movement path of the bronchoscope at a next moment based on the current pose of the bronchoscope.
Fig. 5 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 5, the electronic device may include a processor 510, a communication interface 520, a memory 530 and a communication bus 540, wherein the processor 510, the communication interface 520 and the memory 530 communicate with each other through the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to execute the bronchoscope automatic navigation method, which includes: in response to initiation of a bronchoscopy operation, acquiring an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest within the bronchi of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main trachea of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining the current pose of the bronchoscope according to the live-action depth map and the point cloud of the current position by using a projection function, so that the operator adjusts the movement path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
Further, the logic instructions in the memory 530 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program codes.
In another aspect, the invention further provides a computer program product, which comprises a computer program that can be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer executes the bronchoscope automatic navigation method provided by the above methods, the method comprising: in response to initiation of a bronchoscopy operation, acquiring an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest within the bronchi of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main trachea of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining the current pose of the bronchoscope according to the live-action depth map and the point cloud of the current position by using a projection function, so that the operator adjusts the movement path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
In still another aspect, the invention further provides a non-transitory computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the bronchoscope automatic navigation method provided by the above methods is implemented, the method comprising: in response to initiation of a bronchoscopy operation, acquiring an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest within the bronchi of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main trachea of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image of the interior of the target patient's bronchi captured by the bronchoscope at the current position, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining the current pose of the bronchoscope according to the live-action depth map and the point cloud of the current position by using a projection function, so that the operator adjusts the movement path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention, and not for limiting the same, and although the present invention has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the spirit and scope of the technical solution of the embodiments of the present invention.

Claims (10)

1.一种支气管镜自动导航方法,其特征在于,包括:1. A bronchoscope automatic navigation method, characterized by comprising: 响应于支气管镜检查操作的启动,获取已建立的目标患者的虚拟支气管镜模型;其中,所述虚拟支气管镜模型是根据所述目标患者的肺部气道图建立的支气管树的三维结构模型;In response to the initiation of the bronchoscopic examination operation, obtaining an established virtual bronchoscopic model of the target patient; wherein the virtual bronchoscopic model is a three-dimensional structural model of the bronchial tree established according to the lung airway map of the target patient; 根据所述虚拟支气管镜模型,获取所述支气管镜检查的导航路径,以使操作者依照所述导航路径,驱动支气管镜在所述目标患者的支气管内向感兴趣区域移动;其中,所述导航路径以所述目标患者的主气管上部为起始点,所述感兴趣区域为目标点;According to the virtual bronchoscope model, a navigation path for the bronchoscopic examination is obtained, so that the operator drives the bronchoscope to move toward the region of interest in the bronchus of the target patient according to the navigation path; wherein the navigation path takes the upper part of the main bronchus of the target patient as the starting point, and the region of interest is the target point; 响应于接收到所述支气管镜在当前位置拍摄的所述目标患者的支气管内的实景图像,从所述实景图像中提取深度信息,得到所述当前位置对应的实景深度图;In response to receiving a real-scene image of the bronchus of the target patient captured by the bronchoscope at the current position, extracting depth information from the real-scene image to obtain a real-scene depth map corresponding to the current position; 从所述实景深度图中提取点云信息,得到所述当前位置的点云;Extracting point cloud information from the real-view depth map to obtain a point cloud at the current position; 根据所述实景深度图和所述当前位置的点云,利用投影函数,得到所述支气管镜的当前位姿,以使所述操作者基于所述支气管镜的当前位姿,调整所述支气管镜在下一时刻的移动路径。According to the real-life depth map and the point cloud of the current position, a projection function is used to obtain the current posture of the bronchoscope, so that the operator can adjust the moving path of the bronchoscope at the next moment based on the current posture of the bronchoscope. 2.根据权利要求1所述的支气管镜自动导航方法,其特征在于,所述从所述实景图像中提取深度信息,得到所述当前位置对应的实景深度图,包括:2. The bronchoscope automatic navigation method according to claim 1, characterized in that the step of extracting depth information from the real scene image to obtain a real scene depth map corresponding to the current position comprises: 获取所述实景图像的特征图;Obtaining a feature map of the real scene image; 利用前向加噪模型,向所述特征图添加服从正态分布的随机噪声,得到噪声特征图;Using a forward noise addition model, random noise that obeys a normal distribution is added to the feature map to obtain a noise feature map; 利用逆向去噪模型,根据所述特征图和所述噪声特征图,得到所述实景深度图;其中,所述去噪模型是利用训练数据集训练得到的神经网络模型,所述训练数据集包括多组携带标签的样本数据,所述样本数据是利用所述前向加噪模型,向所述支气管镜在患者的支气管内不同预设位置拍摄的实景图像添加噪声后得到的噪声特征图,所述标签为所述样本数据对应的实景深度图。The real-life depth map is obtained according to the feature map and the noise feature map by using an inverse denoising model; wherein the denoising model is a neural network model trained using a training data set, the training data set includes multiple groups of sample data with labels, the sample data is a noise feature map obtained by adding noise to the real-life image taken by the bronchoscope at different preset positions in the patient's bronchus by using the forward denoising model, and the label is the real-life depth map corresponding to the sample data. 3.根据权利要求2所述的支气管镜自动导航方法,其特征在于,所述获取所述实景图像的特征图,包括:3. 
The method for automatic bronchoscope navigation according to claim 2, characterized in that the step of obtaining the feature map of the real scene image comprises: 利用多尺度特征编码网络,处理所述实景图像,得到所述实景图像的多个尺度的特征图;Processing the real scene image using a multi-scale feature encoding network to obtain feature maps of multiple scales of the real scene image; 利用加权求和算法,融合所述多个尺度的特征图,得到所述实景图像的聚合特征图;Using a weighted sum algorithm, the feature maps of the multiple scales are fused to obtain an aggregated feature map of the real scene image; 将所述聚合特征图,作为所述实景图像的特征图。The aggregated feature map is used as the feature map of the real scene image. 4.根据权利要求1所述的支气管镜自动导航方法,其特征在于,所述从所述实景深度图中提取点云信息,得到所述当前位置的点云,包括:4. The automatic navigation method for bronchoscope according to claim 1, characterized in that extracting point cloud information from the real-view depth map to obtain the point cloud of the current position comprises: 针对所述实景深度图上的每一个像素点,执行如下操作,得到所述点云的每一个数据点:For each pixel point on the real scene depth map, perform the following operations to obtain each data point of the point cloud: 利用所述支气管镜的成像参数,对所述像素点在所述实景深度图中的x轴坐标和y轴坐标进行坐标变换,得到所述像素点在点云坐标系中对应的x轴坐标、y轴坐标和z轴坐标;Using the imaging parameters of the bronchoscope, coordinate transformation is performed on the x-axis coordinate and the y-axis coordinate of the pixel point in the real-view depth map to obtain the x-axis coordinate, the y-axis coordinate and the z-axis coordinate corresponding to the pixel point in the point cloud coordinate system; 根据所述像素点的深度值、所述像素点在所述点云坐标系中对应的x轴坐标、y轴坐标和z轴坐标,得到所述像素点对应的所述点云中的数据点。According to the depth value of the pixel point, the x-axis coordinate, y-axis coordinate and z-axis coordinate corresponding to the pixel point in the point cloud coordinate system, the data point in the point cloud corresponding to the pixel point is obtained. 5.根据权利要求1所述的支气管镜自动导航方法,其特征在于,所述根据所述实景图像和所述当前位置的点云,利用投影函数,得到所述支气管镜的当前位姿,包括:5. The bronchoscope automatic navigation method according to claim 1, characterized in that the current posture of the bronchoscope is obtained by using a projection function based on the real scene image and the point cloud of the current position, comprising: 利用投影函数,依照所述支气管镜的第一初始估计位姿,将所述点云的每一个数据点投影到所述实景图像对应的二维平面上,得到所述每一个数据点对应的第一投影点;Using a projection function, according to a first initial estimated position of the bronchoscope, each data point of the point cloud is projected onto a two-dimensional plane corresponding to the real scene image to obtain a first projection point corresponding to each data point; 利用损失函数,根据所述点云的每一个数据点对应的所述第一投影点与所述数据点对应的所述实景图像的像素点之间的第一距离,得到第一重投影误差;Obtaining a first reprojection error according to a first distance between the first projection point corresponding to each data point of the point cloud and a pixel point of the real scene image corresponding to the data point by using a loss function; 利用预设算法,迭代调整所述支气管镜的第一初始估计位姿,以最小化所述第一重投影误差,直至达到迭代终止条件;Iteratively adjusting the first initial estimated position of the bronchoscope by using a preset algorithm to minimize the first reprojection error until an iteration termination condition is reached; 根据所述第一重投影误差最小化时的所述支气管镜的位姿,得到所述支气管镜的当前位姿。The current position of the bronchoscope is obtained according to the position of the bronchoscope when the first projection error is minimized. 6.根据权利要求5所述的支气管镜自动导航方法,其特征在于,所述根据所述第一重投影误差最小化时的所述支气管镜的位姿,得到所述支气管镜的当前位姿,包括:6. 
The bronchoscope automatic navigation method according to claim 5, characterized in that the current posture of the bronchoscope is obtained according to the posture of the bronchoscope when the first reprojection error is minimized, comprising: 根据所述第一重投影误差最小化时的所述支气管镜的位姿,确定所述支气管镜的第二初始估计位姿;Determining a second initial estimated pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized; 利用投影函数,依照所述第二初始估计位姿,将所述点云的每一个数据点投影到所述实景图像对应的二维平面上,得到所述每一个数据点对应的第二投影点;Using a projection function, according to the second initial estimated pose, project each data point of the point cloud onto a two-dimensional plane corresponding to the real scene image to obtain a second projection point corresponding to each data point; 针对所述点云的所有数据点对应的投影数据对中,选择满足预设细化估计条件的多组投影数据对;其中,一组所述投影数据对由一个所述数据点对应的第二投影点和与其对应的实景图像中的像素点组成,所述预设细化估计条件包括:每一个所述投影数据对中的像素点位于所述投影数据对的第二投影点的预设范围之内,且所述投影数据对的第二投影点,与其对应的数据点在所述支气管镜在上一位置拍摄的所述目标患者的支气管内的实景图像上的第三投影点之间的距离小于预设距离阈值;For all the projection data pairs corresponding to the data points of the point cloud, multiple groups of projection data pairs that meet preset refinement estimation conditions are selected; wherein a group of the projection data pairs is composed of a second projection point corresponding to one of the data points and a pixel point in the real-scene image corresponding thereto, and the preset refinement estimation conditions include: the pixel point in each of the projection data pairs is located within a preset range of the second projection point of the projection data pair, and the distance between the second projection point of the projection data pair and a third projection point of the corresponding data point on the real-scene image of the bronchus of the target patient taken by the bronchoscope at the previous position is less than a preset distance threshold; 利用损失函数,根据每一组所述投影数据对的第二投影点与像素点之间的第二距离,得到第二重投影误差;Using the loss function, according to the second distance between the second projection point and the pixel point of each group of the projection data pair, a second reprojection error is obtained; 利用预设算法,迭代调整所述支气管镜的第二初始估计位姿,以最小化所述第二重投影误差,直至达到迭代终止条件;Iteratively adjusting the second initial estimated position of the bronchoscope by using a preset algorithm to minimize the second re-projection error until an iteration termination condition is reached; 根据所述第二重投影误差最小化时的所述支气管镜的位姿,得到所述支气管镜的当前位姿。The current position of the bronchoscope is obtained according to the position of the bronchoscope when the second projection error is minimized. 7.一种支气管镜自动导航装置,其特征在于,包括:7. 
7. An automatic bronchoscope navigation device, characterized in that it comprises:
a first acquisition module, configured to acquire an established virtual bronchoscope model of a target patient in response to the initiation of a bronchoscopic examination operation, wherein the virtual bronchoscope model is a three-dimensional structural model of the bronchial tree established from a lung airway map of the target patient;
a second acquisition module, configured to acquire a navigation path for the bronchoscopic examination according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest within the bronchus of the target patient along the navigation path, wherein the navigation path takes the upper part of the target patient's main trachea as its starting point and the region of interest as its target point;
a third acquisition module, configured to, in response to receiving a real-scene image of the interior of the target patient's bronchus captured by the bronchoscope at the current position, extract depth information from the real-scene image to obtain a real-scene depth map corresponding to the current position;
a fourth acquisition module, configured to extract point cloud information from the real-scene depth map to obtain the point cloud of the current position; and
a fifth acquisition module, configured to obtain the current pose of the bronchoscope by using a projection function according to the real-scene depth map and the point cloud of the current position, so that the operator can adjust the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
8. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the automatic bronchoscope navigation method according to any one of claims 1 to 6 when executing the computer program.
9. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the automatic bronchoscope navigation method according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the automatic bronchoscope navigation method according to any one of claims 1 to 6.
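The five modules of claim 7 compose a straight per-frame pipeline from image to pose. A skeletal sketch of that composition, reusing depth_map_to_point_cloud, project, and estimate_pose from the sketches above; the depth stub, the synthesized correspondences, and all names are hypothetical:

```python
import numpy as np

class BronchoscopeNavigator:
    """Skeletal composition of the acquisition modules of claim 7; assumes
    the functions from the preceding sketches are already defined."""

    def __init__(self, intrinsics=(200.0, 200.0, 100.0, 100.0)):
        self.intrinsics = intrinsics  # placeholder fx, fy, cx, cy

    def estimate_depth(self, image):
        # Module 3 stub: a real system would run a depth-estimation network.
        return np.ones(image.shape[:2])

    def on_frame(self, image, prev_pose):
        depth = self.estimate_depth(image)                          # module 3
        cloud = depth_map_to_point_cloud(depth, *self.intrinsics)   # module 4
        # Stub correspondence: matched pixels would come from feature
        # matching in practice; synthesizing them from prev_pose makes
        # the recovered pose equal prev_pose by construction.
        pixels = project(cloud, prev_pose, *self.intrinsics)
        return estimate_pose(cloud, pixels, prev_pose, self.intrinsics)  # module 5

# Smoke test: the identity pose is recovered from synthetic correspondences.
nav = BronchoscopeNavigator()
pose = nav.on_frame(np.zeros((8, 8, 3)), prev_pose=np.zeros(6))
```

Because the correspondences are synthesized from prev_pose, this is a self-checking smoke test of the pipeline's plumbing rather than a working navigator.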
CN202411724705.8A 2024-11-28 2024-11-28 Bronchoscope automatic navigation method, device, electronic equipment and storage medium Pending CN119942033A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411724705.8A CN119942033A (en) 2024-11-28 2024-11-28 Bronchoscope automatic navigation method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN119942033A (en) 2025-05-06

Family

ID=95542129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411724705.8A Pending CN119942033A (en) 2024-11-28 2024-11-28 Bronchoscope automatic navigation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN119942033A (en)

Similar Documents

Publication Publication Date Title
US11631174B2 (en) Adaptive navigation technique for navigating a catheter through a body channel or cavity
Shen et al. Context-aware depth and pose estimation for bronchoscopic navigation
Mori et al. Tracking of a bronchoscope using epipolar geometry analysis and intensity-based image registration of real and virtual endoscopic images
EP2710557B1 (en) Fast articulated motion tracking
US7081088B2 (en) Method and apparatus for automatic local path planning for virtual colonoscopy
US20120082351A1 (en) Fast 3d-2d image registration method with application to continuously guided endoscopy
CN102715906A (en) Method and system for 3D cardiac motion estimation from single scan of c-arm angiography
CN113327225B (en) Methods for providing airway information
US20220198693A1 (en) Image processing method, device and computer-readable storage medium
CN116051553B (en) Method and device for marking inside three-dimensional medical model
US12131476B2 (en) System and method for estimating motion of target inside tissue based on surface deformation of soft tissue
CN119942033A (en) Bronchoscope automatic navigation method, device, electronic equipment and storage medium
CN113936074B (en) Image processing method, device, electronic equipment and storage medium
CN116433874B (en) Bronchoscope navigation method, device, equipment and storage medium
CN112258533B (en) Method for segmenting cerebellum earthworm part in ultrasonic image
CN119090965B (en) Method and system for monitoring region of interest in laparoscopic surgery
Luó et al. On scale invariant features and sequential Monte Carlo sampling for bronchoscope tracking
CN119454236B (en) Airway navigation system and method based on bronchoscope view
Fang et al. Spatially Constrained and Deeply Learned Bilateral Structural Intensity-Depth Registration Autonomously Navigates a Flexible Endoscope
CN119174649B (en) Map expansion method and system in electromagnetic navigation bronchoscopy combined with medical image information
CN120543535A (en) Direct sparse odometry surgical navigation method, system, electronic device and medium
CN117710279A (en) Endoscope positioning method, electronic device, and non-transitory computer-readable storage medium
CN120199465A (en) A fast CT positioning navigation method and system
JP2025501453A (en) Processing images of objects and object parts, including multi-object constructs and deformed objects
CN119454236A (en) Airway navigation system and method based on bronchoscopic view

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination