Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a method, an apparatus, an electronic device and a storage medium for automatic bronchoscope navigation, so as to effectively improve the accuracy of automatic bronchoscope navigation.
The invention provides an automatic bronchoscope navigation method, which comprises the following steps.
The method comprises: in response to initiation of a bronchoscopy operation, obtaining an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest in the bronchus of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main bronchus of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image taken by the bronchoscope at a current position in the bronchus of the target patient, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining a current pose of the bronchoscope by using a projection function according to the live-action image and the point cloud of the current position, so that the operator adjusts the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
According to the bronchoscope automatic navigation method provided by the invention, extracting depth information from the live-action image comprises: obtaining a feature map of the live-action image; adding random noise that obeys a normal distribution to the feature map by using a forward noise adding model, to obtain a noise feature map; and obtaining the live-action depth map according to the feature map and the noise feature map by using a reverse denoising model. The reverse denoising model is a neural network model obtained by training with a training data set; the training data set comprises a plurality of groups of sample data carrying labels, where the sample data are noise feature maps obtained by adding noise, using the forward noise adding model, to live-action images taken by the bronchoscope at different preset positions in the bronchus of a patient, and the labels are the live-action depth maps corresponding to the sample data.
According to the bronchoscope automatic navigation method provided by the invention, obtaining the feature map of the live-action image comprises: processing the live-action image by using a multi-scale feature encoding network to obtain feature maps of the live-action image at multiple scales; fusing the feature maps of the multiple scales by using a weighted summation algorithm to obtain an aggregate feature map of the live-action image; and taking the aggregate feature map as the feature map of the live-action image.
According to the bronchoscope automatic navigation method provided by the invention, extracting point cloud information from the live-action depth map to obtain the point cloud of the current position comprises: for each pixel point on the live-action depth map, performing the following operations to obtain each data point of the point cloud: performing coordinate transformation on the x-axis and y-axis coordinates of the pixel point in the live-action depth map by using imaging parameters of the bronchoscope, to obtain corresponding x-axis and y-axis coordinates of the pixel point in a point cloud coordinate system; and obtaining the data point of the point cloud corresponding to the pixel point according to the depth value of the pixel point and the corresponding x-axis and y-axis coordinates of the pixel point in the point cloud coordinate system.
According to the bronchoscope automatic navigation method provided by the invention, obtaining the current pose of the bronchoscope by using the projection function according to the live-action image and the point cloud of the current position comprises: projecting, by using the projection function and according to a first initial estimated pose of the bronchoscope, each data point of the point cloud onto a two-dimensional plane corresponding to the live-action image, to obtain a first projection point corresponding to each data point; obtaining a first re-projection error by using a loss function according to a first distance between the first projection point corresponding to each data point of the point cloud and the pixel point of the live-action image corresponding to that data point; iteratively adjusting the first initial estimated pose of the bronchoscope by using a preset algorithm so as to minimize the first re-projection error, until an iteration termination condition is reached; and obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized.
According to the bronchoscope automatic navigation method provided by the invention, obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized comprises: determining a second initial estimated pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized; projecting, by using the projection function and according to the second initial estimated pose, each data point of the point cloud onto the two-dimensional plane corresponding to the live-action image, to obtain a second projection point corresponding to each data point; selecting, from the projection data pairs corresponding to all data points of the point cloud, a plurality of groups of projection data pairs meeting preset refinement estimation conditions, wherein one group of projection data pairs consists of the second projection point corresponding to one data point and the pixel point in the live-action image corresponding to that second projection point, and the preset refinement estimation conditions comprise that the pixel point of each projection data pair is located within a preset range of the second projection point of that projection data pair, and that the distance between the second projection point and a third projection point of the corresponding data point on the live-action image in the bronchus of the target patient taken by the bronchoscope at the previous position is smaller than a preset distance threshold; obtaining a second re-projection error by using the loss function according to a second distance between the second projection point and the pixel point of each group of projection data pairs; iteratively adjusting the second initial estimated pose of the bronchoscope by using the preset algorithm so as to minimize the second re-projection error, until the iteration termination condition is reached; and obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the second re-projection error is minimized.
The invention further provides a bronchoscope automatic navigation apparatus, which comprises a first acquisition module, a second acquisition module, a third acquisition module, a fourth acquisition module and a fifth acquisition module. The first acquisition module is used for acquiring, in response to initiation of a bronchoscopy operation, an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient. The second acquisition module is used for acquiring a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest in the bronchus of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main bronchus of the target patient as a starting point and the region of interest as a target point. The third acquisition module is used for extracting, in response to receiving a live-action image taken by the bronchoscope at a current position in the bronchus of the target patient, depth information from the live-action image, to obtain a live-action depth map corresponding to the current position. The fourth acquisition module is used for extracting point cloud information from the live-action depth map, to obtain a point cloud of the current position. The fifth acquisition module is used for obtaining a current pose of the bronchoscope by using a projection function according to the live-action image and the point cloud of the current position, so that the operator adjusts the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor, when executing the computer program, implements the bronchoscope automatic navigation method as described in any of the above.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a bronchoscope automatic navigation method as described in any of the above.
The invention also provides a computer program product comprising a computer program which when executed by a processor implements a bronchoscope automatic navigation method as described in any one of the above.
According to the bronchoscope automatic navigation method, apparatus, electronic device and storage medium provided by the invention, the live-action image taken by the bronchoscope at the current position is obtained, and depth information and point cloud information are extracted from the live-action image, so that the point cloud of the current position can be constructed. By utilizing the live-action image and the point cloud of the current position in combination with a projection function, the current pose of the bronchoscope can be accurately calculated, and a clear virtual guide interface with a sufficient field of view can be provided. The operator can timely adjust the moving path of the bronchoscope based on accurate current pose information of the bronchoscope, thereby realizing movement along the planned route and overcoming the problems of positioning errors and insufficient navigation path precision caused by the complexity of the respiratory system (such as abnormal bronchial morphology, structural deformation caused by respiration, airway stenosis and the like). This effectively improves the accuracy of automatic bronchoscope navigation.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The bronchoscope automatic navigation method of the present invention is described below with reference to fig. 1-3.
Fig. 1 is a schematic flow chart of the automatic bronchoscope navigation method provided by the invention, as shown in fig. 1, the method comprises the following steps:
Step 101, in response to initiation of a bronchoscopy procedure, a virtual bronchoscope model of the established target patient is acquired.
The target patient is a patient for whom bronchoscopy is desired.
The virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of a target patient.
In a specific implementation process, the lung airway diagram of the target patient may be obtained in various manners; for example, it may be obtained by performing computed tomography (CT) on the target patient, which is not limited by the present specification.
In a specific implementation process, the lung airway diagram can be finely segmented to construct an accurate three-dimensional airway grid model, and a highly realistic virtual bronchoscope model can be established from the three-dimensional airway grid model by using volume rendering technology. The virtual bronchoscope model provides a doctor with a more intuitive and comprehensive lung airway visualization tool.
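By way of illustration only, the sketch below shows how a three-dimensional airway mesh could be extracted from a segmented CT volume using the marching cubes algorithm; the binary airway mask is assumed to have been produced by the fine segmentation step, and the function and parameter names are illustrative assumptions rather than the exact implementation of the invention:

```python
import numpy as np
from skimage import measure

def build_airway_mesh(airway_mask: np.ndarray, voxel_spacing=(1.0, 1.0, 1.0)):
    """Build a triangular surface mesh of the bronchial tree from a binary
    airway mask segmented out of the patient's CT volume (a sketch).

    airway_mask   -- 3-D boolean/0-1 array, True inside the airway lumen
    voxel_spacing -- physical voxel size (z, y, x), e.g. in millimetres
    """
    # Marching cubes extracts the iso-surface separating airway from tissue.
    verts, faces, normals, _ = measure.marching_cubes(
        airway_mask.astype(np.float32), level=0.5, spacing=voxel_spacing)
    # The mesh can then be volume-rendered as the virtual bronchoscope model.
    return verts, faces, normals
```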
Step 102, obtaining a navigation path of bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move towards the region of interest in the bronchus of the target patient according to the navigation path.
The navigation path takes the upper part of the main bronchus of the target patient as a starting point, and the region of interest as a target point.
In bronchoscopy, the region of interest (Region of Interest, ROI) may include a variety of regions where structures or abnormalities associated with the pulmonary airways and their associated lesions are located, for example, manually or automatically delineated nodular lesions.
In a specific implementation process, the centerline of the airway of the target patient can be extracted based on the virtual bronchoscope model, so as to obtain an optimal path reaching the target point, and the optimal path is used as the navigation path to guide subsequent examination or treatment operations. When the navigation path is determined, the viewpoint should always remain centered in the airway to obtain the most comprehensive view, while keeping a safe distance from the airway wall to avoid collision or damage; in addition, the whole path must lie strictly inside the airway, so as to ensure the accuracy and safety of the operation.
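As a rough sketch only, such path extraction can be illustrated by a breadth-first search over the segmented airway mask from a seed near the upper main bronchus to the region of interest; centerline extraction and the safe-distance constraint are omitted here, and all names are hypothetical:

```python
from collections import deque

def navigation_path(airway_mask, start, target):
    """Breadth-first search for a voxel path from `start` (a seed near the
    upper main bronchus) to `target` (the region of interest) that stays
    strictly inside the segmented airway."""
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:       # backtrack from target to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dz, dy, dx in offsets:
            nxt = (node[0] + dz, node[1] + dy, node[2] + dx)
            if (all(0 <= c < s for c, s in zip(nxt, airway_mask.shape))
                    and airway_mask[nxt] and nxt not in prev):
                prev[nxt] = node
                queue.append(nxt)
    return None  # region of interest not reachable inside the airway
```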
In a specific implementation, the operator may be any of a variety of executing entities, and is not limited by the expression of the present specification.
For example, the operator may be a doctor, who may drive the bronchoscope to move within the bronchi of the target patient toward the region of interest in accordance with the navigation path in the visualized virtual bronchoscope model displayed on the display.
For another example, the operator may also be an interventional robot that may utilize a robotic arm to drive a bronchoscope to move within the bronchi of the target patient toward the region of interest in accordance with a navigation path extracted from the virtual bronchoscope model.
Step 103, in response to receiving a live-action image in the bronchus of the target patient taken by the bronchoscope at the current position, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position.
The live-action image is an actual image taken by the bronchoscope in the bronchi of the target patient.
The live-action depth map is a gray-scale image in which the value of each pixel represents the distance of the corresponding point in the scene from the bronchoscope's optical center.
In the specific implementation process, the bronchoscope can shoot live-action images in the bronchus at preset time intervals or preset distance intervals in the process of moving in the bronchus of the target patient, and the shot live-action images are sent to processing equipment for executing the method provided by the invention. After receiving a live-action image in the bronchus of a target patient shot by the bronchoscope at the current position, the processing equipment can extract depth information from the live-action image in various modes to obtain a live-action depth map corresponding to the current position.
For an embodiment of extracting depth information from a live-action image to obtain a live-action depth map corresponding to the current position, refer to the relevant content in fig. 2, which is not described herein.
Step 104, extracting point cloud information from the live-action depth map to obtain the point cloud of the current position.
In some embodiments, for each pixel point on the live-action depth map, the following operations may be performed, resulting in each data point of the point cloud:
Coordinate transformation is performed on the x-axis and y-axis coordinates of the pixel point in the live-action depth map by using imaging parameters of the bronchoscope, to obtain corresponding x-axis and y-axis coordinates of the pixel point in the point cloud coordinate system; and the data point in the point cloud corresponding to the pixel point is obtained according to the depth value of the pixel point and the corresponding x-axis and y-axis coordinates of the pixel point in the point cloud coordinate system.
In a specific implementation, the imaging parameters of the bronchoscope may include the focal lengths of the bronchoscope on the x and y axes of the camera coordinate system.
By way of example, let (x, y) be any pixel point in the live-action depth map and d(x, y) its pixel value in the live-action depth map; d(x, y) represents the distance of the corresponding location point on the bronchus from the bronchoscope optical center, i.e. the z-axis coordinate of that location point in the camera coordinate system of the bronchoscope. The depth value d(x, y) of (x, y) can be converted according to the following coordinate system conversion model into the corresponding x-axis and y-axis coordinates in the point cloud coordinate system, so as to obtain the data point in the point cloud corresponding to (x, y):

$$X = \frac{x \cdot d(x, y)}{f_x}, \qquad Y = \frac{y \cdot d(x, y)}{f_y}, \qquad Z = d(x, y) \qquad (1)$$

where $f_x$ and $f_y$ respectively represent the focal lengths of the bronchoscope on the x and y axes of the camera coordinate system and can be obtained from the parameters of the bronchoscope; x and y are respectively the x-axis and y-axis coordinates of the pixel point (x, y) in the live-action depth map; and X, Y and Z are respectively the x-axis, y-axis and z-axis coordinates, in the point cloud coordinate system, of the data point in the point cloud corresponding to that pixel point.
For each pixel point in the live-action depth map, a coordinate system conversion model shown in formula (1) can be utilized to obtain a data point in a point cloud corresponding to the pixel point, so that the point cloud of the position of the bronchoscope when the live-action image is shot is obtained.
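The coordinate system conversion model of formula (1) can be applied to all pixel points at once; a minimal sketch (assuming pixel coordinates measured from the optical center and no lens distortion) is as follows:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy):
    """Back-project a live-action depth map into a point cloud using
    formula (1). `depth` is an HxW array; fx, fy are the focal lengths of
    the bronchoscope on the x and y axes of the camera coordinate system."""
    h, w = depth.shape
    # Pixel coordinates measured from the optical center (assumed at the image center).
    x = np.arange(w) - (w - 1) / 2.0
    y = np.arange(h) - (h - 1) / 2.0
    xx, yy = np.meshgrid(x, y)
    X = xx * depth / fx   # x-axis coordinate in the point cloud coordinate system
    Y = yy * depth / fy   # y-axis coordinate in the point cloud coordinate system
    Z = depth             # z-axis coordinate equals the depth value
    return np.stack([X, Y, Z], axis=-1).reshape(-1, 3)  # one data point per pixel
```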
Step 105, obtaining the current pose of the bronchoscope by using a projection function according to the live-action image and the point cloud of the current position, so that the operator can adjust the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
In the specific implementation process, the current pose of the bronchoscope can be obtained by utilizing a projection function according to the live-action image and the point cloud of the current position in various modes, and the method is not limited by the expression of the specification.
For an embodiment of obtaining the current pose of the bronchoscope by using the projection function according to the live-action image and the point cloud of the current position, refer to the relevant content in fig. 3, which is not described in detail here.
In the specific implementation process, an operator can timely adjust the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope, and the safe distance between the bronchoscope and the airway wall surface of the bronchus is ensured.
Fig. 2 is a flow chart of a method for obtaining a live-action depth map corresponding to a current position, provided by the invention, as shown in fig. 2, the method includes the following steps:
Step 201, obtaining a feature map of the live-action image.
In the specific implementation process, the feature map of the live-action image can be acquired in various modes, and the feature map is not limited by the expression of the specification.
In some embodiments, a Swin Transformer structure may be used as the backbone feature extraction model to extract the feature map of the live-action image. The feature extraction model comprises a multi-scale feature encoding network and an aggregation module. The multi-scale feature encoding network is used for partitioning the input live-action image into patches, mapping them into token variables containing position embedding information, and extracting visual features at different scales; the aggregation module is used for aggregating the visual features at different scales to obtain a feature map that contains both global and local information of the live-action image and retains more feature information of the original image.
In the implementation process, the live-action image can be processed by using the multi-scale feature encoding network to obtain feature maps of the live-action image at multiple scales.
By way of example, suppose a live-action image is represented at k scales. The multi-scale feature encoding network then comprises k groups of convolution and downsampling operations, which predict the feature maps $F_1, F_2, \ldots, F_k$ at the k scales.

The aggregation module fuses the feature maps of the multiple scales by using a weighted summation algorithm to obtain the aggregate feature map G of the live-action image. The calculation formula is as follows:

$$G = \sum_{i=1}^{k} w_i \cdot \mathrm{DConv}_{r_i}(F_i) \qquad (2)$$

where $w_i$ is the fusion weight of the i-th scale; $\mathrm{DConv}_{r_i}$ denotes a dilated convolution with dilation rate $r_i$, which is used for increasing the receptive field of the convolution kernel; and each feature map $F_i$ is adjusted to the same size by nearest-neighbor interpolation.
The aggregate feature map obtained by using formula (2) can be used as the visual condition for guiding the reverse denoising model in step 203 to accurately estimate, in the depth latent space, the live-action depth map corresponding to the live-action image.
And finally, taking the obtained aggregate feature map as a feature map of the live-action image.
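A minimal PyTorch-style sketch of the aggregation described by formula (2) is given below; the number of scales, the dilation rates and the learnable fusion weights are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAggregation(nn.Module):
    """Fuse k feature maps of different scales into one aggregate feature map G
    by dilated convolution, nearest-neighbor resizing and weighted summation."""
    def __init__(self, channels, dilation_rates=(1, 2, 4, 8)):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in dilation_rates])                      # enlarge receptive field
        self.weights = nn.Parameter(torch.ones(len(dilation_rates)))

    def forward(self, feature_maps, out_size):
        w = torch.softmax(self.weights, dim=0)             # normalized fusion weights
        g = 0
        for wi, conv, f in zip(w, self.convs, feature_maps):
            f = conv(f)                                    # dilated convolution
            f = F.interpolate(f, size=out_size, mode="nearest")  # same size for all scales
            g = g + wi * f                                 # weighted summation
        return g
```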
Step 202, adding random noise obeying a normal distribution to the feature map by using a forward noise adding model, to obtain a noise feature map.
The forward noise adding model is a latent variable model that can be used for generative tasks.
In a specific implementation process, the forward noise adding model can be a diffusion process as follows:
$$q(z_t \mid z_0) = \mathcal{N}\!\left(z_t;\ \sqrt{\bar{\alpha}_t}\, z_0,\ (1 - \bar{\alpha}_t)\, I\right) \qquad (3)$$

where $q(z_t \mid z_0)$ is a conditional probability distribution representing the probability distribution of the noise feature map $z_t$ at time step t given the feature map $z_0$ of the initial live-action image; $\bar{\alpha}_t$ is the cumulative noise schedule coefficient at time step t; and I is the identity matrix.
By using formula (3) to add random noise conforming to a normal distribution to the feature map of the live-action image, different live-action images can be processed into Gaussian-noise-blurred images, so that the reverse denoising model in the following steps can accurately predict, in the Gaussian distribution space, the live-action depth map from the live-action image according to the visual condition obtained in step 201.
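A sketch of the diffusion process of formula (3), sampling the noise feature map directly from the feature map of the live-action image, could look as follows; the linear noise schedule is an assumption, not fixed by the invention:

```python
import torch

def forward_add_noise(z0, t, alpha_bar):
    """Sample z_t ~ q(z_t | z_0) = N(sqrt(alpha_bar_t) * z_0, (1 - alpha_bar_t) * I).

    z0        -- feature map of the live-action image
    t         -- integer time step
    alpha_bar -- 1-D tensor of cumulative products of the noise schedule
    """
    eps = torch.randn_like(z0)                     # random noise ~ N(0, I)
    a = alpha_bar[t]
    return torch.sqrt(a) * z0 + torch.sqrt(1.0 - a) * eps

# Example of a linear schedule (an assumption for illustration):
betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
```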
Step 203, obtaining the live-action depth map according to the feature map and the noise feature map by using the reverse denoising model.
In this step, through a denoising process guided by the visual condition, a monocular depth estimation task $p(z \mid c)$ is realized according to the feature map and the noise feature map, where c is the noise feature map and z is the finally obtained live-action depth map. The depth distribution is iteratively corrected by the following formula and converted into the final live-action depth map:

$$p_{\theta}(z_{t-1} \mid z_t, c) = \mathcal{N}\!\left(z_{t-1};\ \mu_{\theta}(z_t, c, t),\ \sigma_t^2 I\right) \qquad (4)$$

where $p_{\theta}(z_{t-1} \mid z_t, c)$ is a conditional probability distribution representing the probability distribution of the live-action depth map $z_{t-1}$ at the previous time step t-1, given the live-action depth map $z_t$ at the current time step t and the noise feature map c; θ is a parameter of the model, which can be obtained by training; $\mathcal{N}$ is the common notation for a Gaussian (normal) distribution; and $\sigma_t^2$ represents the noise variance at time step t.

Here, $\mu_{\theta}$ is the reverse denoising model, which gradually removes noise from the noisy image so as to predict the original depth map. The input of the reverse denoising model comprises the live-action depth map $z_t$ at the current time step t, and its output is the probability distribution of the live-action depth map $z_{t-1}$ at the previous time step t-1.
In a specific implementation process, the reverse denoising model can be constructed based on a neural network in various manners. By way of example only, the reverse denoising model may include multiple convolution layers for capturing input image features and progressively removing noise; skip connections or residual blocks between the convolution layers may be used to enhance feature propagation and gradient flow; and the time step t may be encoded as an additional input feature or incorporated into the weighting or activation functions of the network in a particular manner.
The reverse denoising model is a neural network model obtained by training with a training data set. The training data set comprises a plurality of groups of sample data carrying labels, where the sample data are noise feature maps obtained by adding noise, using the forward noise adding model, to live-action images taken by the bronchoscope at different preset positions in the bronchus of a patient, and the labels are the live-action depth maps corresponding to the sample data.
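For illustration only, a compact sketch of a possible reverse denoising model and one sampling step of formula (4) is given below; the architecture (a small convolutional network with a residual block and a learned time-step embedding, operating on single-channel latents) is one plausible instantiation, not the trained model of the invention:

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Small convolutional denoiser mu_theta(z_t, c, t): predicts the mean of
    p(z_{t-1} | z_t, c) from the noisy depth latent z_t and the condition c."""
    def __init__(self, channels=64, num_steps=1000):
        super().__init__()
        self.time_embed = nn.Embedding(num_steps, channels)  # time step t as extra feature
        self.inp = nn.Conv2d(2, channels, 3, padding=1)      # concat of z_t and condition c
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU())
        self.out = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, z_t, c, t):
        h = self.inp(torch.cat([z_t, c], dim=1))
        h = h + self.time_embed(t)[:, :, None, None]  # inject the time step
        h = h + self.body(h)                          # residual block
        return self.out(h)

@torch.no_grad()
def reverse_denoise_step(model, z_t, c, t, sigma_t):
    """One step of formula (4): z_{t-1} ~ N(mu_theta(z_t, c, t), sigma_t^2 I)."""
    mean = model(z_t, c, torch.tensor([t]))
    return mean + sigma_t * torch.randn_like(z_t)
```

Iterating `reverse_denoise_step` from the final time step down to t = 1 gradually converts the noise feature map into the live-action depth map.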
Fig. 3 is a flow chart of a method for obtaining the current pose of a bronchoscope, as shown in fig. 3, provided by the invention, the method comprises the following steps:
Step 301, using a projection function, according to a first initial estimated pose of the bronchoscope, projecting each data point of the point cloud onto a two-dimensional plane corresponding to the live-action image, and obtaining a first projection point corresponding to each data point.
In a specific implementation process, the first initial estimated pose may be set according to experience or experimental results, and is not limited by the expression of the present specification.
The projection function is used to project points in three-dimensional space onto a two-dimensional image plane. The implementation of the projection function depends on the camera's internal parameters (e.g., focal length, optical center position, radial and tangential distortion parameters, etc.) and the camera's external parameters (i.e., pose, including rotation and translation).
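A minimal sketch of such a projection function, assuming a pinhole model without distortion and a pose given as a rotation matrix R and translation t, is:

```python
import numpy as np

def project(points, R, t, fx, fy, cx=0.0, cy=0.0):
    """Project 3-D points onto the two-dimensional image plane.

    points -- (N, 3) array of point cloud data points
    R, t   -- camera extrinsics (the pose): 3x3 rotation and length-3 translation
    fx, fy -- focal lengths; cx, cy -- optical center (the intrinsics)
    """
    cam = points @ R.T + t                 # transform into the camera frame
    x = fx * cam[:, 0] / cam[:, 2] + cx    # perspective division
    y = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([x, y], axis=-1)       # (N, 2) projected pixel coordinates
```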
Step 302, obtaining a first re-projection error by using a loss function according to a first distance between the first projection point corresponding to each data point of the point cloud and the pixel point of the live-action image corresponding to that data point.
The loss function may include, but is not limited to, a mean square error loss function, a Huber loss function, and the like.
In a specific implementation process, the first distance between the first projection point corresponding to each data point of the point cloud and the pixel point of the live-action image corresponding to that data point can be substituted into the loss function, so as to obtain the first re-projection error.
Step 303, iteratively adjusting the first initial estimated pose of the bronchoscope by using a preset algorithm to minimize the first re-projection error until an iteration termination condition is reached.
In a specific implementation, the first initial estimated pose of the bronchoscope may be iteratively adjusted using a variety of preset algorithms, such as the PnP (Perspective-n-Point) algorithm or the ICP (Iterative Closest Point) algorithm.
The iteration termination condition may include the first re-projection error reaching a minimum, the number of iterations reaching a preset threshold, or the like.
The above steps 301 to 303 may be expressed as follows.
$$P^{*} = \arg\min_{P} \sum_{j} \rho\left( \left\| \pi(P, X_j) - p_j^{k} \right\| \right) \qquad (5)$$

where $\rho$ represents the Huber loss function; $\left(\pi(P, X_j),\, p_j^{k}\right)$ is a matching pair formed by the first projection point corresponding to the j-th data point of the point cloud and the pixel point of the live-action image corresponding to that data point; $p_j^{k}$ represents the j-th pixel point of the k-th frame live-action image; $\pi(P, X_j)$ is the projection function that projects the j-th data point $X_j$ of the point cloud onto the two-dimensional plane corresponding to the k-th frame live-action image using the camera estimated pose P; and $\arg\min_{P}$ indicates that the parameter P minimizing the function that follows it is sought.
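As an illustration of steps 301 to 303 and formula (5), the sketch below iteratively adjusts a 6-DoF pose (axis-angle rotation plus translation) so as to reduce the Huber-robustified re-projection error; it reuses the `project` helper sketched above, the matching between data points and pixel points is assumed given, and scipy's robust least-squares solver stands in for the preset iterative algorithm:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def estimate_pose(points, pixels, fx, fy, pose0):
    """Iteratively adjust the estimated pose to minimize the first
    re-projection error of formula (5).

    points -- (N, 3) point cloud data points
    pixels -- (N, 2) matched pixel points of the live-action image
    pose0  -- first initial estimated pose as [rx, ry, rz, tx, ty, tz]
    """
    def residuals(pose):
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        proj = project(points, R, pose[3:], fx, fy)  # first projection points
        return (proj - pixels).ravel()               # first distances, stacked

    # loss="huber" applies a Huber-type robust loss to the residuals.
    result = least_squares(residuals, pose0, loss="huber", f_scale=1.0)
    return result.x  # pose when the first re-projection error is minimized
```

The returned pose then plays the role of the pose used in step 304.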
Step 304, obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized.
In some embodiments, the pose of the bronchoscope when the first re-projection error is minimized may be taken as the current pose of the bronchoscope.
In some embodiments, in order to obtain a more accurate estimate of the current pose of the bronchoscope, so that the navigation process is better suited to the bronchoscopy scene, on the basis of the pose of the bronchoscope obtained when the first re-projection error is minimized, only part of the data points in the point cloud in front of the bronchoscope are selected, through preset refinement estimation conditions, to carry out refined estimation of the pose of the bronchoscope. The specific steps are as follows.
And determining a second initial estimated pose of the bronchoscope according to the pose of the bronchoscope when the first re-projection error is minimized.
In the implementation process, the pose of the bronchoscope when the first re-projection error is minimized can be used as the second initial estimated pose, and the estimated pose of the bronchoscope is adjusted from the second initial estimated pose.
And projecting each data point of the point cloud onto a two-dimensional plane corresponding to the live-action image according to the second initial estimated pose by using a projection function, and obtaining a second projection point corresponding to each data point.
And selecting a plurality of groups of projection data pairs meeting preset refinement estimation conditions for the projection data pairs corresponding to all the data points of the point cloud. The projection data pair consists of a second projection point corresponding to one data point and a pixel point in the live-action image corresponding to the second projection point.
The preset refinement estimation conditions include that the pixel point of each projection data pair is located within a preset range of the second projection point of that projection data pair. For example, for a data point $X_b$ of the point cloud, the corresponding point is selected only from the pixel points within the preset range $\left\| p - \pi(P, X_b) \right\| < \varepsilon$ of the second projection point on the two-dimensional plane corresponding to the live-action image, where $\varepsilon$ is a preset distance threshold.
The preset refinement estimation conditions further include that, for the second projection point of each projection data pair, the distance between the second projection point and the third projection point of the corresponding data point on the live-action image in the bronchus of the target patient taken by the bronchoscope at the previous position is smaller than a preset distance threshold. This constraint is formulated as follows.
$$\left\| \pi(P, X_b) - \pi(P', X_b) \right\| < \varepsilon_d \qquad (6)$$

where $\varepsilon_d$ is the preset distance threshold; $P'$ is the estimated pose of the bronchoscope at the previous position; $\pi(P, X_b)$ is the second projection point corresponding to the data point $X_b$; and $\pi(P', X_b)$ is the third projection point corresponding to the data point $X_b$.
Through the formula (6), the data points involved in the refined estimation of the bronchoscope pose are constrained to meet the rule that the camera moves continuously and slowly between adjacent frames. Therefore, the deviation of the pose estimation of the bronchoscope caused by abnormal data points in the point cloud can be avoided, and the precision of the pose estimation of the bronchoscope in the bronchoscope scene with sparse textures is effectively improved.
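The two preset refinement estimation conditions can be sketched as a simple filtering step over the projection data pairs, again reusing the `project` helper above; the threshold values are illustrative assumptions:

```python
import numpy as np

def select_refinement_pairs(points, pixels, pose_now, pose_prev,
                            fx, fy, eps_pixel=5.0, eps_adjacent=10.0):
    """Keep only projection data pairs that satisfy the preset-range condition
    and the adjacent-frame constraint of formula (6)."""
    R_now, t_now = pose_now      # second initial estimated pose (R, t)
    R_prev, t_prev = pose_prev   # estimated pose at the previous position

    second = project(points, R_now, t_now, fx, fy)    # second projection points
    third = project(points, R_prev, t_prev, fx, fy)   # third projection points

    in_range = np.linalg.norm(second - pixels, axis=1) < eps_pixel
    slow_motion = np.linalg.norm(second - third, axis=1) < eps_adjacent
    keep = in_range & slow_motion                     # both conditions must hold
    return points[keep], pixels[keep]
```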
And obtaining a second re-projection error by using the loss function according to a second distance between the second projection point and the pixel point of each group of projection data pairs.
And iteratively adjusting the second initial estimated pose of the bronchoscope by using a preset algorithm to minimize the second re-projection error until an iteration termination condition is reached.
And obtaining the current pose of the bronchoscope according to the pose of the bronchoscope when the second re-projection error is minimized.
The bronchoscope automatic navigation device provided by the invention is described below, and the bronchoscope automatic navigation device described below and the bronchoscope automatic navigation method described above can be correspondingly referred to each other.
Fig. 4 is a schematic structural diagram of the bronchoscope automatic navigation device provided by the invention. As shown in fig. 4, the apparatus 400 includes the following modules.
A first obtaining module 410, configured to obtain, in response to initiation of a bronchoscopy operation, an established virtual bronchoscope model of a target patient, where the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient.
And a second obtaining module 420, configured to obtain a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest in a bronchus of the target patient according to the navigation path, where the navigation path uses an upper portion of a main bronchus of the target patient as a starting point and the region of interest is a target point.
And a third obtaining module 430, configured to, in response to receiving a live-action image of the bronchoscope in the bronchus of the target patient photographed at the current position, extract depth information from the live-action image, and obtain a live-action depth map corresponding to the current position.
And a fourth obtaining module 440, configured to extract point cloud information from the live-action depth map, and obtain the point cloud of the current position.
And a fifth obtaining module 450, configured to obtain, according to the live-action depth map and the point cloud of the current position, a current pose of the bronchoscope by using a projection function, so that the operator adjusts a movement path of the bronchoscope at a next moment based on the current pose of the bronchoscope.
Fig. 5 illustrates a physical schematic diagram of an electronic device. As shown in fig. 5, the electronic device may include a processor (processor) 510, a communication interface (Communications Interface) 520, a memory (memory) 530 and a communication bus 540, where the processor 510, the communication interface 520 and the memory 530 communicate with each other through the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to execute a bronchoscope automatic navigation method, the method comprising: in response to initiation of a bronchoscopy operation, obtaining an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest in the bronchus of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main bronchus of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image taken by the bronchoscope at a current position in the bronchus of the target patient, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining a current pose of the bronchoscope by using a projection function according to the live-action image and the point cloud of the current position, so that the operator adjusts the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
Further, the logic instructions in the memory 530 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program code.
In another aspect, the invention further provides a computer program product. The computer program product comprises a computer program, and the computer program can be stored on a non-transitory computer readable storage medium. When the computer program is executed by a processor, the computer can execute the bronchoscope automatic navigation method provided by the above methods, the method comprising: in response to initiation of a bronchoscopy operation, obtaining an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest in the bronchus of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main bronchus of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image taken by the bronchoscope at a current position in the bronchus of the target patient, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining a current pose of the bronchoscope by using a projection function according to the live-action image and the point cloud of the current position, so that the operator adjusts the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
In still another aspect, the invention further provides a non-transitory computer readable storage medium, on which a computer program is stored. The computer program, when executed by a processor, implements the bronchoscope automatic navigation method provided by the above methods, the method comprising: in response to initiation of a bronchoscopy operation, obtaining an established virtual bronchoscope model of a target patient, wherein the virtual bronchoscope model is a three-dimensional structural model of a bronchial tree established according to a lung airway diagram of the target patient; obtaining a navigation path of the bronchoscopy according to the virtual bronchoscope model, so that an operator drives the bronchoscope to move toward a region of interest in the bronchus of the target patient according to the navigation path, wherein the navigation path takes the upper part of the main bronchus of the target patient as a starting point and the region of interest as a target point; in response to receiving a live-action image taken by the bronchoscope at a current position in the bronchus of the target patient, extracting depth information from the live-action image to obtain a live-action depth map corresponding to the current position; extracting point cloud information from the live-action depth map to obtain a point cloud of the current position; and obtaining a current pose of the bronchoscope by using a projection function according to the live-action image and the point cloud of the current position, so that the operator adjusts the moving path of the bronchoscope at the next moment based on the current pose of the bronchoscope.
The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention, and not for limiting the same, and although the present invention has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the spirit and scope of the technical solution of the embodiments of the present invention.