
CN110033465A - Real-time three-dimensional reconstruction method applied to binocular endoscopic medical images - Google Patents

Real-time three-dimensional reconstruction method applied to binocular endoscopic medical images

Info

Publication number
CN110033465A
CN110033465A
Authority
CN
China
Prior art keywords
dimensional
point
image
camera
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910315817.0A
Other languages
Chinese (zh)
Other versions
CN110033465B (en)
Inventor
宋丽梅
尤阳
郭庆华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiangong University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University filed Critical Tianjin Polytechnic University
Priority to CN201910315817.0A priority Critical patent/CN110033465B/en
Publication of CN110033465A publication Critical patent/CN110033465A/en
Application granted granted Critical
Publication of CN110033465B publication Critical patent/CN110033465B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a real-time three-dimensional reconstruction method applied to binocular endoscopic medical images. The method first segments regions of complex color through superpixel segmentation, and the boundaries between the regions form the three-dimensional skeleton of the organ. Then, following the epipolar constraint, for the contour information captured in the left view, the corresponding epipolar lines in the right view are searched in turn. To obtain accurate matching point pairs, the search is combined with the ORB feature descriptor to quickly and accurately locate the intersection points of the corresponding regions in the two cameras, and the three-dimensional data of the boundary skeleton are computed from the corresponding positions. Finally, the SFS method is applied inside the segmented sub-regions to obtain the relative coordinate relationships; combined with the gradient differences between colors, the three-dimensional coordinate information of each region is deduced and the complete three-dimensional topographic coordinates of the organs in the scene are obtained. The invention solves the difficult problem of high-precision three-dimensional reconstruction with an endoscope and, compared with existing three-dimensional reconstruction methods, is simple to operate, highly reliable, lowers surgical risk, and reduces the patient's pain.

Description

A real-time three-dimensional reconstruction method applied to binocular endoscopic medical images

Technical Field

The present invention relates to a real-time three-dimensional reconstruction method applied to binocular endoscopic medical images, and more specifically to a method for presenting accurate three-dimensional topographic coordinates of the organs shown in endoscopic images.

Background Art

Today, about 17.5 million people worldwide die of heart disease every year, accounting for 30% of all deaths. The number of cardiovascular patients in China has reached 290 million, and the associated mortality rate is far higher than that of other diseases, which shows how strongly the disease affects public health. Traditional surgery requires opening the thoracic cavity and sawing through the sternum, which severely impairs the patient's respiratory function. Because the sternal incision is under high tension, postoperative recovery is very difficult for patients in poor physical condition.

Minimally invasive surgery not only reduces surgical risk but also reduces the pain of treatment. The endoscope is an important signal-acquisition device for minimally invasive surgery: instead of opening the chest, the surgeon only needs to make three small holes in the chest wall, through which a thoracoscopic imaging device, an ultrasonic scalpel, and a surgical waste suction device are inserted. After the operation the skin wounds heal on their own, which greatly reduces the patient's trauma and pain and shortens postoperative recovery time.

Because a traditional two-dimensional endoscope cannot give the surgeon an intuitive and accurate sense of three-dimensional position, only physicians with long-term training can use it skillfully for operations on critical structures. Existing two-dimensional endoscopes still carry the following risks during use:

(1) Two-dimensional endoscopes lack image depth, so the surgeon may visually misjudge important anatomical structures and their relative positions during surgery. Moreover, without depth cues the surgeon cannot accurately judge how deep to cut, making accidental bleeding due to operating errors very likely.

(2) Two-dimensional endoscopic images suffer from large distortion, and human tissue structure is very complex; this affects the fluency and progress of the operation and prolongs the operating time.

The three-dimensional manipulation of the human body enabled by three-dimensional endoscopes has brought revolutionary change to the medical field, and minimally invasive surgery using 3D technology will become the mainstream. The use of a three-dimensional endoscope greatly reduces the patient's intraoperative pain and shortens postoperative healing time. If three-dimensional data of key structures can be obtained at the same time as the three-dimensional images, the operating time can be greatly shortened and the surgical risk reduced. The real-time three-dimensional reconstruction method for endoscopic medical images proposed by the present invention is intended to solve the above problems.

The present invention designs a fast three-dimensional reconstruction method that fuses a three-dimensional skeleton generated by superpixel segmentation with SFS (Shape From Shading). Because internal organs differ in structure and their color shading is distributed unevenly, a single reconstruction method has difficulty recovering the overall three-dimensional topography. The invention first segments the regions of complex color using superpixel segmentation; the boundaries between regions form the three-dimensional skeleton of the organ. Then, following the epipolar constraint, for the contour information captured in the left view, the corresponding epipolar lines in the right view are searched in turn. To obtain accurate left-right matching point pairs, the search is combined with the ORB (Oriented FAST and Rotated BRIEF) feature descriptor to quickly and accurately locate the intersection points of the corresponding regions in the left and right cameras, and the three-dimensional data of the boundary skeleton are computed from the corresponding positions in the two cameras. Finally, the three-dimensional skeleton is fused with SFS. The reconstruction accuracy of existing SFS algorithms depends heavily on the light-source model assumed for imaging; they give good three-dimensional results only in regions of uniform color, while in regions of differing color the three-dimensional data deviate considerably. The present invention applies SFS inside each sub-region produced by superpixel segmentation to obtain the relative coordinate relationships within the region; taking the generated three-dimensional skeleton coordinates as a reference and combining the gradient differences between colors, the three-dimensional coordinate information of each region is deduced in turn, and the complete three-dimensional topographic coordinates of the organs in the scene are obtained. This achieves real-time three-dimensional reconstruction of the surgical scene through the endoscope and provides the surgeon with accurate and effective navigation information.

Summary of the Invention

The present invention designs a real-time three-dimensional reconstruction method applied to endoscopic medical images. The method can be used in operations performed with a three-dimensional endoscope: the three-dimensional coordinates of key structures are obtained at the same time as the three-dimensional images, which can greatly shorten the operating time and reduce the surgical risk.

The hardware for real-time three-dimensional reconstruction of endoscopic medical images includes:

one LED cold light source;

two rigid optical endoscopes;

a calibration platform for establishing a high-precision coordinate datum;

two 1200*1600 industrial color cameras for image acquisition;

a computer for precision control, image acquisition, and data processing;

a scanning platform on which the light source and the cameras are mounted.

The real-time three-dimensional reconstruction method for endoscopic medical images designed by the present invention comprises the following specific steps:

Step 1: Calibrate the binocular camera pair. Let the coordinate system of the left camera A be O_aX_aY_aZ_a and the coordinate system of the right camera B be O_bX_bY_bZ_b; the rotation matrix between the two cameras is R and the translation matrix is T. The calibration relation is given by formula (1), where r_11-r_33 are the components of the rotation matrix of the right camera B relative to the left camera A, and t_x, t_y, t_z are the components of the translation matrix of the right camera B relative to the left camera A;
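
The body of formula (1) was not reproduced in the extracted text; the following rendering is only a sketch of the usual rigid-body relation between the two camera frames, assuming the notation defined above (R with components r_11-r_33, T with components t_x, t_y, t_z):

```latex
% Plausible form of formula (1): a point expressed in the right-camera frame
% from its coordinates in the left-camera frame (assumed standard relation).
\begin{bmatrix} X_b \\ Y_b \\ Z_b \end{bmatrix}
= R \begin{bmatrix} X_a \\ Y_a \\ Z_a \end{bmatrix} + T,
\qquad
R = \begin{bmatrix}
      r_{11} & r_{12} & r_{13} \\
      r_{21} & r_{22} & r_{23} \\
      r_{31} & r_{32} & r_{33}
    \end{bmatrix},
\quad
T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}
```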

Step 2: Insert the probe lenses of the binocular endoscope into the patient's body to acquire images of the organ surface, and denoise and smooth the acquired organ-surface images with a median filter, preserving the detail information of the images;
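
As an illustration of this preprocessing step (not the patented implementation), a minimal median-filtering sketch with OpenCV might look as follows; the file names and the 5x5 kernel size are assumptions:

```python
import cv2

# Load the left and right endoscope frames (file names are placeholders).
left = cv2.imread("left_frame.png")
right = cv2.imread("right_frame.png")

# Median filtering suppresses impulse noise while largely preserving edges,
# which matches the goal of protecting image detail stated above.
left_smooth = cv2.medianBlur(left, 5)
right_smooth = cv2.medianBlur(right, 5)
```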

Step 3: Segment the organ-surface image obtained in Step 2 with the SLIC (Simple Linear Iterative Clustering) superpixel segmentation method. The organ-surface image is first subdivided into multiple sub-regions according to the color, brightness, and texture features of neighboring pixels; each sub-region image is then converted from the RGB color space to the CIE-Lab color space. Seed points are distributed evenly over the image according to the chosen number of superpixels, and within the neighborhood of each seed point the three-dimensional color information and the two-dimensional spatial position are used to compute the distance from every searched pixel to that seed point, so that pixels are clustered. The size of the segmented regions is controlled by the target number of superpixel regions, and finally iterative optimization and connectivity enhancement are performed to obtain the segmented organ-surface image. The distance is computed as in formula (2), where d_c is the color distance, d_s the spatial distance, N_s the maximum spatial distance within a class, and N_c the maximum color distance;
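
Formula (2) itself was not reproduced; in the standard SLIC formulation, which the symbols d_c, d_s, N_s, and N_c above appear to follow, the color distance, spatial distance, and combined distance between a pixel j and a cluster center i would be (sketch under that assumption):

```latex
d_c = \sqrt{(l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2}, \qquad
d_s = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}, \qquad
D = \sqrt{\left(\frac{d_c}{N_c}\right)^{2} + \left(\frac{d_s}{N_s}\right)^{2}}
```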

Step 4: For the organ-surface image segmented in Step 3, and according to the epipolar matching principle, select points on the segmentation boundaries of the image acquired by the left camera and determine, on the segmented image acquired by the right camera, the corresponding epipolar lines and their intersections with the segmentation boundaries, yielding accurate left-right matching point pairs. These are then combined with the ORB (Oriented FAST and Rotated BRIEF) feature descriptor to locate the intersection points of the corresponding regions in the left and right cameras, and the intersection with the highest matching score is chosen as the matching point. Let the slope at the point selected in the left camera be k_a; the slope k_b at the corresponding matching point is obtained from formula (3), where k_a is the slope at a point P in the image skeleton acquired by the left camera and k_b is the slope at the point corresponding to P in the image skeleton acquired by the right camera. Repeating this step yields the three-dimensional coordinates of the entire skeleton, which constitute the three-dimensional coordinates of the boundary skeleton; the three-dimensional coordinate information of each sub-region boundary is recorded;
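
Formula (3), which relates the two slopes through R and T, is not reproduced in the extracted text and is not reconstructed here. For reference, the epipolar search that this step relies on follows the standard two-view relation through the fundamental matrix F mentioned later in the description: a left-image point p_a and its match p_b satisfy (sketch of the standard relation, not the patent's formula (3)):

```latex
p_b^{\top} F \, p_a = 0, \qquad l_b = F \, p_a
```

where l_b is the epipolar line in the right image along which the matching boundary intersections are sought.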

Step 5: Inside each superpixel sub-region obtained in Step 4, three-dimensional reconstruction is first carried out with SFS (Shape From Shading). The linear method is chosen for three-dimensional modeling: the surface gradients p and q are approximated discretely with the finite-difference method, linearization is then performed along the height Z(x, y) according to formula (4), and finally the relative coordinate relationships of the local points are obtained;

Step 6: Fuse the coordinate relationships obtained in Step 5 with the boundary-skeleton coordinate information obtained in Step 4 and compute the global three-dimensional coordinates of the organ surface; the computation is then complete.

The beneficial effects of the present invention are as follows. The proposed method of generating a three-dimensional skeleton by fusing superpixel segmentation with ORB under the epipolar constraint, together with the fast three-dimensional reconstruction method that fuses the three-dimensional skeleton with SFS, both reduces the time needed for feature-point matching in three-dimensional reconstruction and improves the number and accuracy of matched feature points. Taking the generated three-dimensional skeleton coordinates as a reference and combining the gradient differences between colors, the three-dimensional coordinate information of each region can be deduced in turn, and the complete three-dimensional topographic coordinates of the organs in the scene can then be obtained.

Brief Description of the Drawings

Figure 1: flow chart of the three-dimensional reconstruction method;

Figure 2: comparison before and after SLIC superpixel segmentation;

(a) original image before segmentation;

(b) image after segmentation;

(c) boundary three-dimensional skeleton generated after segmentation;

Figure 3: image of each sub-region after SFS processing;

(a) image of the whole region;

(b) image of the boundary region;

(c) image of the region without boundaries.

Detailed Description of the Embodiments

The real-time three-dimensional reconstruction method for endoscopic medical images designed by the present invention is shown in Figure 1; the specific operations are as follows:

Build the binocular endoscope and calibrate the binocular camera pair. Let the coordinate system of the left camera A be O_aX_aY_aZ_a and the coordinate system of the right camera B be O_bX_bY_bZ_b; the rotation matrix between the two cameras is R and the translation matrix is T. The calibration relation is given by formula (5), where r_11-r_33 are the components of the rotation matrix of the right camera B relative to the left camera A, and t_x, t_y, t_z are the components of the translation matrix of the right camera B relative to the left camera A.

Insert the probe lenses of the binocular endoscope into the patient's body to acquire images of the organ surface, and denoise and smooth the acquired organ-surface images with median filtering to preserve the detail information of the images.

Segment the organ-surface image with the SLIC superpixel segmentation method; the images before and after superpixel segmentation are shown in Figures 2(a) and 2(b). The procedure is as follows, and a minimal sketch using an off-the-shelf SLIC implementation is given after the list:

(1) Initialize the seed points. According to the chosen number of superpixels, seed points are distributed evenly over the image. If the image has N pixels in total and is pre-segmented into K superpixels of equal size, each superpixel block contains N/K pixels and the spacing between adjacent seed points is approximately S = sqrt(N/K);

(2) Reselect each seed point within its n×n neighborhood. Specifically, compute the gradient values of all pixels in the neighborhood and move the seed point to the position with the smallest gradient;

(3) Assign a class label to every pixel in the neighborhood of each seed point. The SLIC search range is limited to 2S×2S, which speeds up convergence of the algorithm;

(4) Distance measure, consisting of a color distance and a spatial distance. For every searched pixel, compute its distance to the seed point. The distances are computed as in formulas (6) and (7), where d_c is the color distance, d_s the spatial distance, and N_s the maximum spatial distance within a class, defined as N_s = S = sqrt(N/K) and applied to every cluster. The maximum color distance N_c is replaced by a fixed constant m, taken here as 10. The final distance measure is given by formula (8). Because every pixel is searched by several seed points, each pixel has a distance to each of the surrounding seed points, and the seed point with the smallest distance is taken as that pixel's cluster center;

(6) Connectivity enhancement. The defects left by the iterative optimization above (multiple connected components, superpixels that are too small, and single superpixels cut into several disconnected pieces) can be resolved by enhancing connectivity.
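
A minimal sketch of this segmentation stage using an off-the-shelf SLIC implementation (scikit-image) is given below; the number of superpixels, the compactness value (playing the role of m), and the file name are assumptions, and the boundary mask stands in for the pixels of the boundary skeleton whose depth is recovered later:

```python
import numpy as np
from skimage import io, segmentation

img = io.imread("left_frame.png")  # placeholder file name

# SLIC clusters pixels in the combined CIE-Lab + (x, y) space, in the spirit
# of formulas (6)-(8); n_segments corresponds to K and compactness to m.
labels = segmentation.slic(img, n_segments=300, compactness=10,
                           start_label=1, convert2lab=True)

# Boundary pixels between superpixels form the 2D trace of the skeleton.
boundary_mask = segmentation.find_boundaries(labels, mode="outer")
skeleton_pixels = np.argwhere(boundary_mask)  # (row, col) coordinates
```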

In binocular stereo-vision measurement, stereo matching is the key technique, and the epipolar constraint plays an important role. The two camera centers C_0 and C_1 observing a scene point trace the line connecting the three-dimensional space point X to a camera center, giving the image point p of X in one image. Conversely, from p the corresponding point q in the other image can be found: searching along this line in the other image plane traces out a virtual straight line L, called the epipolar line of point p. One endpoint of the epipolar line is bounded by the projection of the point at infinity on the original viewing ray, and the other by the projection of the first camera center onto the second image plane, i.e. the epipole e. The fundamental matrix F maps a two-dimensional image point p in one view to the epipolar line in the other view. Let the slope at the point selected in the left camera be k_a; the slope k_b at the corresponding matching point is obtained from formula (9), where k_a is the slope at a point P in the image skeleton acquired by the left camera, k_b is the slope at the point corresponding to P in the image acquired by the right camera, r_11-r_33 are the components of the rotation matrix of the right camera B relative to the left camera A, and t_x, t_y, t_z are the components of the translation matrix of the right camera B relative to the left camera A.

When processing organ-surface images with few distinctive features, combining the epipolar constraint with ORB matching effectively compensates for the matching errors of ORB matching alone and improves matching accuracy, bringing out the advantages of the combined algorithm. Feature points are extracted with ORB and then matched and filtered: the position of a region intersection point is extracted in the left camera; its epipolar line in the other camera is obtained from the camera calibration parameters; the ORB features of all region boundary points intersected by that epipolar line are computed and compared for similarity with the ORB feature of the point to be matched in the left camera; and the point with the highest similarity in the right camera is taken as the match. Repeating this step yields the three-dimensional coordinates of the entire skeleton, which constitute the three-dimensional coordinates of the boundary skeleton; the three-dimensional coordinate information of each sub-region boundary is recorded. The boundary three-dimensional skeleton is shown in Figure 2(c).
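
The sketch below illustrates this combination of the epipolar constraint with ORB descriptors; it assumes the fundamental matrix F is already known from the stereo calibration and that boundary_pts_right holds the (x, y) intersections of segmentation boundaries in the right image. All names are placeholders, and the Hamming-distance score is a stand-in for the patent's exact matching criterion:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=2000)

def match_boundary_point(pt_left, img_left, img_right, F, boundary_pts_right):
    """Return the right-image boundary point whose ORB descriptor is most
    similar to that of pt_left, restricted to candidates lying close to the
    epipolar line of pt_left (illustrative sketch, not the patented code)."""
    # Epipolar line (a, b, c) of pt_left in the right image; OpenCV returns
    # it normalized so that a^2 + b^2 = 1.
    a, b, c = cv2.computeCorrespondEpilines(
        np.array([[pt_left]], dtype=np.float32), 1, F).reshape(3)

    # Keep only boundary candidates within ~2 px of the epipolar line.
    candidates = [(x, y) for x, y in boundary_pts_right
                  if abs(a * x + b * y + c) < 2.0]
    if not candidates:
        return None

    # ORB descriptors at the query point and at every surviving candidate.
    kp_left, desc_left = orb.compute(
        img_left, [cv2.KeyPoint(float(pt_left[0]), float(pt_left[1]), 31)])
    kp_right, desc_right = orb.compute(
        img_right, [cv2.KeyPoint(float(x), float(y), 31) for x, y in candidates])
    if desc_left is None or desc_right is None:
        return None

    # Smallest Hamming distance = highest similarity.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).match(desc_left, desc_right)
    best = min(matches, key=lambda m: m.distance)
    return kp_right[best.trainIdx].pt
```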

The ORB feature detection procedure is:

(1) Select a pixel p in the image and let its intensity be I_p;

(2) Set a threshold T equal to 20% of I_p;

(3) Take the 16 pixels lying on a circle of radius 3 pixels centered on p;

(4) If 12 consecutive points on that circle are all brighter than I_p + T or all darker than I_p - T, p is regarded as a feature point.
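
A literal sketch of the circle test in steps (1)-(4), using the standard 16-point Bresenham circle of radius 3; the relative threshold of 20% of I_p follows the text above, while the helper name and the absence of image-border handling are simplifications of this illustration:

```python
import numpy as np

# Offsets (dx, dy) of the 16 pixels on a Bresenham circle of radius 3.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(gray, x, y, ratio=0.2, arc_len=12):
    """True if at least arc_len contiguous circle pixels are all brighter than
    I_p + T or all darker than I_p - T, with T = ratio * I_p (illustrative)."""
    i_p = float(gray[y, x])
    t = ratio * i_p
    vals = np.array([float(gray[y + dy, x + dx]) for dx, dy in CIRCLE])
    for mask in (vals > i_p + t, vals < i_p - t):
        run = 0
        for v in np.concatenate([mask, mask]):  # doubling handles wrap-around
            run = run + 1 if v else 0
            if run >= arc_len:
                return True
    return False
```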

A three-dimensional imaging method fusing the three-dimensional skeleton with SFS is used. SFS is a fast and effective three-dimensional reconstruction method, but its current reconstruction accuracy depends heavily on the light-source model assumed for imaging, and because internal organs differ in structure and in the distribution of their color shading, using a single light-source model easily biases the recovered three-dimensional data. The regions produced by superpixel segmentation reduce the influence of a complex colored background, so performing SFS reconstruction separately on each segmented region improves reconstruction accuracy. To unify the coordinates of the different regions, their three-dimensional coordinates are fused with the three-dimensional skeleton as the reference, yielding an accurate three-dimensional point cloud of objects with complex color variation.

The linear method is chosen for three-dimensional modeling. The SFS used in this application approximates the surface gradients p and q discretely with the finite-difference method and then linearizes along the height direction Z; this approach is fast and applies to any reflectance function. The discrete approximations of p and q are given in formulas (10) and (11):
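
Formulas (10) and (11) were not reproduced in the extracted text; with backward finite differences, the usual choice in the linear (Tsai-Shah style) scheme, the discrete approximations would read (sketch under that assumption):

```latex
p \approx Z(x, y) - Z(x-1, y), \qquad
q \approx Z(x, y) - Z(x, y-1)
```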

For a pixel (x, y) and the gray level E(x, y) of the given image, the linear approximation of the function f in formula (12) with respect to the height map Z^(n-1) can be expanded as a Taylor series and then solved by Jacobi iteration; after simplification this gives:
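
Neither formula (12) nor the simplified expansion was reproduced in the extracted text. In the standard shape-from-shading formulation that this paragraph describes, formula (12) would be the image-irradiance residual between the observed gray level E and the reflectance map R evaluated at the discrete gradients, and its first-order Taylor expansion about the previous height estimate would be (sketch under that assumption):

```latex
f\bigl(Z(x,y)\bigr) = E(x,y) - R\bigl(p(x,y),\, q(x,y)\bigr) = 0,
\qquad
0 \approx f\bigl(Z^{n-1}(x,y)\bigr)
  + \bigl(Z(x,y) - Z^{n-1}(x,y)\bigr)\,
    \frac{\partial f}{\partial Z(x,y)}\bigl(Z^{n-1}(x,y)\bigr)
```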

Then, with Z(x, y) = Z^n(x, y), the height map at the n-th iteration can be solved directly from formula (13):
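
Formula (13) itself was not reproduced; the update consistent with the Taylor expansion and Jacobi iteration described above is the Newton-style step (sketch under that assumption):

```latex
Z^{n}(x,y) = Z^{n-1}(x,y)
  - \frac{f\bigl(Z^{n-1}(x,y)\bigr)}
         {\dfrac{\partial f}{\partial Z(x,y)}\bigl(Z^{n-1}(x,y)\bigr)}
```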

Now, taking the initial estimate at every pixel as Z^0(x, y) = 0, the height Z is obtained by iterating formula (13). The image of each sub-region after SFS processing is shown in Figure 3.

The coordinate relationships obtained after SFS are fused with the boundary-skeleton coordinate information obtained after superpixel segmentation, and the global three-dimensional coordinates are computed. The computation is then complete.

The three main differences between the present invention and existing three-dimensional reconstruction methods are:

(1) A method for generating a three-dimensional skeleton by fusing superpixel segmentation with ORB under the epipolar constraint is proposed. The method segments the regions of complex color, and the boundaries between regions form the three-dimensional skeleton of the organ. This both reduces the feature-point matching time of three-dimensional reconstruction and improves the accuracy of feature-point matching.

(2) A fast three-dimensional imaging method fusing the three-dimensional skeleton with SFS is proposed. Taking the generated three-dimensional skeleton coordinates as a reference and combining the gradient differences between colors, the method deduces the three-dimensional coordinate information of each region in turn and then obtains the complete three-dimensional topographic coordinates of the organs in the scene.

(3) The invention solves the difficult problem of high-precision three-dimensional reconstruction with a three-dimensional endoscope and promotes the development and application of three-dimensional reconstruction in the medical and industrial fields. The method places low demands on the surgeon's experience, is simpler to operate, and is correspondingly more reliable; compared with existing conventional surgery it can reduce the patient's pain, shorten the operating time, and lower the surgical risk. In addition, this work provides a new algorithmic and theoretical basis for high-precision endoscopic three-dimensional reconstruction in other fields.

In summary, the advantages of the three-dimensional reconstruction method of the present invention are:

(1) accurate three-dimensional coordinate data can be obtained;

(2) the influence of a complex colored background is reduced, three-dimensional reconstruction of the different segmented regions is fast, and the reconstruction accuracy is greatly improved.

Three-dimensional medical endoscopic examination can not only shorten physician training time and reduce operating time, but also resolve key technical obstacles to the wider adoption of minimally invasive surgery in China. At the same time, the spread of three-dimensional endoscopic surgery built on this three-dimensional reconstruction method can drive the development of precision medicine and virtual-reality medical techniques and advance China's medical instrument industry.

The present invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the accompanying drawings is only one of the embodiments of the present invention. Therefore, if those of ordinary skill in the art, inspired by it and without departing from the purpose of the present invention, adopt other similar components or other component arrangements and devise, without inventive effort, technical solutions similar to this one, such solutions and embodiments shall all fall within the protection scope of the present invention.

Claims (1)

1. A real-time three-dimensional reconstruction method applied to binocular endoscopic medical images, characterized by comprising the following steps:
Step 1: Calibrate the binocular camera pair. Let the coordinate system of the left camera A be O_aX_aY_aZ_a and the coordinate system of the right camera B be O_bX_bY_bZ_b; the rotation matrix between the two cameras is R and the translation matrix is T. The calibration relation is given by formula (1), where r_11-r_33 are the components of the rotation matrix of the right camera B relative to the left camera A, and t_x, t_y, t_z are the components of the translation matrix of the right camera B relative to the left camera A;
Step 2: Insert the probe lenses of the binocular endoscope into the patient's body to acquire images of the organ surface, and denoise and smooth the acquired organ-surface images with a median filter, preserving the detail information of the images;
Step 3: Segment the organ-surface image obtained in Step 2 with the SLIC (Simple Linear Iterative Clustering) superpixel segmentation method. The organ-surface image is first subdivided into multiple sub-regions according to the color, brightness, and texture features of neighboring pixels; each sub-region image is then converted from the RGB color space to the CIE-Lab color space. Seed points are distributed evenly over the image according to the number of superpixels, and within the neighborhood of each seed point the three-dimensional color information and two-dimensional spatial position are used to compute the distance from every searched pixel to that seed point, so that pixels are clustered; the size of the segmented regions is controlled by the target number of superpixel regions, and finally iterative optimization and connectivity enhancement are performed to obtain the segmented organ-surface image. The distance is computed as in formula (2), where d_c is the color distance, d_s the spatial distance, N_s the maximum spatial distance within a class, and N_c the maximum color distance;
Step 4: For the organ-surface image segmented in Step 3, and according to the epipolar matching principle, select points on the segmentation boundaries of the image acquired by the left camera and determine, on the segmented image acquired by the right camera, the corresponding epipolar lines and their intersections with the segmentation boundaries, yielding accurate left-right matching point pairs; these are combined with the ORB (Oriented FAST and Rotated BRIEF) feature descriptor to locate the intersection points of the corresponding regions in the left and right cameras, and the intersection with the highest matching score is chosen as the matching point. Let the slope at the point selected in the left camera be k_a; the slope k_b at the corresponding matching point is obtained from formula (3), where k_a is the slope at a point P in the image skeleton acquired by the left camera and k_b is the slope at the point corresponding to P in the image skeleton acquired by the right camera. Repeating this step yields the three-dimensional coordinates of the entire skeleton, which constitute the three-dimensional coordinates of the boundary skeleton, and the three-dimensional coordinate information of each sub-region boundary is recorded;
Step 5: Inside each superpixel sub-region obtained in Step 4, three-dimensional reconstruction is first carried out with SFS (Shape From Shading). The linear method is chosen for three-dimensional modeling: the surface gradients p and q are approximated discretely with the finite-difference method, linearization is then performed along the height Z(x, y) according to formula (4), and finally the relative coordinate relationships of the local points are obtained;
Step 6: Fuse the coordinate relationships obtained in Step 5 with the boundary-skeleton coordinate information obtained in Step 4 and compute the global three-dimensional coordinates of the organ surface; the computation is then complete.
CN201910315817.0A 2019-04-18 2019-04-18 Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image Active CN110033465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910315817.0A CN110033465B (en) 2019-04-18 2019-04-18 Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910315817.0A CN110033465B (en) 2019-04-18 2019-04-18 Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image

Publications (2)

Publication Number Publication Date
CN110033465A 2019-07-19
CN110033465B (en) 2023-04-25

Family

ID=67239087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910315817.0A Active CN110033465B (en) 2019-04-18 2019-04-18 Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image

Country Status (1)

Country Link
CN (1) CN110033465B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587082A (en) * 2009-06-24 2009-11-25 天津工业大学 Quick three-dimensional reconstructing method applied for detecting fabric defect
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method based on
WO2018107427A1 (en) * 2016-12-15 2018-06-21 深圳大学 Rapid corresponding point matching method and device for phase-mapping assisted three-dimensional imaging system
CN107907048A (en) * 2017-06-30 2018-04-13 长沙湘计海盾科技有限公司 A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning
CN107767442A (en) * 2017-10-16 2018-03-06 浙江工业大学 A kind of foot type three-dimensional reconstruction and measuring method based on Kinect and binocular vision
CN108335350A (en) * 2018-02-06 2018-07-27 聊城大学 The three-dimensional rebuilding method of binocular stereo vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
宋丽梅;陈昌曼;张亮;董霄;: "Application of high-precision global phase unwrapping in multi-frequency three-dimensional measurement" *
董霄;宋丽梅;: "A color three-dimensional topography measurement method based on the phase method" *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807803A (en) * 2019-10-11 2020-02-18 北京文香信息技术有限公司 A camera positioning method, device, equipment and storage medium
CN110992431A (en) * 2019-12-16 2020-04-10 电子科技大学 A combined three-dimensional reconstruction method of binocular endoscopic soft tissue images
CN111080778A (en) * 2019-12-23 2020-04-28 电子科技大学 Online three-dimensional reconstruction method of binocular endoscope soft tissue image
CN111080778B (en) * 2019-12-23 2023-03-31 电子科技大学 Online three-dimensional reconstruction method of binocular endoscope soft tissue image
CN111598939A (en) * 2020-05-22 2020-08-28 中原工学院 A method of measuring human body circumference based on multi-eye vision system
CN111598939B (en) * 2020-05-22 2021-01-26 中原工学院 Human body circumference measuring method based on multi-vision system
WO2022127533A1 (en) * 2020-12-18 2022-06-23 安翰科技(武汉)股份有限公司 Capsule endoscope image three-dimensional reconstruction method, electronic device, and readable storage medium
US12183017B2 (en) 2020-12-18 2024-12-31 Ankon Technologies Co., Ltd. Capsule endoscope image three-dimensional reconstruction method, electronic device, and readable storage medium
CN113034387A (en) * 2021-03-05 2021-06-25 成都国科微电子有限公司 Image denoising method, device, equipment and medium
CN113034387B (en) * 2021-03-05 2023-07-14 成都国科微电子有限公司 Image denoising method, device, equipment and medium
CN113052956A (en) * 2021-03-19 2021-06-29 安翰科技(武汉)股份有限公司 Method, device and medium for constructing film reading model based on capsule endoscope
CN115245302A (en) * 2021-04-25 2022-10-28 河北医科大学第二医院 System and method for reconstructing three-dimensional scene based on endoscope image
CN114022547A (en) * 2021-09-15 2022-02-08 苏州中科华影健康科技有限公司 Endoscope image detection method, device, equipment and storage medium
CN114092647A (en) * 2021-11-19 2022-02-25 复旦大学 A 3D reconstruction system and method based on panoramic binocular stereo vision
CN114078139A (en) * 2021-11-25 2022-02-22 四川长虹电器股份有限公司 Image post-processing method based on portrait segmentation model generation result
CN114078139B (en) * 2021-11-25 2024-04-16 四川长虹电器股份有限公司 Image post-processing method based on human image segmentation model generation result
CN114531767A (en) * 2022-04-20 2022-05-24 深圳市宝润科技有限公司 Visual X-ray positioning method and system for handheld X-ray machine
US12220269B2 (en) 2022-04-20 2025-02-11 Shenzhen Browiner Tech Co., Ltd Method and system for visualizing X-ray positioning of handheld X-ray machine
CN115299914A (en) * 2022-08-05 2022-11-08 上海微觅医疗器械有限公司 Endoscope system, image processing method and device
WO2024109610A1 (en) * 2022-11-21 2024-05-30 杭州海康慧影科技有限公司 Endoscope system, and apparatus and method for measuring spacing between in-vivo tissue features
CN116153147A (en) * 2023-02-28 2023-05-23 中国人民解放军陆军特色医学中心 3D-VR binocular stereo vision image construction method and endoscopic operation teaching device
CN116797744A (en) * 2023-08-29 2023-09-22 武汉大势智慧科技有限公司 Multi-time-phase live-action three-dimensional model construction method, system and terminal equipment
CN117952854A (en) * 2024-02-02 2024-04-30 广东工业大学 A multi-spot denoising correction method and three-dimensional reconstruction method based on image conversion
CN118172343A (en) * 2024-03-29 2024-06-11 中国人民解放军空军军医大学 A method for characterizing tumor heterogeneity based on medical imaging
CN119832008A (en) * 2024-12-19 2025-04-15 青岛海洋地质研究所 Water body interference removal method for deep sea binocular camera terrain reconstruction

Also Published As

Publication number Publication date
CN110033465B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN110033465B (en) Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image
CN110010249B (en) Augmented reality surgical navigation method, system and electronic device based on video overlay
US10198872B2 (en) 3D reconstruction and registration of endoscopic data
US9498132B2 (en) Visualization of anatomical data by augmented reality
Schmalz et al. An endoscopic 3D scanner based on structured light
CN103325143B (en) Labelling point automatic registration method based on Model Matching
CN108784832A (en) A kind of minimally invasive spine surgical augmented reality air navigation aid
CN116421313A (en) Augmented reality fusion method in thoracoscopic lung tumor resection surgical navigation
CN108090954A (en) Abdominal cavity environmental map based on characteristics of image rebuilds the method with laparoscope positioning
CN113274129A (en) Cardiothoracic surgery auxiliary control system based on virtual reality
CN107221029A (en) A 3D Image Reconstruction Method
CN110731817A (en) radiationless percutaneous spine positioning method based on optical scanning automatic contour segmentation matching
CN114886558A (en) Endoscope projection method and system based on augmented reality
CN116485850A (en) Real-time non-rigid registration method and system for surgical navigation image based on deep learning
Alam et al. Evaluation of medical image registration techniques based on nature and domain of the transformation
CN118340579A (en) Imaging method applied to neurosurgery
CN113012230A (en) Method for placing surgical guide plate under auxiliary guidance of AR in operation
CN114298986B (en) Chest bone three-dimensional construction method and system based on multi-view disordered X-ray film
CN115018890A (en) A three-dimensional model registration method and system
Andrea et al. Validation of stereo vision based liver surface reconstruction for image guided surgery
CN120125631A (en) Medical image processing method, system, device and storage medium
Shao et al. Facial augmented reality based on hierarchical optimization of similarity aspect graph
CN116993805A (en) Intraoperative residual organ volume estimation system for surgical planning assistance
Deguchi et al. A method for bronchoscope tracking using position sensor without fiducial markers
Zhou et al. Taking measurement in every direction: Implicit scene representation for accurately estimating target dimensions under monocular endoscope

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant