
CN114494374A - Method for determining fusion error of three-dimensional model and two-dimensional image and electronic equipment - Google Patents


Info

Publication number
CN114494374A
CN114494374A (application number CN202210064500.6A)
Authority
CN
China
Prior art keywords
outer contour
dimensional
pixel
distance
biological tissue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210064500.6A
Other languages
Chinese (zh)
Inventor
吴乙荣
李南哲
陈永健
杨欣荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Medical Equipment Co Ltd
Original Assignee
Qingdao Hisense Medical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Medical Equipment Co Ltd filed Critical Qingdao Hisense Medical Equipment Co Ltd
Priority claimed from CN202210064500.6A
Publication of CN114494374A
Legal status: Pending

Classifications

    • G: Physics
    • G06: Computing or calculating; counting
    • G06T: Image data processing or generation, in general
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Image registration using feature-based methods
    • G06T 7/344: Image registration using feature-based methods involving models
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/77: Determining position or orientation of objects or cameras using statistical methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/30: Subject of image; context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30101: Blood vessel; artery; vein; vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a method and an electronic device for determining the fusion error of a three-dimensional model and a two-dimensional image, so as to solve the problem that the prior art cannot accurately calculate the actual fusion error between the three-dimensional model of biological tissue and the two-dimensional image of that tissue in a laparoscopic video. In this application, the two-dimensional image and the three-dimensional model of the biological tissue are registered with a registration algorithm under the view of a virtual camera and displayed in fused form; a first outer contour of the biological tissue is obtained from the two-dimensional image, and a second outer contour is obtained from a reference two-dimensional image produced by projecting the three-dimensional model onto a two-dimensional plane; the pixel distance between the first and second outer contours is determined; and, using a predetermined correspondence between unit pixel distance and physical distance, the physical distance corresponding to that pixel distance is determined as the fusion error. In this way the actual fusion error between the three-dimensional model and the two-dimensional image of the biological tissue can be calculated accurately.

Description

Method and electronic device for determining the fusion error of a 3D model and a 2D image

Technical Field

The present application relates to the field of image processing technology, and in particular to a method and an electronic device for determining the fusion error of a three-dimensional model and a two-dimensional image.

Background

In laparoscopic surgical navigation, the preoperatively reconstructed three-dimensional model must be registered with the biological tissue in the laparoscopic video and displayed in fused form, so that the preoperatively reconstructed blood vessels and lesions can be used to mark the actual blood-vessel and lesion positions in the video.

In the related art, one way to measure how well the three-dimensional model of biological tissue matches the two-dimensional image of that tissue in a laparoscopic video is based on image similarity. A similarity-based measure, however, cannot directly give the actual fusion error of the match: existing measures of the actual fusion error all presuppose exact solutions for the rotation angle, scale, and translation, and compute the deviation of the three-dimensional model of the biological tissue from the exact solution under the registered camera view. In practice, exact solutions for rotation angle, scale, and translation are generally unavailable, that is, the optimal rotation, scale, and translation parameters cannot be obtained, so the actual fusion error between the three-dimensional model of the biological tissue and its two-dimensional image in the laparoscopic video cannot be calculated accurately. How to calculate this error accurately is therefore a problem in urgent need of a solution.

Summary of the Invention

The purpose of the present application is to provide a method and an electronic device for determining the fusion error of a three-dimensional model and a two-dimensional image, so as to solve the problem that the prior art cannot accurately calculate the actual fusion error between the three-dimensional model of biological tissue and the two-dimensional image of that tissue in a laparoscopic video.

In a first aspect, the present application provides a method for determining the fusion error of a three-dimensional model and a two-dimensional image, the method comprising:

registering the two-dimensional image and the three-dimensional model of the biological tissue under the view of a virtual camera using a registration algorithm, and displaying them in fused form;

acquiring a first outer contour of the biological tissue in the two-dimensional image, and acquiring a second outer contour of the biological tissue in a reference two-dimensional image obtained by projecting the three-dimensional model of the biological tissue onto a two-dimensional plane;

determining the pixel distance between the first outer contour and the second outer contour; and

determining, using a predetermined correspondence between unit pixel distance and physical distance, the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour as the fusion error.

In a possible implementation, determining the pixel distance between the first outer contour and the second outer contour comprises:

determining, based on the first outer contour and the second outer contour, the pixel-point pairs in the two contours that describe the same position; and

determining, based on the pixel-point pairs, the average pixel distance error between the first outer contour and the second outer contour as the pixel distance between them.

In a possible implementation, determining, based on the first outer contour and the second outer contour, the pixel-point pairs in the two contours that describe the same position comprises:

computing the context information of each pixel point in the first outer contour and the second outer contour, the context information being the neighborhood structure of the pixel point;

computing a cost value for any two pixel points, one from the first outer contour and one from the second outer contour; and

using the Hungarian algorithm to find the set of pixel-point pairs with the lowest overall cost value between the first outer contour and the second outer contour, the set comprising multiple pixel-point pairs that each describe the same position.
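The matching steps above can be sketched as follows. This is a minimal, self-contained illustration, not the patent's implementation: plain Euclidean distance stands in for the neighborhood-context cost the patent describes, and a brute-force search over permutations stands in for the Hungarian algorithm (in practice one would use an O(n^3) implementation such as scipy.optimize.linear_sum_assignment). The contours and point counts are made up for demonstration.

```python
from itertools import permutations
import math

def match_contours(contour_a, contour_b):
    """Find the pairing of points between the two contours with the
    lowest total cost. The cost of a pair is plain Euclidean distance
    here, standing in for the neighborhood-context cost. Brute force
    over permutations is used only for clarity on tiny inputs; a real
    implementation would use the Hungarian algorithm."""
    assert len(contour_a) == len(contour_b)
    n = len(contour_a)
    # Cost matrix: cost[i][j] = cost of pairing point i of A with point j of B.
    cost = [[math.dist(contour_a[i], contour_b[j]) for j in range(n)]
            for i in range(n)]
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    # Return the matched pixel-point pairs and the overall cost value.
    return ([(contour_a[i], contour_b[best_perm[i]]) for i in range(n)],
            best_cost)

# Two tiny 4-point "contours"; the second is the first shifted by (2, 1).
a = [(0, 0), (10, 0), (10, 10), (0, 10)]
b = [(2, 1), (12, 1), (12, 11), (2, 11)]
pairs, total = match_contours(a, b)
# Each point is matched to its shifted counterpart, at distance sqrt(5).
```

Brute force is factorial in the number of points, so it only serves to make the assignment objective concrete; the Hungarian algorithm computes the same minimum-cost assignment in polynomial time.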

In a possible implementation, determining the average pixel distance error of the first outer contour and the second outer contour based on the pixel-point pairs comprises:

determining the coordinates of each pixel point in the first outer contour and the second outer contour in the same coordinate system, the coordinate system having an X axis and a Y axis;

subtracting the X-axis coordinates of the two points in each pixel-point pair to obtain the pixel distance errors of all pixel-point pairs on the X axis;

dividing the sum of the pixel distance errors of all pixel-point pairs on the X axis by the number of pixel-point pairs to obtain the average pixel distance error on the X axis;

subtracting the Y-axis coordinates of the two points in each pixel-point pair to obtain the pixel distance errors of all pixel-point pairs on the Y axis; and

dividing the sum of the pixel distance errors of all pixel-point pairs on the Y axis by the number of pixel-point pairs to obtain the average pixel distance error on the Y axis.
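The per-axis averaging above can be sketched as follows. One assumption in this sketch: absolute differences are used (the patent only says the coordinates are subtracted), since signed per-pair differences could otherwise cancel out in the average.

```python
def axis_average_errors(pairs):
    """Given matched pixel-point pairs ((x1, y1), (x2, y2)), return the
    average pixel distance error on the X axis and on the Y axis:
    subtract the coordinates within each pair, sum the absolute
    differences per axis, and divide by the number of pairs."""
    n = len(pairs)
    dx = sum(abs(p1[0] - p2[0]) for p1, p2 in pairs) / n
    dy = sum(abs(p1[1] - p2[1]) for p1, p2 in pairs) / n
    return dx, dy

# Pairs from two contours offset by 2 px in X and 1 px in Y.
pairs = [((0, 0), (2, 1)), ((10, 0), (12, 1)),
         ((10, 10), (12, 11)), ((0, 10), (2, 11))]
dx, dy = axis_average_errors(pairs)  # dx = 2.0, dy = 1.0
```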

In a possible implementation, determining the correspondence between unit pixel distance and physical distance specifically comprises:

determining, in the world coordinate system, the three-dimensional coordinate minimum and the three-dimensional coordinate maximum of the biological tissue in the three-dimensional model;

projecting the three-dimensional coordinate minimum and maximum onto a two-dimensional plane under the same view of the virtual camera, to obtain the two-dimensional coordinates corresponding to each;

calculating the physical distance between the three-dimensional coordinate minimum and maximum from their world coordinates;

calculating the pixel distance between the two two-dimensional coordinates corresponding to the three-dimensional coordinate minimum and maximum; and

determining, from that physical distance and that pixel distance, the physical distance corresponding to a unit pixel distance on the X axis and the physical distance corresponding to a unit pixel distance on the Y axis of the two-dimensional coordinate system.
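One plausible reading of the calibration steps above can be sketched as follows. This is an assumption, not the patent's exact formula: the physical extent of the tissue's bounding box along each world axis is divided by the pixel span of its projection along the corresponding image axis. The bounding-box corners and projected coordinates are made-up illustrative values; in the patent they come from the 3D model and the virtual-camera projection.

```python
def unit_pixel_to_physical(p3d_min, p3d_max, p2d_min, p2d_max):
    """Calibrate how many physical units (e.g. mm) one pixel covers.
    p3d_min / p3d_max: world-coordinate bounding-box corners of the
    tissue in the 3D model. p2d_min / p2d_max: their projections onto
    the image plane under the same virtual-camera view."""
    phys_x = abs(p3d_max[0] - p3d_min[0])   # physical extent along X (mm)
    phys_y = abs(p3d_max[1] - p3d_min[1])   # physical extent along Y (mm)
    pix_x = abs(p2d_max[0] - p2d_min[0])    # pixel span along image x
    pix_y = abs(p2d_max[1] - p2d_min[1])    # pixel span along image y
    return phys_x / pix_x, phys_y / pix_y   # mm per pixel on each axis

# Example: a 200 mm x 150 mm bounding box projects to a 400 x 300 px span.
sx, sy = unit_pixel_to_physical((0.0, 0.0, 50.0), (200.0, 150.0, 80.0),
                                (100, 100), (500, 400))
# sx = 0.5 mm/px, sy = 0.5 mm/px
```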

In a possible implementation, determining the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour specifically comprises:

multiplying the average pixel distance error on the X axis by the physical distance corresponding to a unit pixel distance on the X axis, to obtain the physical distance corresponding to the X-axis pixel distance between the first outer contour and the second outer contour;

multiplying the average pixel distance error on the Y axis by the physical distance corresponding to a unit pixel distance on the Y axis, to obtain the physical distance corresponding to the Y-axis pixel distance between the first outer contour and the second outer contour;

obtaining, from the physical distances corresponding to the X-axis and Y-axis pixel distances between the first outer contour and the second outer contour, the physical distance corresponding to the pixel distance between the two contours; and

taking the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour as the fusion error.
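The final conversion can be sketched as follows. The two per-axis multiplications follow the steps above; combining the two axis components with a Euclidean norm is this sketch's assumption, since the patent does not spell out how the X-axis and Y-axis physical distances are merged into one value.

```python
import math

def fusion_error(avg_px_err_x, avg_px_err_y, mm_per_px_x, mm_per_px_y):
    """Convert the per-axis average pixel errors into physical distances
    and combine them into a single fusion error (Euclidean norm is an
    assumption of this sketch)."""
    ex_mm = avg_px_err_x * mm_per_px_x   # physical error along X (mm)
    ey_mm = avg_px_err_y * mm_per_px_y   # physical error along Y (mm)
    return math.hypot(ex_mm, ey_mm)

# 2 px average error in X, 1 px in Y, at 0.5 mm per pixel on both axes:
err = fusion_error(2.0, 1.0, 0.5, 0.5)  # sqrt(1.0**2 + 0.5**2) mm
```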

In a possible implementation, acquiring the first outer contour of the biological tissue in the two-dimensional image and acquiring the second outer contour of the biological tissue in the reference two-dimensional image obtained by projecting the three-dimensional model of the biological tissue onto a two-dimensional plane specifically comprises:

obtaining the first outer contour of the biological tissue in the two-dimensional image using an artificial-intelligence algorithm;

creating a virtual window whose playback size matches the playback size of the two-dimensional image, and using the virtual camera on that window to project the three-dimensional model of the biological tissue onto a two-dimensional plane under the same view of the virtual camera;

capturing a two-dimensional image of that plane to obtain the reference two-dimensional image; and

performing edge detection on the biological tissue in the reference two-dimensional image to obtain the second outer contour of the biological tissue.
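The edge-detection step can be illustrated with a minimal stand-in that needs no imaging library: on a binary segmentation mask of the tissue, a foreground pixel belongs to the outer contour if any of its 4-neighbors is background (or lies outside the image). A real implementation would more likely use a library edge detector such as OpenCV's Canny or findContours; the mask below is made up for demonstration.

```python
def outer_contour(mask):
    """Return the boundary pixels of a binary mask as (x, y) tuples:
    a pixel is on the contour if it is foreground (1) and at least one
    of its 4-neighbors is background (0) or outside the image."""
    h, w = len(mask), len(mask[0])
    contour = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.append((x, y))
                    break
    return contour

# A 5x5 mask with a filled 3x3 square: the contour is the square's 8
# border pixels; the fully surrounded center pixel is excluded.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
pts = outer_contour(mask)
```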

In a possible implementation, before determining the pixel distance between the first outer contour and the second outer contour, the method further comprises:

down-sampling the first outer contour and the second outer contour using a curve data compression algorithm, to obtain a first outer contour and a second outer contour with the same number of pixel points.
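The down-sampling step can be sketched as follows. The patent calls for a curve data compression algorithm (Douglas-Peucker is a common choice), but such algorithms do not by themselves guarantee equal point counts on both contours; the uniform index-based resampling used here does, which is what the later pairing step needs. It is a simple stand-in, not the patent's method.

```python
def resample_contour(points, n):
    """Down-sample an ordered contour to exactly n points by picking
    evenly spaced indices along the point sequence."""
    if n >= len(points):
        return list(points)
    step = len(points) / n
    return [points[int(i * step)] for i in range(n)]

c1 = [(i, 0) for i in range(100)]   # 100-point contour
c2 = [(0, i) for i in range(80)]    # 80-point contour
a = resample_contour(c1, 40)
b = resample_contour(c2, 40)
# Both contours now have the same number of points: len(a) == len(b) == 40
```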

In a second aspect, the present application provides an electronic device comprising a processor and a memory:

the memory being configured to store a computer program executable by the processor; and

the processor being connected to the memory and configured to execute the program to implement the method for determining the fusion error of a three-dimensional model and a two-dimensional image according to any implementation of the first aspect.

In a third aspect, the present application provides a computer-readable storage medium; when instructions in the storage medium are executed by an electronic device, the electronic device is enabled to perform the method for determining the fusion error of a three-dimensional model and a two-dimensional image according to any implementation of the first aspect.

In a fourth aspect, the present application provides a computer program product comprising a computer program:

when the computer program is executed by a processor, the method for determining the fusion error of a three-dimensional model and a two-dimensional image according to any implementation of the first aspect is implemented.

The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:

In the embodiments of the present application, the actual physical distance corresponding to a unit pixel distance is determined first. The two-dimensional image and the three-dimensional model of the biological tissue are then registered with a registration algorithm under the view of a virtual camera and displayed in fused form; the first outer contour of the biological tissue is obtained from the two-dimensional image, and the second outer contour is obtained from a reference two-dimensional image produced by projecting the three-dimensional model onto a two-dimensional plane; the pixel distance between the first and second outer contours is determined; and, using the predetermined correspondence between unit pixel distance and physical distance, the physical distance corresponding to that pixel distance is determined as the fusion error. By combining the predetermined pixel-to-physical correspondence with the first outer contour in the two-dimensional image and the second outer contour in the two-dimensional projection of the three-dimensional model, the error between the two-dimensional image and the three-dimensional model of the same biological tissue can be quantified fairly accurately, improving the accuracy of the actual fusion error between the three-dimensional model of the biological tissue and its two-dimensional image in the laparoscopic video. A more precise fusion error can thus be reported, which improves the user experience.

Other features and advantages of the present application will be set forth in the description that follows and will in part be apparent from the description or learned by practice of the application. The objectives and other advantages of the application may be realized and attained by the structures particularly pointed out in the written description, the claims, and the drawings.

Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings introduced below show only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1a is a schematic diagram of a two-dimensional image of biological tissue in a laparoscopic video according to an embodiment of the present application;

Fig. 1b is a schematic diagram of a preoperatively reconstructed three-dimensional model of biological tissue according to an embodiment of the present application;

Fig. 1c is an effect diagram of the fused display of the three-dimensional model and the two-dimensional image of biological tissue after registration according to an embodiment of the present application;

Fig. 2 is a schematic flowchart of a method for determining the fusion error of a three-dimensional model and a two-dimensional image according to an embodiment of the present application;

Fig. 3a is a schematic diagram of the first outer contour of biological tissue in a two-dimensional image according to an embodiment of the present application;

Fig. 3b is a schematic diagram of the second outer contour of biological tissue in a reference two-dimensional image obtained by projecting the three-dimensional model of the biological tissue onto a two-dimensional plane according to an embodiment of the present application;

Fig. 3c is a schematic diagram of the first outer contour of biological tissue in a two-dimensional image after down-sampling according to an embodiment of the present application;

Fig. 3d is a schematic diagram of the second outer contour of biological tissue in a reference two-dimensional image after down-sampling according to an embodiment of the present application;

Fig. 4 is a schematic flowchart of a method for determining the pixel distance between the first outer contour and the second outer contour according to an embodiment of the present application;

Fig. 5 is a schematic diagram of the set of pixel-point pairs with the lowest overall cost value between the first outer contour and the second outer contour according to an embodiment of the present application;

Fig. 6 is a schematic flowchart of a method for determining the average pixel distance error of the first outer contour and the second outer contour according to an embodiment of the present application;

Fig. 7 is a schematic flowchart of a method for determining the correspondence between unit pixel distance and physical distance according to an embodiment of the present application;

Fig. 8 is a schematic flowchart of a method for determining the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour according to an embodiment of the present application;

Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.

In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "multiple" means two or more.

The terms "first" and "second" below are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. A feature qualified by "first" or "second" may therefore explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.

In laparoscopic surgical navigation, the biological tissue in the laparoscopic video shown in Fig. 1a must be registered with the preoperatively reconstructed three-dimensional model shown in Fig. 1b and displayed in fused form (as shown in Fig. 1c), so that the preoperatively reconstructed blood vessels and lesions can be used to mark the actual blood-vessel and lesion positions in the video. Matching the preoperatively reconstructed three-dimensional model with the biological tissue in the laparoscopic video involves 3D/2D registration.

Matching a 3D model against a 2D image is matching across different dimensions: either the liver region in the laparoscopic video is lifted to three dimensions for 3D/3D registration, or the preoperative three-dimensional model is projected down to 2D for 2D/2D registration. In 2D/2D registration, the three-dimensional model is projected onto a two-dimensional image to generate a two-dimensional contour, and the biological-tissue region is segmented from the two-dimensional image in the laparoscopic video to obtain the contour of the tissue. Each contour is matched against a library of projected contours of the preoperative biological tissue; the best-matching contour is selected from the library, from which the rotation, translation, scale, and other parameters of the three-dimensional model are obtained, so that the model can be transformed to the proper position and the blood vessels and lesions in the video can be marked. 3D/3D registration is impractical here: because the laparoscope zooms frequently, the current camera intrinsics cannot be obtained in real time, so the depth of each pixel cannot be recovered, and the tissue surface is very smooth with no obvious landmark points. The registration method used in this application is therefore 2D/2D registration.

In the related art there are two main ways to measure how well the three-dimensional model of biological tissue matches the two-dimensional image of that tissue in a laparoscopic video: one based on image similarity and one based on edge information. A similarity-based measure cannot directly give the actual fusion error of the match: existing measures of the actual fusion error all presuppose exact solutions for the rotation angle, scale, and translation, and compute the deviation of the three-dimensional model from the exact solution under the registered camera view. In practice, exact rotation, scale, and translation parameters, that is, the optimal parameters, generally cannot be given, so the actual fusion error between the three-dimensional model of the biological tissue and its two-dimensional image in the laparoscopic video cannot be calculated accurately. How to calculate this error accurately is therefore a problem in urgent need of a solution.

有鉴于此，本申请提供了一种三维模型和二维图像的融合误差的确定方法及电子设备，用以解决现有技术不能准确计算出生物组织的三维模型与腹腔镜视频中生物组织的二维图像的实际融合误差的问题。In view of this, the present application provides a method and an electronic device for determining the fusion error between a three-dimensional model and a two-dimensional image, so as to solve the problem that the prior art cannot accurately compute the actual fusion error between the three-dimensional model of a biological tissue and the two-dimensional image of that tissue in a laparoscopic video.

本申请的发明构思可概括为：本申请实施例会首先确定单位像素距离对应的实际物理距离，然后通过将生物组织的二维图像和三维模型在虚拟相机的视角下使用配准算法进行配准后融合显示；获取二维图像中生物组织的第一外轮廓和生物组织的三维模型投影到二维平面得到的参考二维图像中生物组织的第二外轮廓；确定第一外轮廓和第二外轮廓之间的像素距离；采用预先确定的单位像素距离和物理距离之间的对应关系，确定第一外轮廓和第二外轮廓之间的像素距离对应的物理距离作为融合误差。由此本申请通过预先确定的单位像素距离和物理距离之间的对应关系以及二维图像中生物组织的第一外轮廓和生物组织的三维模型的二维投影图像中生物组织的第二外轮廓，可以较为准确的量化同一生物组织在二维图和三维模型之间的误差，提高生物组织的三维模型与腹腔镜视频中生物组织的二维图像的实际融合误差的准确性，从而能够更加精确地给出三维模型和二维图像的融合误差，能够提升用户的体验。The inventive concept of the present application can be summarized as follows. The embodiments first determine the actual physical distance corresponding to a unit pixel distance; the two-dimensional image and the three-dimensional model of the biological tissue are then registered under the view of a virtual camera using a registration algorithm and displayed in fusion; the first outer contour of the biological tissue in the two-dimensional image and the second outer contour of the biological tissue in the reference two-dimensional image obtained by projecting the three-dimensional model onto a two-dimensional plane are acquired; the pixel distance between the first and second outer contours is determined; and, using the predetermined correspondence between unit pixel distance and physical distance, the physical distance corresponding to that pixel distance is determined as the fusion error. In this way, through the predetermined correspondence between unit pixel distance and physical distance, together with the first outer contour of the tissue in the two-dimensional image and the second outer contour of the tissue in the two-dimensional projection of its three-dimensional model, the error between the two-dimensional image and the three-dimensional model of the same biological tissue can be quantified relatively accurately. This improves the accuracy of the actual fusion error between the three-dimensional model of the tissue and its two-dimensional image in the laparoscopic video, so that the fusion error between the three-dimensional model and the two-dimensional image can be given more precisely, which can improve the user experience.

需要说明的是，本申请不仅适用于确定三维模型和腹腔镜视频的融合误差，也适用于生物组织的三维模型和任意二维医学图像的融合误差的计算。本申请实施例中的生物组织例如为肝脏、胃、肾脏等内部器官。It should be noted that the present application is applicable not only to determining the fusion error between a three-dimensional model and a laparoscopic video, but also to computing the fusion error between a three-dimensional model of biological tissue and any two-dimensional medical image. The biological tissue in the embodiments of the present application is, for example, an internal organ such as the liver, stomach, or kidney.

在介绍完本申请实施例的主要发明思想之后,下面结合附图对本申请实施例提供的三维模型和二维图像的融合误差的确定方法进行介绍。After introducing the main inventive idea of the embodiments of the present application, the following describes the method for determining the fusion error of the three-dimensional model and the two-dimensional image provided by the embodiments of the present application with reference to the accompanying drawings.

图2为本申请实施例提供的三维模型和二维图像的融合误差的确定方法的流程示意图。如图2所示,该方法包括以下步骤:FIG. 2 is a schematic flowchart of a method for determining a fusion error of a three-dimensional model and a two-dimensional image according to an embodiment of the present application. As shown in Figure 2, the method includes the following steps:

在步骤201中,将生物组织的二维图像和三维模型在虚拟相机的视角下使用配准算法进行配准后融合显示。In step 201, the two-dimensional image of the biological tissue and the three-dimensional model are fused and displayed after being registered using a registration algorithm from the perspective of the virtual camera.

具体可以实施为，基于Mitk(医学影像交互工具包)的开源软件平台实现本申请实施例提供的三维模型和二维图像的融合误差的确定方法。该方法中首先需要将dicom(Digital Imaging and Communications in Medicine,医学数字成像和通信)数据和生物组织的三维模型的数据加载至Mitk的开源软件平台中，同时将腹腔镜的手术视频中的生物组织的二维图像渲染到三维窗口中，然后利用2D/3D配准算法将如图1b所示的生物组织的三维模型与如图1a所示的腹腔镜视频中的生物组织的区域配准到一起，进行融合显示，融合显示效果图如图1c所示。Specifically, the method for determining the fusion error between a three-dimensional model and a two-dimensional image provided by the embodiments of the present application can be implemented on the open-source software platform Mitk (Medical Imaging Interaction Toolkit). In this method, the DICOM (Digital Imaging and Communications in Medicine) data and the data of the three-dimensional model of the biological tissue are first loaded into the Mitk platform, and the two-dimensional images of the biological tissue from the laparoscopic surgical video are rendered into the three-dimensional window; a 2D/3D registration algorithm is then used to register the three-dimensional model of the biological tissue shown in FIG. 1b with the region of the biological tissue in the laparoscopic video shown in FIG. 1a, and the two are displayed in fusion. The fusion display effect is shown in FIG. 1c.

在步骤202中,获取二维图像中生物组织的第一外轮廓,并获取生物组织的三维模型投影到二维平面得到的参考二维图像中生物组织的第二外轮廓。In step 202, the first outer contour of the biological tissue in the two-dimensional image is acquired, and the second outer contour of the biological tissue in the reference two-dimensional image obtained by projecting the three-dimensional model of the biological tissue onto the two-dimensional plane is acquired.

在一种可能的实施方式中，本申请实施例中获取二维图像中生物组织的第一外轮廓，并获取生物组织的三维模型投影到二维平面得到的参考二维图像中生物组织的第二外轮廓可以具体实施为：首先利用人工智能算法，获取二维图像中生物组织的第一外轮廓，然后创建一个视频播放尺寸与二维图像播放尺寸一致的虚拟窗口，使用步骤201中的虚拟相机作用在虚拟窗口上，在同一视角下，将生物组织的三维模型投影到二维平面；然后获取二维平面的二维图像，得到参考二维图像；最后在参考二维图像中对生物组织进行边缘检测，得到生物组织的第二外轮廓。In a possible implementation, acquiring the first outer contour of the biological tissue in the two-dimensional image, and the second outer contour of the biological tissue in the reference two-dimensional image obtained by projecting its three-dimensional model onto a two-dimensional plane, can be implemented as follows: first, an artificial-intelligence algorithm is used to obtain the first outer contour of the biological tissue in the two-dimensional image; then a virtual window whose playback size is consistent with that of the two-dimensional image is created, and the virtual camera from step 201 is applied to the virtual window so that, under the same view, the three-dimensional model of the biological tissue is projected onto the two-dimensional plane; a two-dimensional image of that plane is then captured to obtain the reference two-dimensional image; finally, edge detection is performed on the biological tissue in the reference two-dimensional image to obtain its second outer contour.

其中第一外轮廓为构成生物组织在二维图像中的前沿和后沿的边缘的所有像素点的集合,第二外轮廓为构成生物组织在参考二维图像中的前沿和后沿的边缘的所有像素点的集合。The first outer contour is the set of all pixel points that constitute the edges of the leading and trailing edges of the biological tissue in the two-dimensional image, and the second outer contour is the edges that constitute the leading and trailing edges of the biological tissue in the reference two-dimensional image. The collection of all pixels.

示例性的，生物组织以肝脏为例，首先利用人工智能算法，分割出腹腔镜视频中二维图像中肝脏的第一外轮廓，如图3a所示，第一外轮廓是指肝脏在二维图像中的前沿以及后沿，粗线条为肝脏的后沿，细线条为肝脏的前沿。接下来需要获取肝脏三维模型在步骤201的虚拟相机的相同视角下的投影轮廓，可以通过创建一个视频播放尺寸与腹腔镜视频的播放窗口尺寸一致的虚拟窗口，使用虚拟相机作用在虚拟窗口上，得到肝脏三维模型投影到二维平面后得到的参考二维图像，获取肝脏的投影图像的第二外轮廓，如图3b所示，粗线条为肝脏的后沿，细线条为肝脏的前沿。由此使用视频播放尺寸与二维图像播放尺寸一致的虚拟窗口，可以保证三维图像投影图像尺寸与视频图像的尺寸一致。Exemplarily, taking the liver as the biological tissue, an artificial-intelligence algorithm is first used to segment the first outer contour of the liver in the two-dimensional image of the laparoscopic video. As shown in FIG. 3a, the first outer contour refers to the leading edge and the trailing edge of the liver in the two-dimensional image: the thick line is the trailing edge of the liver and the thin line is the leading edge of the liver. Next, the projected contour of the three-dimensional liver model under the same view as the virtual camera of step 201 is needed. This can be obtained by creating a virtual window whose playback size is consistent with that of the playback window of the laparoscopic video and applying the virtual camera to it, yielding the reference two-dimensional image produced by projecting the three-dimensional liver model onto the two-dimensional plane, from which the second outer contour of the projected liver image is acquired. As shown in FIG. 3b, the thick line is the trailing edge of the liver and the thin line is the leading edge. Using a virtual window whose playback size matches that of the two-dimensional image thus ensures that the size of the projected image of the three-dimensional model is consistent with that of the video image.
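The contour acquisition described above can be sketched as follows. This is a minimal illustration in pure Python, assuming the AI segmentation has already produced a binary mask (1 = tissue); the function and variable names are ours, not part of the embodiment, and a real implementation would use an image library's contour tracing instead.

```python
def outer_contour(mask):
    """Collect boundary pixels of a binary mask (1 = tissue): a pixel
    belongs to the outer contour if it is set and has at least one
    4-neighbour that is outside the mask or off the image."""
    h, w = len(mask), len(mask[0])
    contour = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.append((x, y))  # store as (x, y) pixel coordinate
                    break
    return contour
```

On a fully filled 3x3 mask this keeps the eight border pixels and drops the interior one, which is the behaviour the embodiment needs before contour matching.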

在一种可能的实施方式中，为了减少第一外轮廓和第二外轮廓像素点匹配过程中的算法复杂度，加快算法计算时间，本申请实施例中还可以利用曲线数据压缩(Douglas–Peucker)算法，对第一外轮廓和第二外轮廓进行下采样处理，得到像素点数量相同的第一外轮廓和第二外轮廓，如图3c和图3d所示。由此可以减少第一外轮廓和第二外轮廓中像素点的数量，同时可以保证第一外轮廓和第二外轮廓中的像素点可以一一对应。In a possible implementation, in order to reduce the algorithmic complexity of matching the pixel points of the first and second outer contours and to speed up computation, the embodiments of the present application may further apply the Douglas–Peucker curve-simplification algorithm to down-sample the first and second outer contours, obtaining a first and a second outer contour with the same number of pixel points, as shown in FIG. 3c and FIG. 3d. This reduces the number of pixel points in the two contours while ensuring that their pixel points can be placed in one-to-one correspondence.
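The Douglas–Peucker down-sampling mentioned above can be sketched as follows. This is an illustrative pure-Python version (the names are ours); in practice the tolerance `epsilon` would be tuned so the two contours end up with the same number of points, or a library routine such as OpenCV's `approxPolyDP` would be used.

```python
import math

def douglas_peucker(points, epsilon):
    """Recursively simplify a polyline, keeping only points farther than
    epsilon from the chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Find the interior point with the largest perpendicular distance.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[:idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]
```

For example, a nearly collinear point such as `(1, 0.05)` on the chord from `(0, 0)` to `(2, 0)` is removed at `epsilon = 0.1`, while genuine corners are kept.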

在步骤203中,确定第一外轮廓和第二外轮廓之间的像素距离。In step 203, the pixel distance between the first outer contour and the second outer contour is determined.

在一种可能的实施方式中,本申请中确定第一外轮廓和第二外轮廓之间的像素距离,具体可以实施为如图4所示的步骤:In a possible implementation manner, the pixel distance between the first outer contour and the second outer contour is determined in the present application, which can be specifically implemented as the steps shown in FIG. 4 :

在步骤401中,基于第一外轮廓和第二外轮廓,确定第一外轮廓和第二外轮廓中用于描述同一位置的像素点对。In step 401, based on the first outer contour and the second outer contour, a pixel point pair used to describe the same position in the first outer contour and the second outer contour is determined.

在一种可能的实施方式中，本申请实施例可以使用形状上下文算法(shape context)获取第一外轮廓和第二外轮廓中用于描述同一位置的像素点对，具体可以实施为：首先计算第一外轮廓和第二外轮廓中每个像素点对应的上下文信息；然后计算第一外轮廓和第二外轮廓中任意两个像素点的花费值；最后使用匈牙利算法统计出第一外轮廓和第二外轮廓的总体花费值最低的一组像素点对，如图5所示，白色的像素点为第一外轮廓的像素点，灰色的像素点为第二外轮廓的像素点。其中上下文信息为像素点的邻域结构，一组像素点对包括多个同一位置的像素点对。In a possible implementation, the embodiments of the present application may use the shape context algorithm to obtain the pixel point pairs describing the same position in the first and second outer contours, which can be implemented as follows: first, the context information corresponding to each pixel point in the first and second outer contours is computed; then the matching cost between any two pixel points of the first and second outer contours is computed; finally, the Hungarian algorithm is used to find the group of pixel point pairs with the lowest overall cost between the two contours. As shown in FIG. 5, the white pixel points belong to the first outer contour and the gray pixel points to the second. The context information is the neighborhood structure of a pixel point, and a group of pixel point pairs contains multiple pairs each describing the same position.
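The minimum-total-cost pairing described above can be illustrated as follows. For clarity, this sketch replaces the Hungarian algorithm with an exhaustive search over permutations (feasible only for tiny examples; a real implementation would use an O(n³) Hungarian solver such as SciPy's `linear_sum_assignment`). Here `costs[i][j]` stands in for the shape-context matching cost between point i of the first contour and point j of the second, which is not computed in this sketch.

```python
import itertools
import math

def best_matching(costs):
    """Find the one-to-one assignment with the lowest total cost by
    brute force. Returns (perm, total) where perm[i] is the index of
    the second-contour point matched to first-contour point i."""
    n = len(costs)
    best, best_perm = math.inf, None
    for perm in itertools.permutations(range(n)):
        total = sum(costs[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return list(best_perm), best
```

With a cost matrix whose diagonal is cheapest, the identity pairing is returned, mirroring how the Hungarian step selects the group of pixel pairs with the lowest overall cost.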

在步骤402中,基于像素点对,确定第一外轮廓和第二外轮廓的平均像素距离误差作为第一外轮廓和第二外轮廓之间的像素距离。In step 402, based on the pixel point pair, an average pixel distance error of the first outer contour and the second outer contour is determined as the pixel distance between the first outer contour and the second outer contour.

在一种可能的实施方式中,本申请实施例中基于像素点对,确定第一外轮廓和第二外轮廓的平均像素距离误差,具体可以实施为如图6所示的步骤:In a possible implementation manner, in the embodiment of the present application, the average pixel distance error of the first outer contour and the second outer contour is determined based on the pixel point pair, which may be specifically implemented as the steps shown in FIG. 6 :

在步骤601中,在同一坐标系下,确定第一外轮廓和第二外轮廓中每个像素点的坐标;同一坐标系包括X轴坐标和Y轴坐标;In step 601, under the same coordinate system, determine the coordinates of each pixel point in the first outer contour and the second outer contour; the same coordinate system includes X-axis coordinates and Y-axis coordinates;

在步骤602中,将第一外轮廓和第二外轮廓中每个像素点对的X轴上的坐标相减,得到X轴上所有像素点对的像素距离误差;In step 602, the coordinates on the X axis of each pixel pair in the first outer contour and the second outer contour are subtracted to obtain the pixel distance error of all pixel pairs on the X axis;

在步骤603中,将X轴上所有像素点对的像素距离误差除以X轴上像素点对的数量,得到X轴上的平均像素距离误差;In step 603, the pixel distance error of all pixel point pairs on the X axis is divided by the number of pixel point pairs on the X axis to obtain the average pixel distance error on the X axis;

在步骤604中,将第一外轮廓和第二外轮廓中每个像素点对的Y轴上的坐标相减,得到Y轴上所有像素点对的像素距离误差;In step 604, the coordinates on the Y-axis of each pixel pair in the first outer contour and the second outer contour are subtracted to obtain the pixel distance error of all pixel pairs on the Y-axis;

在步骤605中,将Y轴上所有像素点对的像素距离误差除以Y轴上像素点对的数量,得到Y轴上的平均像素距离误差。In step 605, the pixel distance error of all pixel point pairs on the Y axis is divided by the number of pixel point pairs on the Y axis to obtain the average pixel distance error on the Y axis.

示例性的，假设第一外轮廓的像素点集合为 P = {p_i = (px_i, py_i), i = 1, 2, …, m}，其中第 i 个像素点的二维坐标为 (px_i, py_i)，px_i 表示像素点 p_i 在二维图像坐标系下的X轴上的坐标，py_i 表示其在Y轴上的坐标，其中坐标原点为虚拟窗口的左上角；m表示第一外轮廓的像素点的数量；与第一外轮廓的像素点集合对应的第二外轮廓的像素点集合为 Q = {q_i = (qx_i, qy_i), i = 1, 2, …, m}，其中第 i 个像素点的二维坐标为 (qx_i, qy_i)，qx_i 表示像素点 q_i 在三维模型投影得到的参考二维图像坐标系下的X轴上的坐标，qy_i 表示其在Y轴上的坐标。因为前文得到的生物组织的二维图像与生物组织的三维模型投影得到的参考二维图像尺寸一致，并且重合，所以第一外轮廓的像素点集合与第二外轮廓的像素点集合在同一坐标系下，均以三维模型的虚拟窗口的左上角为坐标原点。Exemplarily, suppose the pixel set of the first outer contour is P = {p_i = (px_i, py_i), i = 1, 2, …, m}, where the two-dimensional coordinates of the i-th pixel are (px_i, py_i): px_i is the X-axis coordinate and py_i the Y-axis coordinate of pixel p_i in the two-dimensional image coordinate system, whose origin is the upper-left corner of the virtual window, and m is the number of pixels of the first outer contour. The corresponding pixel set of the second outer contour is Q = {q_i = (qx_i, qy_i), i = 1, 2, …, m}, where qx_i and qy_i are the X-axis and Y-axis coordinates of pixel q_i in the coordinate system of the reference two-dimensional image obtained by projecting the three-dimensional model. Because the two-dimensional image of the biological tissue obtained above has the same size as, and coincides with, the reference two-dimensional image projected from its three-dimensional model, the pixel sets of the first and second outer contours lie in the same coordinate system, both taking the upper-left corner of the virtual window of the three-dimensional model as the coordinate origin.

可以使用第一外轮廓的像素点的坐标和第二外轮廓的像素点的坐标计算每一个同一位置的像素点对之间的距离，得到第一外轮廓和第二外轮廓像素点对之间的距离为 d_i = (|px_i − qx_i|, |py_i − qy_i|)。具体可以实施为，先将所有像素点对在X轴上的坐标的差值相加，得到X轴上的像素距离误差 Σ_{i=1…m} |px_i − qx_i|，然后再除以像素点对的数量m，得到X轴上的平均像素距离误差；同样的，先将所有像素点对在Y轴上的坐标的差值相加，得到Y轴上的像素距离误差 Σ_{i=1…m} |py_i − qy_i|，然后再除以像素点对的数量m，得到Y轴上的平均像素距离误差，最后得到第一外轮廓和第二外轮廓的平均像素距离误差 Dist_pixel 为：The distance between each pixel pair describing the same position can be computed from the coordinates of the pixels of the first and second outer contours, giving the per-pair distance d_i = (|px_i − qx_i|, |py_i − qy_i|). Concretely, the absolute X-coordinate differences of all pixel pairs are first summed to obtain the pixel distance error on the X axis, Σ_{i=1…m} |px_i − qx_i|, and then divided by the number m of pixel pairs to obtain the average pixel distance error on the X axis; likewise, the absolute Y-coordinate differences are summed to obtain the pixel distance error on the Y axis, Σ_{i=1…m} |py_i − qy_i|, and divided by m to obtain the average pixel distance error on the Y axis. The average pixel distance error Dist_pixel of the first and second outer contours is therefore:

Dist_pixel = ( (1/m) · Σ_{i=1…m} |px_i − qx_i| , (1/m) · Σ_{i=1…m} |py_i − qy_i| )

由此，可以通过第一外轮廓和第二外轮廓每个像素对的坐标计算出第一外轮廓和第二外轮廓的平均像素距离误差，即可以确定第一外轮廓和第二外轮廓之间的像素距离。In this way, the average pixel distance error of the first and second outer contours can be calculated from the coordinates of each matched pixel pair, that is, the pixel distance between the first outer contour and the second outer contour can be determined.
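Steps 601 to 605 above can be sketched as follows. The function assumes the two contours are already ordered so that equal indices form the matched pixel pairs; the function and variable names are ours, not part of the embodiment.

```python
def mean_pixel_error(contour_a, contour_b):
    """Per-axis average pixel distance between matched contour point
    pairs: sum the absolute coordinate differences on each axis and
    divide by the number of pairs m (steps 601-605)."""
    m = len(contour_a)
    ex = sum(abs(ax - bx) for (ax, _), (bx, _) in zip(contour_a, contour_b)) / m
    ey = sum(abs(ay - by) for (_, ay), (_, by) in zip(contour_a, contour_b)) / m
    return ex, ey  # average pixel error on the X axis and on the Y axis
```

For instance, pairs {(0, 0)↔(1, 1), (2, 2)↔(3, 5)} give an average error of 1 pixel on X and 2 pixels on Y.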

在步骤204中,采用预先确定的单位像素距离和物理距离之间的对应关系,确定第一外轮廓和第二外轮廓之间的像素距离对应的物理距离作为融合误差。In step 204, using the predetermined correspondence between the unit pixel distance and the physical distance, the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour is determined as the fusion error.

在一种可能的实施方式中，为了得到第一外轮廓和第二外轮廓每个像素点对之间的像素距离对应的物理距离，需要得到虚拟窗口中二维图像中像素点对之间的单位像素距离代表的物理距离，因此本申请实施例中确定单位像素距离和物理距离之间的对应关系，具体可以实施为如图7所示的步骤：In a possible implementation, in order to obtain the physical distance corresponding to the pixel distance between each pixel pair of the first and second outer contours, the physical distance represented by a unit pixel distance between pixel pairs in the two-dimensional image in the virtual window must be obtained. The correspondence between unit pixel distance and physical distance is therefore determined in the embodiments of the present application, which can be implemented as the steps shown in FIG. 7:

在步骤701中,在世界坐标系中确定生物组织在三维模型中的三维坐标最小值和三维坐标最大值;In step 701, the minimum and maximum three-dimensional coordinates of the biological tissue in the three-dimensional model are determined in the world coordinate system;

在步骤702中,在虚拟相机的同一视角下,将三维坐标最小值和三维坐标最大值投影到二维平面,得到三维坐标最小值和三维坐标最大值对应的二维坐标;In step 702, under the same viewing angle of the virtual camera, the three-dimensional coordinate minimum value and the three-dimensional coordinate maximum value are projected to a two-dimensional plane to obtain the two-dimensional coordinates corresponding to the three-dimensional coordinate minimum value and the three-dimensional coordinate maximum value;

在步骤703中,基于世界坐标系中三维坐标最小值和三维坐标最大值的世界坐标,计算得到三维坐标最小值和三维坐标最大值之间的物理距离;In step 703, based on the world coordinates of the minimum three-dimensional coordinate and the maximum three-dimensional coordinate in the world coordinate system, calculate the physical distance between the minimum three-dimensional coordinate and the maximum three-dimensional coordinate;

在步骤704中，基于三维坐标最小值和三维坐标最大值对应的二维坐标，计算得到三维坐标最小值和三维坐标最大值对应的两个二维坐标之间的像素距离；In step 704, based on the two-dimensional coordinates corresponding to the minimum and maximum three-dimensional coordinates, the pixel distance between these two two-dimensional coordinates is calculated;

在步骤705中，基于三维坐标最小值和三维坐标最大值之间的物理距离和三维坐标最小值和三维坐标最大值对应的二维坐标之间的像素距离，确定二维坐标系中X轴上的单位像素距离对应的物理距离、以及Y轴上的单位像素距离对应的物理距离。In step 705, based on the physical distance between the minimum and maximum three-dimensional coordinates and the pixel distance between their corresponding two-dimensional coordinates, the physical distance corresponding to a unit pixel distance on the X axis and the physical distance corresponding to a unit pixel distance on the Y axis of the two-dimensional coordinate system are determined.

示例性的，由于本申请中生物组织的三维模型与生物组织的二维图像已经配准并融合显示，因此本申请实施例中借助渲染生物组织的三维模型的VTK(Visualization Toolkit，视觉化工具函式库)虚拟相机以及VTK坐标变换，将世界坐标系中的生物组织的三维模型中的像素点映射到二维坐标系中，然后找到世界坐标中三维坐标最小值的像素点和三维坐标最大值的像素点映射到二维坐标系后的坐标。例如可以确定渲染生物组织的三维模型在世界坐标系中的最小包围框，然后可以确定最小包围框中各个像素点的坐标，找出三维坐标最小值的像素点和三维坐标最大值的像素点。假设最小包围框的范围为[Min_x, Max_x, Min_y, Max_y, Min_z, Max_z]，则在世界坐标系中确定生物组织在三维模型中的三维坐标最小值和三维坐标最大值分别为 Min = [Min_x, Min_y, Min_z] 和 Max = [Max_x, Max_y, Max_z]。在虚拟相机的同一视角下，利用VTK坐标变换，将这两个像素点投影到二维平面，得到三维坐标最小值和三维坐标最大值对应的二维坐标为：Exemplarily, since the three-dimensional model of the biological tissue and its two-dimensional image have already been registered and displayed in fusion, the embodiments of the present application use the VTK (Visualization Toolkit) virtual camera that renders the three-dimensional model, together with VTK coordinate transformations, to map pixel points of the three-dimensional model from the world coordinate system into the two-dimensional coordinate system, and then find the coordinates, after this mapping, of the pixel point with the minimum three-dimensional coordinates and the pixel point with the maximum three-dimensional coordinates. For example, the minimum bounding box of the rendered three-dimensional model in the world coordinate system can be determined, the coordinates of each pixel point in the bounding box can then be determined, and the points with the minimum and maximum three-dimensional coordinates can be found. Assuming the range of the minimum bounding box is [Min_x, Max_x, Min_y, Max_y, Min_z, Max_z], the minimum and maximum three-dimensional coordinates of the biological tissue in the three-dimensional model in the world coordinate system are Min = [Min_x, Min_y, Min_z] and Max = [Max_x, Max_y, Max_z], respectively. Under the same view of the virtual camera, these two points are projected onto the two-dimensional plane using the VTK coordinate transformation, giving the two-dimensional coordinates corresponding to the minimum and maximum three-dimensional coordinates as:

Min_2D = (u_min, v_min), Max_2D = (u_max, v_max)

然后首先根据世界坐标系中的生物组织在三维模型中的三维坐标最小值和三维坐标最大值的两个像素点,使用公式(1)计算得到三维坐标最小值和三维坐标最大值之间的物理距离为:Then, according to the two pixel points of the minimum three-dimensional coordinate and the maximum three-dimensional coordinate of the biological tissue in the three-dimensional model in the world coordinate system, use formula (1) to calculate the physical distance between the minimum three-dimensional coordinate and the maximum three-dimensional coordinate. The distance is:

Dist_3D = sqrt((Max_x − Min_x)^2 + (Max_y − Min_y)^2 + (Max_z − Min_z)^2)    (1)

再根据三维坐标最小值和三维坐标最大值对应的二维坐标,使用公式(2)计算得到两个三维坐标最小值和三维坐标最大值对应的二维坐标之间的像素距离为:Then, according to the two-dimensional coordinates corresponding to the minimum three-dimensional coordinates and the maximum three-dimensional coordinates, use formula (2) to calculate the pixel distance between the two-dimensional coordinates corresponding to the two minimum three-dimensional coordinates and the maximum three-dimensional coordinates:

Dist_2D = sqrt((u_max − u_min)^2 + (v_max − v_min)^2)    (2)

最后根据公式(1)计算得到的三维坐标最小值和三维坐标最大值之间的物理距离和公式(2)计算得到的三维坐标最小值和三维坐标最大值对应的二维坐标之间的像素距离,使用公式(3)计算二维坐标系中X轴上的单位像素距离对应的物理距离、以及使用公式(4)计算Y轴上的单位像素距离对应的物理距离:Finally, the physical distance between the minimum three-dimensional coordinate and the maximum three-dimensional coordinate calculated according to formula (1) and the pixel distance between the two-dimensional coordinate corresponding to the minimum three-dimensional coordinate calculated by formula (2) and the maximum three-dimensional coordinate , use formula (3) to calculate the physical distance corresponding to the unit pixel distance on the X axis in the two-dimensional coordinate system, and use formula (4) to calculate the physical distance corresponding to the unit pixel distance on the Y axis:

pixelspacing_x = (Dist_3D / Dist_2D) × (|u_max − u_min| / Dist_2D)    (3)

pixelspacing_y = (Dist_3D / Dist_2D) × (|v_max − v_min| / Dist_2D)    (4)

其中，Dist_3D与Dist_2D分别为公式(1)得到的物理距离和公式(2)得到的像素距离，Dist_3D/Dist_2D 是三维坐标最小值的像素点和三维坐标最大值的像素点对应的两个二维坐标之间的直线距离上单位像素距离对应的实际物理距离，而 |u_max − u_min|/Dist_2D 表示该直线距离在二维坐标系下X轴上的占比，因此公式(3)就表示二维坐标系中X轴上的单位像素距离对应的物理距离；|v_max − v_min|/Dist_2D 表示该直线距离在二维坐标系下Y轴上的占比，因此公式(4)就表示二维坐标系中Y轴上的单位像素距离对应的物理距离。Here Dist_3D and Dist_2D are the physical distance of formula (1) and the pixel distance of formula (2), respectively. Dist_3D/Dist_2D is the actual physical distance per unit pixel along the straight line between the two two-dimensional coordinates corresponding to the pixel points with the minimum and maximum three-dimensional coordinates; |u_max − u_min|/Dist_2D is the proportion of that line falling on the X axis of the two-dimensional coordinate system, so formula (3) gives the physical distance corresponding to a unit pixel distance on the X axis; |v_max − v_min|/Dist_2D is the proportion falling on the Y axis, so formula (4) gives the physical distance corresponding to a unit pixel distance on the Y axis.

由此,可以确定单位像素距离和物理距离之间的对应关系,即得到像素点对之间的单位像素距离代表的物理距离为:Thus, the correspondence between the unit pixel distance and the physical distance can be determined, that is, the physical distance represented by the unit pixel distance between the pixel pairs is:

pixelspacing = (pixelspacing_x, pixelspacing_y)

由此，通过渲染生物组织的三维模型在世界坐标系中的最小包围框的两个端点以及坐标转换，可以计算得到虚拟窗口中二维图像中像素点对之间的单位像素距离代表的物理距离。In this way, by means of the two extreme points of the minimum bounding box of the rendered three-dimensional model of the biological tissue in the world coordinate system, together with the coordinate transformation, the physical distance represented by a unit pixel distance between pixel pairs in the two-dimensional image in the virtual window can be calculated.
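Steps 701 to 705 can be sketched as follows, with the axis weighting of formulas (3) and (4) implemented as described in the text. The symbol and function names are ours, and the projected two-dimensional coordinates are taken as given (the VTK projection itself is not reproduced here).

```python
import math

def unit_pixel_spacing(min3d, max3d, min2d, max2d):
    """Physical distance represented by one pixel along each image axis.
    min3d/max3d: world coordinates of the bounding-box extreme points;
    min2d/max2d: their projected 2D pixel coordinates (u, v)."""
    # Formula (1): physical length of the bounding-box diagonal.
    dist_3d = math.sqrt(sum((a - b) ** 2 for a, b in zip(max3d, min3d)))
    du = max2d[0] - min2d[0]
    dv = max2d[1] - min2d[1]
    # Formula (2): pixel length of the projected diagonal.
    dist_2d = math.hypot(du, dv)
    # Formulas (3)/(4): per-pixel physical distance along the diagonal,
    # weighted by each axis's share of the projected segment.
    spacing_x = (dist_3d / dist_2d) * (abs(du) / dist_2d)
    spacing_y = (dist_3d / dist_2d) * (abs(dv) / dist_2d)
    return spacing_x, spacing_y
```

For example, a 5 mm diagonal projected onto a 50-pixel segment of slope 4/3 yields spacings of 0.06 mm/pixel on X and 0.08 mm/pixel on Y.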

最后，本申请实施例中可以根据图6所示的步骤得到的第一外轮廓和第二外轮廓的平均像素距离误差Dist_pixel和根据图7所示的步骤得到的像素点对之间的单位像素距离代表的物理距离pixelspacing，确定第一外轮廓和第二外轮廓之间的像素距离对应的物理距离，具体可以实施为如图8所示的步骤：Finally, in the embodiments of the present application, the physical distance corresponding to the pixel distance between the first and second outer contours can be determined from the average pixel distance error Dist_pixel of the two contours obtained by the steps shown in FIG. 6 and the physical distance pixelspacing represented by a unit pixel distance between pixel pairs obtained by the steps shown in FIG. 7, which can be implemented as the steps shown in FIG. 8:

在步骤801中，将X轴上的平均像素距离误差乘以X轴上的单位像素距离对应的物理距离，得到第一外轮廓和第二外轮廓之间在X轴上的像素距离对应的物理距离；In step 801, the average pixel distance error on the X axis is multiplied by the physical distance corresponding to a unit pixel distance on the X axis, to obtain the physical distance corresponding to the pixel distance between the first and second outer contours on the X axis;

在步骤802中，将Y轴上的平均像素距离误差乘以Y轴上的单位像素距离对应的物理距离，得到第一外轮廓和第二外轮廓之间在Y轴上的像素距离对应的物理距离；In step 802, the average pixel distance error on the Y axis is multiplied by the physical distance corresponding to a unit pixel distance on the Y axis, to obtain the physical distance corresponding to the pixel distance between the first and second outer contours on the Y axis;

在步骤803中，基于第一外轮廓和第二外轮廓之间在X轴上的像素距离对应的物理距离和在Y轴上的像素距离对应的物理距离，得到第一外轮廓和第二外轮廓之间的像素距离对应的物理距离；In step 803, based on the physical distances corresponding to the pixel distances on the X axis and on the Y axis between the first and second outer contours, the physical distance corresponding to the pixel distance between the two contours is obtained;

在步骤804中,将第一外轮廓和第二外轮廓之间的像素距离对应的物理距离作为融合误差。In step 804, the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour is taken as the fusion error.

示例性的，根据图6所示的步骤得到的第一外轮廓和第二外轮廓的平均像素距离误差Dist_pixel和根据图7所示的步骤得到的像素点对之间的单位像素距离代表的物理距离pixelspacing，根据公式(5)计算出第一外轮廓和第二外轮廓之间的像素距离对应的物理距离，最后将计算得到的第一外轮廓和第二外轮廓之间的像素距离对应的物理距离作为融合误差。Exemplarily, from the average pixel distance error Dist_pixel of the first and second outer contours obtained by the steps shown in FIG. 6 and the physical distance pixelspacing represented by a unit pixel distance between pixel pairs obtained by the steps shown in FIG. 7, the physical distance corresponding to the pixel distance between the first and second outer contours is calculated according to formula (5), and this calculated physical distance is finally taken as the fusion error.

Dist_physical = sqrt((Dist_pixel[0] × pixelspacing_x)^2 + (Dist_pixel[1] × pixelspacing_y)^2)    (5)

其中，Dist_pixel[0]表示X轴上的平均像素距离误差，而Dist_pixel[1]表示Y轴上的平均像素距离误差，即 Dist_pixel[0] = (1/m)·Σ_{i=1…m} |px_i − qx_i|，Dist_pixel[1] = (1/m)·Σ_{i=1…m} |py_i − qy_i|，其中 (px_i, py_i) 和 (qx_i, qy_i) 分别为第一外轮廓和第二外轮廓中第 i 个匹配像素点的坐标，m为像素点对的数量。Here Dist_pixel[0] is the average pixel distance error on the X axis and Dist_pixel[1] the average pixel distance error on the Y axis, i.e. Dist_pixel[0] = (1/m)·Σ_{i=1…m} |px_i − qx_i| and Dist_pixel[1] = (1/m)·Σ_{i=1…m} |py_i − qy_i|, where (px_i, py_i) and (qx_i, qy_i) are the coordinates of the i-th matched pixel points of the first and second outer contours and m is the number of pixel pairs.

由此，通过计算二维图像中生物组织的第一外轮廓和生物组织的三维模型投影到二维平面得到的参考二维图像中生物组织的第二外轮廓的所有像素点对的平均像素距离误差，以及世界坐标系中像素点对之间的单位像素距离代表的物理距离，从而计算出第一外轮廓和第二外轮廓之间的像素距离对应的物理距离，在二维图像上给出视觉上的配准精度基础上，能够非常直接的给出三维模型和二维图像的融合误差，能够提升用户的体验。In this way, by computing the average pixel distance error over all pixel point pairs of the first outer contour of the biological tissue in the two-dimensional image and the second outer contour of the tissue in the reference two-dimensional image obtained by projecting its three-dimensional model onto the two-dimensional plane, together with the physical distance represented by a unit pixel distance derived from the world coordinate system, the physical distance corresponding to the pixel distance between the first and second outer contours is obtained. On top of the visual registration accuracy shown on the two-dimensional image, this gives the fusion error between the three-dimensional model and the two-dimensional image very directly, which can improve the user experience.
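The final conversion from average pixel error to physical fusion error, following steps 801 to 803, can be sketched as follows; the per-axis physical errors are combined with the Euclidean norm, and the function names are ours.

```python
import math

def fusion_error(dist_pixel, pixel_spacing):
    """Convert the per-axis average pixel errors to physical distances
    and combine them (steps 801-803): the result is the fusion error."""
    ex = dist_pixel[0] * pixel_spacing[0]  # physical error on the X axis
    ey = dist_pixel[1] * pixel_spacing[1]  # physical error on the Y axis
    return math.hypot(ex, ey)
```

For example, an average error of (3, 4) pixels at a spacing of 1 mm/pixel on each axis gives a fusion error of 5 mm, and (10, 10) pixels at the spacings (0.06, 0.08) mm/pixel gives 1 mm.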

Based on the foregoing description, an embodiment of the present application first determines the actual physical distance corresponding to a unit pixel distance; then registers the two-dimensional image and the three-dimensional model of the biological tissue under the viewing angle of a virtual camera using a registration algorithm and displays them fused; acquires the first outer contour of the biological tissue in the two-dimensional image and the second outer contour of the biological tissue in the reference two-dimensional image obtained by projecting the three-dimensional model of the biological tissue onto a two-dimensional plane; determines the pixel distance between the first outer contour and the second outer contour; and, using the predetermined correspondence between unit pixel distance and physical distance, determines the physical distance corresponding to that pixel distance as the fusion error. In this way, using the predetermined correspondence between unit pixel distance and physical distance together with the first outer contour of the biological tissue in the two-dimensional image and the second outer contour of the biological tissue in the two-dimensional projection image of its three-dimensional model, the present application can quantify fairly accurately the error between the two-dimensional image and the three-dimensional model of the same biological tissue, and improve the accuracy of the actual fusion error between the three-dimensional model of the biological tissue and the two-dimensional image of the biological tissue in a laparoscopic video, so that the fusion error between the three-dimensional model and the two-dimensional image can be given more precisely, improving the user experience.
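The contour-comparison step summarized above requires pairing up points of the two outer contours before a mean pixel error can be taken. A minimal sketch follows; the cost here is plain Euclidean distance and, because the example is tiny, the optimal pairing is found by brute force over all assignments, whereas a real implementation would use a neighbourhood-structure ("context") cost and the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`). All names are illustrative.

```python
# Illustrative sketch: pair points of a first and second outer contour by
# minimising the total assignment cost, then report the mean pixel error.
# Brute force over permutations stands in for the Hungarian algorithm here.
import math
from itertools import permutations

def match_contours(c1, c2):
    """c1, c2: equal-length lists of (x, y) contour pixels. Returns the
    assignment (tuple of c2 indices, one per c1 point) with minimal total
    cost, plus the mean matched pixel distance."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best = min(permutations(range(len(c2))),
               key=lambda perm: sum(dist(c1[i], c2[j]) for i, j in enumerate(perm)))
    mean_err = sum(dist(c1[i], c2[j]) for i, j in enumerate(best)) / len(c1)
    return best, mean_err

pairs, err = match_contours([(0, 0), (10, 0), (10, 10)],
                            [(11, 10), (1, 0), (11, 1)])
print(pairs)  # → (1, 2, 0): each point matched to its lowest-total-cost partner
```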

The electronic device 900 according to this embodiment of the present application is described below with reference to FIG. 9. The electronic device 900 shown in FIG. 9 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.

As shown in FIG. 9, the electronic device 900 takes the form of a general-purpose electronic device. Components of the electronic device 900 may include, but are not limited to: the above-mentioned at least one processor 901, the above-mentioned at least one memory 902, and a bus 903 connecting different system components (including the memory 902 and the processor 901).

The bus 903 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.

The memory 902 may include readable media in the form of volatile memory, such as random access memory (RAM) 9021 and/or cache memory 9022, and may further include read-only memory (ROM) 9023.

The memory 902 may also include a program/utility 9025 having a set of (at least one) program modules 9024, such program modules 9024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.

The electronic device 900 may also communicate with one or more external devices 904 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 900 to communicate with one or more other electronic devices. Such communication may take place through an input/output (I/O) interface 905. Furthermore, the electronic device 900 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 906. As shown, the network adapter 906 communicates with the other modules of the electronic device 900 via the bus 903. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.

In an exemplary embodiment, a computer-readable storage medium including instructions is also provided, such as the memory 902 including instructions, which can be executed by the processor 901 to complete the above method for determining the fusion error of a three-dimensional model and a two-dimensional image. Optionally, the storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

In an exemplary embodiment, a computer program product is also provided, including a computer program that, when executed by the processor 901, implements any one of the methods for determining the fusion error of a three-dimensional model and a two-dimensional image provided by the present application.

In an exemplary embodiment, various aspects of the method for determining the fusion error of a three-dimensional model and a two-dimensional image provided by the present application may also be implemented in the form of a program product including program code; when the program product runs on a computer device, the program code causes the computer device to execute the steps of the methods for determining the fusion error of a three-dimensional model and a two-dimensional image according to the various exemplary embodiments of the present application described above in this specification.

The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

The program product for the method for determining the fusion error of a three-dimensional model and a two-dimensional image according to the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on an electronic device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in conjunction with, an instruction execution system, apparatus, or device.

A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus, or device.

Program code contained on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, RF, and the like, or any suitable combination of the foregoing.

Program code for performing the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's electronic device, partly on the user's device, as a stand-alone software package, partly on the user's electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In cases involving a remote electronic device, the remote electronic device may be connected to the user's electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the Internet using an Internet service provider).

It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, this division is merely exemplary and not mandatory. In fact, according to the embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by multiple units.

Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve the desired results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution.

Similar parts among the embodiments provided in the present application may be referred to one another. The specific implementations provided above are merely several examples under the general concept of the present application and do not constitute a limitation on the protection scope of the present application. For those skilled in the art, any other implementation expanded from the solution of the present application without creative effort falls within the protection scope of the present application.

Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Obviously, those skilled in the art can make various changes and modifications to the present application without departing from the spirit and scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to include such changes and modifications.

Claims (10)

1. A method for determining the fusion error of a three-dimensional model and a two-dimensional image, wherein the method comprises: registering the two-dimensional image and the three-dimensional model of a biological tissue under the viewing angle of a virtual camera using a registration algorithm and displaying them fused; acquiring the first outer contour of the biological tissue in the two-dimensional image, and acquiring the second outer contour of the biological tissue in the reference two-dimensional image obtained by projecting the three-dimensional model of the biological tissue onto a two-dimensional plane; determining the pixel distance between the first outer contour and the second outer contour; and using the predetermined correspondence between unit pixel distance and physical distance, determining the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour as the fusion error.

2. The method according to claim 1, wherein the determining the pixel distance between the first outer contour and the second outer contour comprises: determining, based on the first outer contour and the second outer contour, the pixel point pairs used to describe the same position in the first outer contour and the second outer contour; and determining, based on the pixel point pairs, the average pixel distance error of the first outer contour and the second outer contour as the pixel distance between the first outer contour and the second outer contour.

3.
The method according to claim 2, wherein the determining, based on the first outer contour and the second outer contour, the pixel point pairs used to describe the same position in the first outer contour and the second outer contour comprises: calculating the context information corresponding to each pixel point in the first outer contour and the second outer contour, the context information being the neighborhood structure of the pixel point; calculating the cost value of any two pixel points in the first outer contour and the second outer contour; and using the Hungarian algorithm to determine a group of pixel point pairs with the lowest overall cost value between the first outer contour and the second outer contour, wherein the group of pixel point pairs includes a plurality of pixel point pairs at the same position.

4.
The method according to claim 2, wherein the determining, based on the pixel point pairs, the average pixel distance error of the first outer contour and the second outer contour comprises: determining, in the same coordinate system, the coordinates of each pixel point in the first outer contour and the second outer contour, the same coordinate system including an X-axis coordinate and a Y-axis coordinate; subtracting the X-axis coordinates of each pixel point pair in the first outer contour and the second outer contour to obtain the pixel distance errors of all pixel point pairs on the X axis; dividing the pixel distance errors of all pixel point pairs on the X axis by the number of pixel point pairs on the X axis to obtain the average pixel distance error on the X axis; subtracting the Y-axis coordinates of each pixel point pair in the first outer contour and the second outer contour to obtain the pixel distance errors of all pixel point pairs on the Y axis; and dividing the pixel distance errors of all pixel point pairs on the Y axis by the number of pixel point pairs on the Y axis to obtain the average pixel distance error on the Y axis.

5.
The method according to claim 4, wherein the determining the correspondence between unit pixel distance and physical distance specifically comprises: determining, in the world coordinate system, the three-dimensional coordinate minimum and the three-dimensional coordinate maximum of the biological tissue in the three-dimensional model; projecting, under the same viewing angle of the virtual camera, the three-dimensional coordinate minimum and the three-dimensional coordinate maximum onto a two-dimensional plane to obtain the two-dimensional coordinates corresponding to the three-dimensional coordinate minimum and the three-dimensional coordinate maximum; calculating, based on the world coordinates of the three-dimensional coordinate minimum and the three-dimensional coordinate maximum, the physical distance between the three-dimensional coordinate minimum and the three-dimensional coordinate maximum; calculating, based on the corresponding two-dimensional coordinates, the pixel distance between the two-dimensional coordinates corresponding to the three-dimensional coordinate minimum and the three-dimensional coordinate maximum; and determining, based on that physical distance and that pixel distance, the physical distance corresponding to the unit pixel distance on the X axis and the physical distance corresponding to the unit pixel distance on the Y axis in the two-dimensional coordinate system.

6. The method according to claim 5, wherein the determining the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour specifically comprises: multiplying the average pixel distance error on the X axis by the physical distance corresponding to the unit pixel distance on the X axis to obtain the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour on the X axis; multiplying the average pixel distance error on the Y axis by the physical distance corresponding to the unit pixel distance on the Y axis to obtain the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour on the Y axis; obtaining, based on the physical distances corresponding to the pixel distances on the X axis and on the Y axis between the first outer contour and the second outer contour, the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour; and using the physical distance corresponding to the pixel distance between the first outer contour and the second outer contour as the fusion error.

7.
The method according to claim 1, wherein the acquiring the first outer contour of the biological tissue in the two-dimensional image, and acquiring the second outer contour of the biological tissue in the reference two-dimensional image obtained by projecting the three-dimensional model of the biological tissue onto a two-dimensional plane, specifically comprises: acquiring the first outer contour of the biological tissue in the two-dimensional image using an artificial intelligence algorithm; creating a virtual window whose video playback size is consistent with the playback size of the two-dimensional image, and, using the virtual camera on the virtual window, projecting the three-dimensional model of the biological tissue onto a two-dimensional plane under the same viewing angle of the virtual camera; acquiring a two-dimensional image of the two-dimensional plane to obtain the reference two-dimensional image; and performing edge detection on the biological tissue in the reference two-dimensional image to obtain the second outer contour of the biological tissue.

8. The method according to claim 7, wherein before the determining the pixel distance between the first outer contour and the second outer contour, the method further comprises: down-sampling the first outer contour and the second outer contour using a curve data compression algorithm to obtain a first outer contour and a second outer contour with the same number of pixel points.

9.
An electronic device, comprising a processor and a memory: the memory being configured to store a computer program executable by the processor; and the processor being connected to the memory and configured to execute the instructions to implement the method for determining the fusion error of a three-dimensional model and a two-dimensional image according to any one of claims 1-8.

10. A computer-readable storage medium, wherein, when instructions in the computer-readable storage medium are executed by an electronic device, the electronic device is enabled to implement the method for determining the fusion error of a three-dimensional model and a two-dimensional image according to any one of claims 1-8.
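As a hedged illustration of how the unit-pixel-to-physical correspondence of claim 5 might be computed, the sketch below projects the model's three-dimensional coordinate minimum and maximum onto the two-dimensional plane and divides physical extent by pixel extent per axis. The per-axis decomposition and the toy orthographic projection are simplifying assumptions, not details from the patent, which projects through the virtual camera.

```python
# Hypothetical sketch: derive the physical distance represented by one pixel
# on each axis from the 3D coordinate extremes of the model and their 2D
# projections. The orthographic "camera" below is a stand-in assumption.
def unit_pixel_scale(p_min, p_max, project):
    """p_min/p_max: (x, y, z) world coordinates; project: 3D -> (u, v) pixels.
    Returns (physical units per pixel on X, physical units per pixel on Y)."""
    u0, v0 = project(p_min)
    u1, v1 = project(p_max)
    phys_x = abs(p_max[0] - p_min[0])   # physical extent along world X
    phys_y = abs(p_max[1] - p_min[1])   # physical extent along world Y
    return phys_x / abs(u1 - u0), phys_y / abs(v1 - v0)

# toy orthographic projection: 4 pixels per world unit on both axes
scale = unit_pixel_scale((0.0, 0.0, 0.0), (100.0, 50.0, 0.0),
                         lambda p: (4.0 * p[0], 4.0 * p[1]))
print(scale)  # → (0.25, 0.25) world units per pixel on each axis
```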
CN202210064500.6A 2022-01-20 2022-01-20 Method for determining fusion error of three-dimensional model and two-dimensional image and electronic equipment Pending CN114494374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210064500.6A CN114494374A (en) 2022-01-20 2022-01-20 Method for determining fusion error of three-dimensional model and two-dimensional image and electronic equipment


Publications (1)

Publication Number Publication Date
CN114494374A true CN114494374A (en) 2022-05-13

Family

ID=81471804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210064500.6A Pending CN114494374A (en) 2022-01-20 2022-01-20 Method for determining fusion error of three-dimensional model and two-dimensional image and electronic equipment

Country Status (1)

Country Link
CN (1) CN114494374A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842179A (en) * 2022-05-20 2022-08-02 青岛海信医疗设备股份有限公司 Method for matching three-dimensional organ model with intraoperative organ image and electronic equipment
CN115775348A (en) * 2022-11-16 2023-03-10 武汉中海庭数据技术有限公司 An overall confidence evaluation method and system for geometric shape fusion
CN116993664A (en) * 2023-06-13 2023-11-03 浪潮软件集团有限公司 Cable shielding layer coverage rate detection method and device, medium and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096901A (en) * 2009-11-17 2011-06-15 精工爱普生株式会社 Context constrained novel view interpolation
WO2015157543A2 (en) * 2014-04-09 2015-10-15 Virginia Tech Intellectual Properties, Inc. Four-dimensional (4d) combustion monitoring using endoscopic and tomographic techniques
CN107534789A (en) * 2015-06-25 2018-01-02 松下知识产权经营株式会社 Image synchronization device and image synchronization method
CN109523635A (en) * 2018-11-01 2019-03-26 深圳蒜泥科技投资管理合伙企业(有限合伙) A kind of non-rigid reconstruction of 3D anthropometric scanning and measurement method and device
CN113283374A (en) * 2021-06-09 2021-08-20 广东中运信息科技有限公司 Face recognition station passenger state monitoring and early warning system based on Internet of things




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination