CN104102357A - Method and device for checking 3D (three-dimensional) models in virtual scenes
- Publication number
- CN104102357A (application CN201410317624.6A)
- Authority
- CN
- China
- Prior art keywords
- model
- control line
- bounding volume
- angle
- axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to the field of model detection for human-computer interaction and provides a method and device for detecting 3D models in a virtual scene. The method includes: mapping a virtual control line in the virtual scene in real time according to the spatial attitude angle of an interactive pen; constructing, for each 3D model in the virtual scene, a bounding volume that encloses the corresponding 3D model; judging whether the real-time mapped virtual control line intersects the bounding volume of a 3D model; when the virtual control line intersects the bounding volume of a 3D model, judging whether the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume; and, when it does intersect such a triangular mesh, determining that the virtual control line intersects that 3D model, i.e. that the 3D model has been detected. Embodiments of the present invention can improve the detection accuracy of 3D models.
Description
Technical Field
The present invention belongs to the field of model detection for human-computer interaction, and in particular relates to a method and device for detecting 3D models in a virtual scene.
Background Art
With the deepening research and application of virtual reality technology and the further development of three-dimensional visualization and human-computer interaction technology, people can render and design all kinds of virtual reality scenes and exert the necessary control over the 3D models in a virtual scene. Before a 3D model in a virtual scene can be controlled, however, it must first be detected.
At present, existing 3D model detection methods for virtual scenes mainly work as follows: a ray cluster is built, the intersection between the ray cluster and a detection volume is tested, and when some ray is found to intersect a detection volume, the hit 3D model is looked up through the mapping between detection volumes and models. Because the model is detected only through the intersection of the ray cluster with the detection volume, the detection accuracy is too low.
Summary of the Invention
An embodiment of the present invention provides a method for detecting 3D models in a virtual scene, aiming to solve the problem that existing methods detect 3D models with too little accuracy.
An embodiment of the present invention is realized as a method for detecting 3D models in a virtual scene, the method comprising the following steps:
mapping a virtual control line in the virtual scene in real time according to the spatial attitude angle of the interactive pen;
constructing, for each 3D model in the virtual scene, a bounding volume that encloses the corresponding 3D model, each 3D model being composed of a plurality of triangular meshes;
judging whether the real-time mapped virtual control line intersects the bounding volume of a 3D model;
when the virtual control line intersects the bounding volume of a 3D model, judging whether the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume;
when the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume, determining that the virtual control line intersects that 3D model, so as to determine that the 3D model has been detected.
Another object of the embodiments of the present invention is to provide a device for detecting 3D models in a virtual scene, the device comprising:
a virtual control line generating unit, configured to map a virtual control line in the virtual scene in real time according to the spatial attitude angle of the interactive pen;
a model bounding volume construction unit, configured to construct, for each 3D model in the virtual scene, a bounding volume that encloses the corresponding 3D model, each 3D model being composed of a plurality of triangular meshes;
a model bounding volume intersection detection unit, configured to judge whether the real-time mapped virtual control line intersects the bounding volume of a 3D model;
a model triangular mesh intersection detection unit, configured to judge, when the virtual control line intersects the bounding volume of a 3D model, whether the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume;
a model detection determination unit, configured to determine, when the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume, that the virtual control line intersects that 3D model, so as to determine that the 3D model has been detected.
In the embodiments of the present invention, the bounding volume of a 3D model is tested first, and only after the direction vector mapped into the virtual scene by the rotated interactive pen is judged to intersect the bounding volume is it tested against the individual triangular meshes of that 3D model. This reduces the amount of computation and therefore effectively improves the speed and accuracy of detecting 3D models in a virtual environment scene.
Brief Description of the Drawings
Fig. 1 is a flowchart of a method for detecting 3D models in a virtual scene provided by the first embodiment of the present invention;
Fig. 2 is a schematic diagram of the attitude angles provided by the first embodiment of the present invention;
Fig. 3 is a structural diagram of a device for detecting 3D models in a virtual scene provided by the second embodiment of the present invention;
Fig. 4 is a structural diagram of another device for detecting 3D models in a virtual scene provided by the second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and are not intended to limit it.
In the embodiments of the present invention, a virtual control line is mapped into the virtual scene in real time according to the spatial attitude angle of the interactive pen, the bounding volume of each 3D model is determined from the stored vertex data of the model, and it is then judged whether the virtual control line intersects the bounding volume of a 3D model. When it does, it is judged whether the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume; if so, it is determined that the 3D model has been detected.
The technical solution of the present invention is illustrated below by means of specific embodiments.
Embodiment 1:
Fig. 1 shows the flowchart of a method for detecting 3D models in a virtual scene provided by the first embodiment of the present invention, described in detail as follows.
Step S11: mapping a virtual control line in the virtual scene in real time according to the spatial attitude angle of the interactive pen.
In this step, the interactive pen is a pen-shaped operating tool used when the user interacts with a smart terminal; by changing the attitude of the interactive pen, the user interacts with the 3D models in the smart terminal.
The attitude angles of the interactive pen comprise its pitch angle, roll angle and yaw angle. Referring to Fig. 2, the coordinate system formed by the Xg, Yg and Zg axes is the ground coordinate system: the Zg axis is perpendicular to the ground and points toward the centre of the earth, and the Yg axis lies in the horizontal plane perpendicular to the Xg axis, its direction determined by the right-hand rule. The coordinate system formed by the Xb, Yb and Zb axes is the interactive pen coordinate system: the Xb axis lies in the pen's plane, parallel to the pen's design axis; the Yb axis is perpendicular to the pen's plane of symmetry and points to the right of the pen; the Zb axis lies in the pen's plane of symmetry, perpendicular to the Xb axis and pointing downward from the pen. The pitch angle is the angle between the Xb axis of the pen coordinate system and the horizontal plane; it is positive when the positive half of the Xb axis is above the horizontal plane through the coordinate origin and negative otherwise. The yaw angle is the angle between the projection of the Xb axis onto the horizontal plane and the Xg axis of the ground coordinate system; it is positive when the Xg axis rotates counterclockwise to reach that projection and negative otherwise. The roll angle is the angle between the Zb axis of the pen coordinate system and the vertical plane through the Xb axis; it is positive when the pen rolls to the right and negative otherwise.
Preferably, before step S11 maps a virtual control line into the virtual scene in real time according to the spatial attitude angle of the interactive pen, the following steps are performed:
A1. Determine the initial pitch and roll angles. In this step, the initial pitch and roll angles are determined by the three-axis gyroscope and three-axis accelerometer mounted on the interactive pen.
A2. Iteratively determine the final pitch and roll angles with a Kalman filter starting from the initial pitch and roll angles. A gyroscope is accurate but drifts over time, whereas an accelerometer has poor dynamic accuracy but no long-term drift; to combine the strengths of both, their data are fused with a Kalman filter to obtain stable and accurate pitch and roll angles. In this step, the initial pitch and roll angles are expressed as a quaternion and used as the state of the Kalman filter. The attitude kinematics equation serves as the state-transition equation of the filter and the acceleration information as its observation; iterative Kalman-filter computation then updates the initial pitch and roll angles into accurate, stable values. A simplified sketch of this fusion follows.
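As an illustration of this fusion step, the sketch below implements a heavily simplified, per-angle Kalman filter in C++ rather than the quaternion-state filter described above: each angle is predicted from the gyroscope rate and corrected with the accelerometer-derived angle. All variable names, noise parameters and sensor samples are illustrative assumptions, not values from the patent.

```cpp
#include <cmath>
#include <cstdio>

// One scalar Kalman filter per angle: the state is the angle itself,
// predicted with the gyro rate and corrected with the accelerometer angle.
struct AngleKalman {
    double x = 0.0;      // estimated angle (rad)
    double p = 1.0;      // estimate covariance
    double q = 0.001;    // process noise (models gyro drift)
    double r = 0.03;     // measurement noise (accelerometer jitter)

    double step(double gyroRate, double accelAngle, double dt) {
        // Predict: integrate the gyro rate (1-D attitude kinematics).
        x += gyroRate * dt;
        p += q;
        // Update: blend in the accelerometer observation.
        double k = p / (p + r);          // Kalman gain
        x += k * (accelAngle - x);
        p *= (1.0 - k);
        return x;
    }
};

int main() {
    AngleKalman pitchKf, rollKf;
    // Illustrative sensor samples: gyro rates (rad/s) and accelerations (g).
    double gx = 0.01, gy = -0.02;            // roll and pitch rates
    double ax = 0.05, ay = -0.10, az = 0.99; // accelerometer
    double dt = 0.01;                        // 100 Hz sample period

    // Accelerometer-only attitude (valid when the pen is not accelerating).
    double accelPitch = std::atan2(-ax, std::sqrt(ay * ay + az * az));
    double accelRoll  = std::atan2(ay, az);

    double pitch = pitchKf.step(gy, accelPitch, dt);
    double roll  = rollKf.step(gx, accelRoll, dt);
    std::printf("pitch=%.4f rad, roll=%.4f rad\n", pitch, roll);
    return 0;
}
```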
A quaternion consists of a real part plus the three elements i, j, k, which satisfy i^2 = j^2 = k^2 = ijk = -1.
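For reference, a unit quaternion representing a rotation by an angle α about a unit axis u can be written as below; composing the roll and pitch rotations in this form is one possible way (an assumption here, since the convention is not spelled out in this text) to build the initial filter state:

```latex
% A unit quaternion encodes a rotation by angle \alpha about a unit axis
% u = (u_x, u_y, u_z):
q \;=\; \cos\tfrac{\alpha}{2} \;+\; \bigl(u_x\, i + u_y\, j + u_z\, k\bigr)\sin\tfrac{\alpha}{2},
\qquad \lVert q \rVert = 1 .
% Composing the roll rotation (about x) and the pitch rotation (about y)
% gives one possible initial filter state (yaw taken as 0):
q_0 \;=\; q_{\text{pitch}} \otimes q_{\text{roll}} .
```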
A3. Determine the magnetic field components in the horizontal plane from the final pitch and roll angles and from the magnetic field components along the x, y and z axes of the interactive pen's coordinate system. In this step, the components along the x, y and z axes of the pen's coordinate system are obtained from the three-axis magnetometer mounted on the pen. Let the components measured by the magnetometer be hx, hy, hz, the final pitch angle be θ and the roll angle be γ. Rotating the pen's coordinate system xyz about the x axis and then about the y axis makes its xoy plane coincide with the horizontal plane XOY of the ground coordinate system, so the following equation holds:

[HX, HY, HZ]^T = Rpitch · Rroll · [hx, hy, hz]^T

where Rpitch is the rotation matrix obtained by rotating the pen's coordinate system xyz about the y axis by the angle θ, and Rroll is the rotation matrix obtained by rotating it about the x axis by the angle γ. Expanding this equation gives:

HX = hx·cosθ + hy·sinγ·sinθ + hz·cosγ·sinθ

HY = hy·cosγ - hz·sinγ

HZ = -hx·sinθ + hy·sinγ·cosθ + hz·cosγ·cosθ

Here "·" denotes multiplication; HX and HY are obtained from the three formulas above.
A4. Determine the yaw angle from the magnetic field components in the horizontal plane:

yaw angle = arctan(HY / HX),

where HX and HY are the horizontal-plane magnetic field components in the ground coordinate system.
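A minimal sketch of steps A3 and A4, assuming the expansion formulas above (hx, hy, hz from the magnetometer, θ and γ the filtered pitch and roll). atan2 is used instead of a plain arctangent only to resolve the quadrant, and the function name and sample values are illustrative:

```cpp
#include <cmath>
#include <cstdio>

// Tilt-compensate the magnetometer reading with the filtered pitch (theta)
// and roll (gamma), then take the yaw angle in the horizontal plane.
double yawFromMagnetometer(double hx, double hy, double hz,
                           double theta /*pitch*/, double gamma /*roll*/) {
    // Horizontal-plane components (expansion of Rpitch * Rroll * h).
    double HX = hx * std::cos(theta)
              + hy * std::sin(gamma) * std::sin(theta)
              + hz * std::cos(gamma) * std::sin(theta);
    double HY = hy * std::cos(gamma) - hz * std::sin(gamma);
    // The text uses arctan(HY / HX); atan2 additionally resolves the quadrant.
    return std::atan2(HY, HX);
}

int main() {
    double yaw = yawFromMagnetometer(0.32, -0.05, 0.41, 0.10, -0.05);
    std::printf("yaw = %.4f rad\n", yaw);
    return 0;
}
```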
After the spatial attitude angles of the interactive pen are determined, mapping a virtual control line into the virtual scene in real time according to those attitude angles specifically comprises the following steps:
B1. Determine the rotation matrix of the interactive pen from its attitude angles. Specifically, B11: determine the matrix obtained by rotating the pen's coordinate system about the y axis by the angle θ, where θ is the pitch angle; the matrices for rotating about the z axis by the yaw angle α and about the x axis by the roll angle γ are determined likewise, and together they give the pen's rotation matrix R.
B2. Render the virtual control line corresponding to the rotated pen in the virtual scene in real time, according to the pen's rotation matrix, the preset origin of the virtual scene and the preset initial direction vector of the pen. In this step, if the preset initial direction vector of the pen is d0 and R is the rotation matrix, then the direction vector mapped into the virtual scene after the pen rotates (i.e. the real-time mapped virtual control line) is d = R·d0, where "·" denotes the product of the rotation matrix and the vector. Once the origin of the virtual scene is fixed, the position of this direction vector in the scene is determined. Specifically, at the preset origin of the virtual scene, the direction vector corresponding to the rotated pen is drawn as a virtual control line of a specified length. The graphics rendering tool OpenGL can be used to draw a control line of a certain length from the preset origin along this direction vector, so that the spatial attitude of the pen is presented in real time through the virtual control line and the user can intuitively see the pen's current state. A sketch of this mapping is given below.
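A sketch of steps B1 and B2 under stated assumptions: the three axis rotations are composed as R = Rz(α)·Ry(θ)·Rx(γ), which is one plausible choice since the exact order and sign conventions are not fixed in the text here, and d0, the origin O and the line length are illustrative values:

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

Mat3 rotX(double g) { return {{{1, 0, 0}, {0, std::cos(g), -std::sin(g)}, {0, std::sin(g), std::cos(g)}}}; }
Mat3 rotY(double t) { return {{{std::cos(t), 0, std::sin(t)}, {0, 1, 0}, {-std::sin(t), 0, std::cos(t)}}}; }
Mat3 rotZ(double a) { return {{{std::cos(a), -std::sin(a), 0}, {std::sin(a), std::cos(a), 0}, {0, 0, 1}}}; }

Mat3 mul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
    return C;
}

Vec3 mul(const Mat3& A, const Vec3& v) {
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) r[i] += A[i][k] * v[k];
    return r;
}

int main() {
    double theta = 0.2, alpha = 0.5, gamma = -0.1;  // pitch, yaw, roll (rad)
    // One plausible composition: R = Rz(alpha) * Ry(theta) * Rx(gamma).
    Mat3 R = mul(rotZ(alpha), mul(rotY(theta), rotX(gamma)));

    Vec3 d0 = {1.0, 0.0, 0.0};      // preset initial direction of the pen
    Vec3 d = mul(R, d0);            // direction of the virtual control line

    // The control line is then drawn from the scene origin O along d for a
    // fixed length (e.g. with OpenGL line primitives).
    Vec3 O = {0.0, 0.0, 0.0};
    double length = 10.0;
    Vec3 end = {O[0] + length * d[0], O[1] + length * d[1], O[2] + length * d[2]};
    std::printf("line end: (%.3f, %.3f, %.3f)\n", end[0], end[1], end[2]);
    return 0;
}
```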
Step S12: constructing, for each 3D model in the virtual scene, a bounding volume that encloses the corresponding 3D model, each 3D model being composed of a plurality of triangular meshes.
In this step, since every 3D model is composed of multiple triangular meshes, the vertex data of the 3D model is the vertex data of the triangular meshes that make it up. To construct the bounding volume of a 3D model, the vertex data of the model are compared to find the maximum and minimum vertex coordinates of the model on the x axis, on the y axis and on the z axis. Once these six coordinates are determined, they serve as the vertex coordinates of the model's bounding volume. A sketch of this construction follows.
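A minimal sketch of the bounding volume construction from the per-axis extreme vertex coordinates; the types and function names are illustrative:

```cpp
#include <algorithm>
#include <cfloat>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

// Axis-aligned bounding volume described, as in the text, by the six extreme
// vertex coordinates of the model on the x, y and z axes.
struct BoundingBox {
    double minX =  DBL_MAX, minY =  DBL_MAX, minZ =  DBL_MAX;
    double maxX = -DBL_MAX, maxY = -DBL_MAX, maxZ = -DBL_MAX;
};

BoundingBox buildBoundingBox(const std::vector<Vec3>& vertices) {
    BoundingBox b;
    for (const Vec3& v : vertices) {     // vertices of all triangular meshes
        b.minX = std::min(b.minX, v.x);  b.maxX = std::max(b.maxX, v.x);
        b.minY = std::min(b.minY, v.y);  b.maxY = std::max(b.maxY, v.y);
        b.minZ = std::min(b.minZ, v.z);  b.maxZ = std::max(b.maxZ, v.z);
    }
    return b;
}

int main() {
    std::vector<Vec3> verts = {{0, 0, 0}, {1, 2, 0.5}, {-1, 0.3, 2}};
    BoundingBox b = buildBoundingBox(verts);
    std::printf("x:[%g,%g] y:[%g,%g] z:[%g,%g]\n",
                b.minX, b.maxX, b.minY, b.maxY, b.minZ, b.maxZ);
    return 0;
}
```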
Step S13: judging whether the real-time mapped virtual control line intersects the bounding volume of a 3D model.
The step of judging whether the real-time mapped virtual control line intersects the bounding volume of a 3D model specifically comprises:
C1. Determine the function f(x, y, z) corresponding to the virtual control line; the components of any point of f(x, y, z) on the x, y and z axes are fx(x, y, z), fy(x, y, z) and fz(x, y, z). In this step, the function f(x, y, z) corresponding to the virtual control line passes through the preset origin of the virtual scene.
C2. Judge whether the components fx(x, y, z), fy(x, y, z), fz(x, y, z) of some point of f(x, y, z) all satisfy the following conditions:
min x ≤ fx(x, y, z) ≤ max x;
min y ≤ fy(x, y, z) ≤ max y;
min z ≤ fz(x, y, z) ≤ max z;
where min x, max x, min y, max y, min z, max z are the vertex coordinates of the bounding volume, i.e. the minimum and maximum vertex coordinates of the 3D model on the x axis, on the y axis and on the z axis, respectively.
C3. If the components fx(x, y, z), fy(x, y, z), fz(x, y, z) of some point of f(x, y, z) all satisfy the above conditions, it is determined that the virtual control line intersects the bounding volume of the 3D model; otherwise, it is determined that they do not intersect. In this step, once the virtual control line is detected to intersect the bounding volume of one 3D model, the possibility that it intersects the remaining 3D models can be excluded, so the bounding volumes of the other models need not be tested, which greatly speeds up the detection of 3D models. Moreover, because the intersection of the control line and a bounding volume is judged from the coordinates of the control line and of the bounding volume, the result is not affected by whether the 3D model is occluded, which improves the detection efficiency for occluded 3D models in the virtual scene. A sketch of this test is given after this step.
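The condition in C2 and C3 asks whether some point of the parametric line O + D·t lies inside the box on all three axes. One common way to evaluate this is the slab method, shown below as a stand-in implementation (the text does not prescribe a particular evaluation procedure); names and the parallel-axis tolerance are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>
#include <utility>

struct Vec3 { double x, y, z; };
struct BoundingBox { double minX, maxX, minY, maxY, minZ, maxZ; };

// Checks whether some point of the line O + D*t satisfies
// min <= component <= max on all three axes (slab method).
bool lineIntersectsBox(const Vec3& O, const Vec3& D, const BoundingBox& b) {
    double tEnter = -std::numeric_limits<double>::infinity();
    double tExit  =  std::numeric_limits<double>::infinity();
    const double o[3]  = {O.x, O.y, O.z};
    const double d[3]  = {D.x, D.y, D.z};
    const double lo[3] = {b.minX, b.minY, b.minZ};
    const double hi[3] = {b.maxX, b.maxY, b.maxZ};

    for (int i = 0; i < 3; ++i) {
        if (std::abs(d[i]) < 1e-12) {
            // Line is parallel to this slab: reject if the origin lies outside it.
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
        } else {
            double t1 = (lo[i] - o[i]) / d[i];
            double t2 = (hi[i] - o[i]) / d[i];
            if (t1 > t2) std::swap(t1, t2);
            tEnter = std::max(tEnter, t1);
            tExit  = std::min(tExit, t2);
            if (tEnter > tExit) return false;  // per-axis intervals do not overlap
        }
    }
    return true;
}

int main() {
    BoundingBox box{-1, 1, -1, 1, -1, 1};
    Vec3 O{-5, 0.2, 0.3}, D{1, 0, 0};
    std::printf("intersects: %d\n", lineIntersectsBox(O, D, box));
    return 0;
}
```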
Step S14: when the virtual control line intersects the bounding volume of a 3D model, judging whether the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume.
Once the control line is determined to intersect the bounding volume of some 3D model, the concrete intersection test between the control line and the triangular meshes of that 3D model is carried out next.
The step of judging, when the virtual control line intersects the bounding volume of a 3D model, whether the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume specifically comprises:
Assume that the parametric equation of the triangle corresponding to any triangular mesh of the 3D model is (1-u-v)V0 + uV1 + vV2, where (1-u-v), u, v are the weights of the triangle's three vertices and V0, V1, V2 are the coordinates of its three vertices.
From the preset origin of the virtual scene, the virtual control line and the three vertex coordinates of any triangular mesh of the 3D model, judge whether u and v satisfy the conditions u ≥ 0, v ≥ 0, u + v ≤ 1. If they do, it is determined that the virtual control line intersects that triangular mesh of the 3D model corresponding to the intersected bounding volume; otherwise, it is determined that they do not intersect.
Assume that the preset origin of the virtual scene is O and the direction of the virtual control line is D; the control line can then be expressed as O + D·t (this expression, like the f(x, y, z) in C1 above, represents the control line), with t ≠ 0. Since the vertices of every triangular mesh of the 3D model are stored in a vertex array, each triangular mesh can be tested for intersection in turn, in the storage order of its vertices. Suppose a triangular mesh has vertices V0, V1, V2 and that u, v are the weights of V1, V2 respectively; its parametric equation can then be written as (1-u-v)V0 + uV1 + vV2, and intersection with this mesh is determined when u ≥ 0, v ≥ 0 and u + v ≤ 1. To check these conditions the values of u and v must first be determined, as follows:
From the equation O + D·t = (1-u-v)V0 + uV1 + vV2 the unknown t is solved; if t is not zero, the virtual control line intersects this triangular mesh, i.e. a concrete 3D model has been detected. Writing E1 = V1 - V0, E2 = V2 - V0 and T = O - V0, the equation can be rearranged into the linear system

[-D, E1, E2] · [t, u, v]^T = T,

which is solved by Cramer's rule as ratios of determinants, where "| |" denotes the determinant of a matrix. Applying the scalar triple product identity |a b c| = (a × b)·c = -(a × c)·b, where "×" denotes the cross product and "·" the dot product, and setting P = D × E2 and Q = T × E1, the solution finally simplifies to

[t, u, v]^T = (1 / (P·E1)) · [Q·E2, P·T, Q·D]^T.
From the simplified formula above the expressions for u and v are obtained, and whether u and v satisfy the condition "u ≥ 0, v ≥ 0, u + v ≤ 1" can then be checked as follows:
Let ε be a positive floating-point number very close to 0, and let det = P·E1. If det < ε, the intersection test with the current triangular mesh ends immediately and the test moves on to the next triangular mesh. If det ≥ ε, let u′ = P·T; if u′ < 0 or u′ > det, the computed value of u does not satisfy the condition, so the current test is abandoned and the next triangular mesh is examined. If 0 ≤ u′ ≤ det, let v′ = Q·D; if v′ < 0 or u′ + v′ > det, the computed value of v does not satisfy the condition, so the current test is abandoned and the next triangular mesh is examined. Otherwise, i.e. when det ≥ ε, 0 ≤ u′ ≤ det, v′ ≥ 0 and u′ + v′ ≤ det, it is determined that the virtual control line intersects this specific triangular mesh.
Since a triangular mesh has a front face and a back face, only intersection with the front face is tested during the mesh test, which saves half of the detection time. When an intersection with a specific triangular mesh is detected, the intersection point and the number of the bounding volume of the corresponding 3D model are recorded, and the detection is complete. A sketch of this triangle test is given below.
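A sketch of the det / u′ / v′ test described above (this is the Möller-Trumbore ray-triangle intersection algorithm), with the same front-face-only early exit: det < ε rejects back-facing or parallel triangles. Function and variable names are illustrative; dividing u′, v′ and Q·E2 by det recovers the barycentric weights u, v and the line parameter t.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Line O + D*t against triangle (V0, V1, V2); front faces only, as in the text.
bool lineIntersectsTriangle(const Vec3& O, const Vec3& D,
                            const Vec3& V0, const Vec3& V1, const Vec3& V2,
                            double* tOut) {
    const double eps = 1e-9;                 // the "positive float close to 0"
    Vec3 E1 = sub(V1, V0), E2 = sub(V2, V0);
    Vec3 P = cross(D, E2);
    double det = dot(P, E1);
    if (det < eps) return false;             // back-facing or parallel: skip

    Vec3 T = sub(O, V0);
    double u = dot(P, T);                    // u' in the text (u scaled by det)
    if (u < 0.0 || u > det) return false;

    Vec3 Q = cross(T, E1);
    double v = dot(Q, D);                    // v' in the text (v scaled by det)
    if (v < 0.0 || u + v > det) return false;

    *tOut = dot(Q, E2) / det;                // distance along the control line
    return true;
}

int main() {
    Vec3 O{0, 0, 5}, D{0, 0, -1};
    Vec3 V0{-1, -1, 0}, V1{1, -1, 0}, V2{0, 1, 0};
    double t;
    if (lineIntersectsTriangle(O, D, V0, V1, V2, &t))
        std::printf("hit at t = %.3f\n", t);
    return 0;
}
```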
Step S15: when the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume, determining that the virtual control line intersects that 3D model, so as to determine that the 3D model has been detected.
In this step, among all the triangular meshes of the 3D model, as soon as the virtual control line is judged to intersect any one of them, it can immediately be concluded that the control line has detected that specific 3D model, and no further intersection tests are performed. The two phases are combined as sketched below.
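Putting the two phases together, a sketch of the overall detection flow: the bounding volume is tested first, the triangular meshes of a model are tested only after its bounding volume is hit, and the remaining models are skipped, stopping at the first triangle hit. The predicates are passed in as stand-ins for the box and triangle tests sketched earlier; all names are illustrative:

```cpp
#include <cstdio>
#include <functional>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };
struct Model {
    int id;
    std::vector<Triangle> triangles;  // the triangular meshes of the 3D model
};

// Two-phase detection against the current virtual control line.
int detectModel(const std::vector<Model>& models,
                const std::function<bool(const Model&)>& boxTest,
                const std::function<bool(const Triangle&)>& triangleTest) {
    for (const Model& m : models) {
        if (!boxTest(m)) continue;              // coarse phase: bounding volume
        // As in the text, once one bounding volume is hit the remaining
        // models are not tested further.
        for (const Triangle& tri : m.triangles) // fine phase: triangular meshes
            if (triangleTest(tri)) return m.id; // first hit decides the model
        return -1;                              // box hit, but no triangle hit
    }
    return -1;                                  // nothing detected
}

int main() {
    std::vector<Model> models = {{7, {{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}}}};
    // Stub predicates for illustration; in practice these wrap the bounding-box
    // and triangle intersection tests against the current virtual control line.
    auto boxTest = [](const Model&) { return true; };
    auto triTest = [](const Triangle&) { return true; };
    std::printf("detected model id: %d\n", detectModel(models, boxTest, triTest));
    return 0;
}
```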
In the first embodiment of the present invention, the attitude angles of the interactive pen are collected, the pen's rotation matrix is determined from them, and a virtual control line is mapped into the virtual scene in real time from the rotation matrix, the preset origin of the virtual scene and the preset initial direction vector of the pen. After the bounding volume of each 3D model is determined from its stored vertex data, it is judged whether the virtual control line intersects the bounding volume of a 3D model and then whether it intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume, so as to judge whether a 3D model has been detected. Because the bounding volume of the 3D model is tested first, and the control line is tested against the individual triangular meshes only after it is judged to intersect the bounding volume, the amount of computation is reduced and the speed and accuracy of detecting 3D models in a virtual environment scene are effectively improved.
Embodiment 2:
Fig. 3 shows the structural diagram of a device for detecting 3D models in a virtual scene provided by the second embodiment of the present invention; for ease of description, only the parts related to this embodiment are shown.
The device for detecting 3D models in a virtual scene comprises: a virtual control line generating unit 31, a model bounding volume construction unit 32, a model bounding volume intersection detection unit 33, a model triangular mesh intersection detection unit 34 and a model detection determination unit 35. Specifically:
The virtual control line generating unit 31 is configured to map a virtual control line in the virtual scene in real time according to the spatial attitude angle of the interactive pen.
The attitude angles of the interactive pen comprise its pitch angle, roll angle and yaw angle, and are determined as follows.
Fig. 4 shows the structural diagram of a device for detecting 3D models in a virtual scene provided by another embodiment of the present invention; in this case the device comprises:
An initial pitch and roll angle determining unit 36, configured to determine the initial pitch and roll angles of the interactive pen, which are determined by the three-axis gyroscope and three-axis accelerometer mounted on the pen.
A final pitch and roll angle determining unit 37, configured to iteratively determine the final pitch and roll angles with a Kalman filter starting from the initial pitch and roll angles. The initial pitch and roll angles are expressed as a quaternion and used as the state of the Kalman filter; the attitude kinematics equation serves as the state-transition equation of the filter and the acceleration information as its observation, and iterative Kalman-filter computation updates the initial pitch and roll angles into accurate, stable values.
A magnetic field component determining unit 38, configured to determine the magnetic field components in the horizontal plane from the final pitch and roll angles and the magnetic field components along the x, y and z axes of the pen's coordinate system.
A yaw angle determining unit 39, configured to determine the yaw angle from the magnetic field components in the horizontal plane.
Preferably, the virtual control line generating unit 31 comprises:
A rotation matrix determining module 311, configured to determine the rotation matrix of the interactive pen from its attitude angles. The rotation matrix of the pen comprises the matrix obtained by rotating the pen's coordinate system about the y axis by the angle θ, where θ is the pitch angle; the matrix obtained by rotating the pen's coordinate system about the z axis by the angle α, where α is the yaw angle; and the matrix obtained by rotating the pen's coordinate system about the x axis by the angle γ, where γ is the roll angle.
A virtual control line rendering module 312, configured to render, in the virtual scene in real time, the virtual control line corresponding to the rotated pen according to the pen's rotation matrix, the preset origin of the virtual scene and the preset initial direction vector of the pen. If the preset initial direction vector of the pen is d0 and R is the rotation matrix, the direction vector mapped into the virtual scene after the pen rotates (i.e. the real-time mapped virtual control line) is d = R·d0, where "·" denotes the product of the rotation matrix and the vector.
The model bounding volume construction unit 32 is configured to construct, for each 3D model in the virtual scene, a bounding volume that encloses the corresponding 3D model, each 3D model being composed of a plurality of triangular meshes.
The bounding volume of a 3D model is composed of six coordinates: the maximum and minimum vertex coordinates of the model on the x axis, on the y axis and on the z axis.
The model bounding volume intersection detection unit 33 is configured to judge whether the real-time mapped virtual control line intersects the bounding volume of a 3D model.
Specifically, the model bounding volume intersection detection unit 33 comprises:
A vector function determining module 331, configured to determine the function f(x, y, z) corresponding to the virtual control line; the components of any point of f(x, y, z) on the x, y and z axes are fx(x, y, z), fy(x, y, z) and fz(x, y, z).
An intersection condition judging module 332, configured to judge whether the components fx(x, y, z), fy(x, y, z), fz(x, y, z) of some point of f(x, y, z) all satisfy the following conditions:
min x ≤ fx(x, y, z) ≤ max x;
min y ≤ fy(x, y, z) ≤ max y;
min z ≤ fz(x, y, z) ≤ max z;
where min x, max x, min y, max y, min z, max z are the vertex coordinates of the bounding volume, i.e. the minimum and maximum vertex coordinates of the 3D model on the x axis, on the y axis and on the z axis, respectively.
A bounding volume intersection judging module 333, configured to determine that the virtual control line intersects the bounding volume of the 3D model when the components fx(x, y, z), fy(x, y, z), fz(x, y, z) of some point of f(x, y, z) all satisfy the above conditions, and otherwise to determine that they do not intersect.
The model triangular mesh intersection detection unit 34 is configured to judge, when the virtual control line intersects the bounding volume of a 3D model, whether the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume.
Specifically, the model triangular mesh intersection detection unit 34 comprises:
A vertex weight range judging module 341, configured to judge, when the parametric equation of the triangle corresponding to any triangular mesh of the 3D model is (1-u-v)V0 + uV1 + vV2, whether u and v satisfy the conditions u ≥ 0, v ≥ 0, u + v ≤ 1, according to the preset origin of the virtual scene, the virtual control line and the three vertex coordinates of that triangular mesh, where (1-u-v), u, v are the weights of the triangle's three vertices and V0, V1, V2 are the coordinates of its three vertices.
A triangular mesh intersection judging module 342, configured to determine that the virtual control line intersects the triangular mesh of the 3D model corresponding to the intersected bounding volume when u and v satisfy u ≥ 0, v ≥ 0, u + v ≤ 1, and to determine that they do not intersect when u and v do not satisfy these conditions.
The specific detection process is the same as in Embodiment 1 and is not repeated here.
The model detection determination unit 35 is configured to determine, when the virtual control line intersects any triangular mesh of the 3D model corresponding to the intersected bounding volume, that the virtual control line intersects that 3D model, so as to determine that the 3D model has been detected.
In the second embodiment of the present invention, because the bounding volume of the 3D model is tested first, and the virtual control line is tested against the individual triangular meshes only after it is judged to intersect the bounding volume, the amount of computation is reduced, which effectively improves the speed and accuracy of detecting 3D models in a virtual environment scene.
Those of ordinary skill in the art will understand that all or part of the steps of the methods in the above embodiments can be accomplished by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410317624.6A | 2014-07-04 | 2014-07-04 | 3D model checking methods and device in a kind of virtual scene |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104102357A | 2014-10-15 |
| CN104102357B | 2017-12-19 |
Family
ID=51670557
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104102357B (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090153552A1 (en) * | 2007-11-20 | 2009-06-18 | Big Stage Entertainment, Inc. | Systems and methods for generating individualized 3d head models |
| US20130190087A1 (en) * | 2012-01-19 | 2013-07-25 | Zynga Inc. | Three dimensional operations in an isometric projection |
| CN103076619A (en) * | 2012-12-27 | 2013-05-01 | 山东大学 | System and method for performing indoor and outdoor 3D (Three-Dimensional) seamless positioning and gesture measuring on fire man |
| CN103337066A (en) * | 2013-05-27 | 2013-10-02 | 清华大学 | Calibration method for 3D (three-dimensional) acquisition system |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10868810B2 (en) | 2016-08-19 | 2020-12-15 | Tencent Technology (Shenzhen) Company Limited | Virtual reality (VR) scene-based authentication method, VR device, and storage medium |
| CN106648355A (en) * | 2016-11-07 | 2017-05-10 | 成都华域天府数字科技有限公司 | 3D model selection method and device |
| CN106648355B (en) * | 2016-11-07 | 2020-10-02 | 成都华域天府数字科技有限公司 | 3D model selection method and device |
| CN112423035A (en) * | 2020-11-05 | 2021-02-26 | 上海蜂雀网络科技有限公司 | Method for automatically extracting visual attention points of user when watching panoramic video in VR head display |
Also Published As
| Publication number | Publication date |
|---|---|
| CN104102357B | 2017-12-19 |
Legal Events
| Code | Title | Description |
|---|---|---|
| C06, PB01 | Publication | |
| C10, SE01 | Entry into substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20171219 |