[go: up one dir, main page]

CN119152102A - Space hierarchy rendering method and system for Internet three-dimensional map - Google Patents

Space hierarchy rendering method and system for Internet three-dimensional map Download PDF

Info

Publication number
CN119152102A
CN119152102A CN202411643561.3A CN202411643561A CN119152102A CN 119152102 A CN119152102 A CN 119152102A CN 202411643561 A CN202411643561 A CN 202411643561A CN 119152102 A CN119152102 A CN 119152102A
Authority
CN
China
Prior art keywords
dimensional
layer
depth
fragment
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411643561.3A
Other languages
Chinese (zh)
Other versions
CN119152102B (en
Inventor
王聪
蒋如乔
夏伟
史廷春
王一梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuance Information Technology Co ltd
Original Assignee
Yuance Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuance Information Technology Co ltd filed Critical Yuance Information Technology Co ltd
Priority to CN202411643561.3A priority Critical patent/CN119152102B/en
Publication of CN119152102A publication Critical patent/CN119152102A/en
Application granted granted Critical
Publication of CN119152102B publication Critical patent/CN119152102B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

本发明涉及三维地图渲染技术领域,尤其是指一种面向互联网三维地图的空间层次渲染方法及系统,包括:记录互联网三维地图场景中带渲染的每个图层的优先级次序;对二维图层进行空间层次计算,计算二维图层中每个片元的目标深度偏差值,对二维图层进行深度偏移;对三维模型进行空间层次计算,计算三维模型中每个片元的目标深度偏差值,对三维模型进行深度偏移;按照二维图层和三维模型的优先级次序先后调用绘制接口,得到渲染结果图。本发明避免了大俯仰角情况下可能出现的远处瓦片深度冲突问题,保证了相机在任意角度下互联网三维地图的空间关系的正确性。

The present invention relates to the field of three-dimensional map rendering technology, and in particular to a spatial hierarchical rendering method and system for Internet three-dimensional maps, including: recording the priority order of each layer with rendering in the Internet three-dimensional map scene; performing spatial hierarchical calculation on the two-dimensional layer, calculating the target depth deviation value of each fragment in the two-dimensional layer, and performing depth offset on the two-dimensional layer; performing spatial hierarchical calculation on the three-dimensional model, calculating the target depth deviation value of each fragment in the three-dimensional model, and performing depth offset on the three-dimensional model; calling the drawing interface in sequence according to the priority order of the two-dimensional layer and the three-dimensional model to obtain a rendering result map. The present invention avoids the problem of distant tile depth conflict that may occur under large pitch angles, and ensures the correctness of the spatial relationship of the Internet three-dimensional map of the camera at any angle.

Description

Space hierarchy rendering method and system for Internet three-dimensional map
Technical Field
The invention relates to the technical field of three-dimensional map rendering, in particular to a spatial hierarchy rendering method and system for an Internet three-dimensional map.
Background
Based on the rapid development of the visualization technology, the internet map is upgraded from two dimensions to three dimensions. The three-dimensional map expands the inclination angle of the camera based on the original camera translation and scaling, so that a user can scale and translate the map, and can tilt the camera to obtain different view angles, and the map information can be more intuitively checked.
In three-dimensional maps, two-dimensional layers tend to cause depth conflicts (z-fighting) to occur as the camera is tilted due to their high consistency. This is because when a plurality of two-dimensional layers are superimposed within the same viewing angle, since their height information is the same, when the camera is tilted, the projection manner of the object changes, which may cause the difference in depth values of two faces that were originally not greatly different in the depth buffer to become more remarkable, and the rendering engine cannot accurately judge the front-back relationship between the two-dimensional layers, thereby generating a phenomenon of depth conflict. The depth conflict not only can cause the boundary between the layers to be blurred and even overlap, but also can cause wrong space layering, and the readability of the map and the experience of users are seriously affected.
Aiming at the problem of depth conflict, three common methods exist in the prior art:
(1) And closing the depth detection when the two-dimensional layers are rendered, so that the next rendered layer covers the previous rendered layer, thereby avoiding depth conflict. However, the two-dimensional layer itself has material properties, and the opaque material often needs to be subjected to depth writing and depth detection, so that this method can generate more logic judgment, and greatly increases the rendering difficulty of the map data.
(2) The distance between the near clipping surface (near) and the far clipping surface (far) of the camera is adjusted. By adjusting the values of the near clipping plane and the far clipping plane, the accuracy and coverage of the depth buffer can be changed, thereby reducing the occurrence of depth conflicts to some extent. But adjusting the values of the near clipping plane and the far clipping plane affects the accuracy and performance of rendering. The map tile vanishing problem occurs when the distance between the near clipping surface and the far clipping surface is adjusted too close, and the rendering performance problem occurs when the distance is too far, so that perfect balance is difficult to achieve.
(3) The world coordinates of the layers are depth shifted to avoid depth conflicts by adding a small depth shift. However, when the pitch angle of the camera is relatively large, for example, the pitch angle is close to 90 °, that is, when the three-dimensional space is observed from the view angle of the horizon, the upper layer may cover the layer below the upper layer due to the depth offset, which may still cause a depth conflict phenomenon, thereby resulting in an erroneous spatial position relationship. And, adjusting the world coordinate value also affects subsequent collision calculations.
Thus, the prior art methods for resolving depth conflicts have respective limitations. In the internet three-dimensional map visualization project, due to browser performance, correctly rendering spatial hierarchy still faces a great challenge.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to solve the problem that the depth conflict generated in the three-dimensional map space rendering cannot be effectively solved in the prior art.
In order to solve the technical problems, the invention provides a spatial hierarchy rendering method for an Internet three-dimensional map, which comprises the following steps:
Acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered, and recording the priority order of each layer, wherein the layers to be rendered comprise two-dimensional layers and three-dimensional models;
Performing space level calculation on each two-dimensional layer, calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera position, and performing depth deviation on each fragment of the two-dimensional layer in a fragment shader;
Performing space hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera position, and performing depth deviation on each fragment of the three-dimensional model in a fragment shader;
and calling a drawing interface successively according to the priority order of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on a screen to obtain a rendering result diagram.
Preferably, the various data in the internet three-dimensional map scene comprise two-dimensional vector data, raster data and three-dimensional model data.
Preferably, the performing spatial hierarchy calculation on each two-dimensional layer, calculating a target depth deviation value of each tile in the two-dimensional layer according to the camera position, and performing depth deviation on each tile in the two-dimensional layer in a tile shader includes:
Calculating a first equidistant depth offset of the two-dimensional map layer according to the priority order of the two-dimensional map layer and the total map layer number of the three-dimensional map scene;
calculating a second equidistant depth offset of the two-dimensional layer according to the first equidistant depth offset of the two-dimensional layer and the camera height;
calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera angle and the second equidistant depth deviation value of the two-dimensional layer;
In a two-dimensional primitive shader, a depth offset is performed on the two-dimensional primitive in the primitive shader based on the target depth offset value for each primitive in the two-dimensional primitive.
Preferably, the calculating the first equidistant depth offset of the two-dimensional layer according to the priority order of the two-dimensional layer and the total layer number of the three-dimensional map scene includes:
when the camera is above the ground, the calculation formula of the first equidistant depth offset of the scene of the two-dimensional layer on the ground is as follows:
;
when the camera is under the ground surface, the calculation formula of the first equidistant depth offset of the two-dimensional layer in the underground scene is as follows:
;
Wherein, AndThe first equidistant depth offsets for the above-ground scene and the below-ground scene of the two-dimensional layer respectively,The total number of layers representing the three-dimensional map scene,Representing the priority order of the current two-dimensional layer,Representing the corrected empirical value of equidistant depth offset.
Preferably, the correction experience value of the equidistant depth offset is
Preferably, calculating the second equidistant depth offset of the two-dimensional layer from the first equidistant depth offset of the two-dimensional layer and the camera height comprises:
And multiplying the ratio of the vertical height of the camera to the camera height deviation correction value by the first equidistant depth offset of the two-dimensional layer to obtain the second equidistant depth offset of the two-dimensional layer.
Preferably, the value of the camera height deviation correction value is proportional to the vertical height of the camera.
Preferably, the performing spatial hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each voxel in the three-dimensional model according to the camera position, and performing depth deviation on each voxel of the three-dimensional model in a voxel shader, including:
In a vertex shader of the three-dimensional model, judging whether the fragment is above or below the ground surface according to the world coordinate z value of the fragment vertex of the three-dimensional model;
When the camera is above the ground surface, acquiring a second equidistant depth offset of the two-dimensional image layer with the lowest priority as a target equidistant depth offset, and when the camera is below the ground surface, acquiring a second equidistant depth offset of the two-dimensional image layer with the highest priority as a target equidistant depth offset;
Calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera angle and the target equidistant depth deviation value;
in the three-dimensional model fragment shader, depth migration is performed on the three-dimensional model in the fragment shader according to the target depth migration value of each fragment in the three-dimensional model.
Preferably, calculating the target depth offset value for each tile from the camera angle and the equidistant depth offset comprises:
In the vertex shader, the fragment vertexes are converted into a world coordinate system through a preset model matrix to obtain world coordinates of the fragment vertexes, the world coordinates of the camera are subtracted from the world coordinates of the fragment vertexes, and then vector normalization processing is carried out to obtain direction vectors from the fragment to the camera;
performing dot product operation on the vector of the direction vector from the patch to the camera and the normal vector of the patch, and then inverting to obtain an initial depth deviation value of the current patch influenced by the camera angle;
and transmitting the equidistant depth offset into a fragment shader, and multiplying the equidistant depth offset by an initial depth offset value of each fragment affected by the camera angle to obtain a target depth offset value of each fragment due to the camera angle.
The invention also provides a space hierarchy rendering system facing the Internet three-dimensional map, which comprises the following steps:
the system comprises a layer sequence recording module, a layer sequence processing module and a layer sequence processing module, wherein the layer sequence recording module is used for acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered and recording the priority sequence of each layer;
The two-dimensional layer depth migration module is used for carrying out space hierarchy calculation on each two-dimensional layer, calculating a target depth migration value of each fragment in the two-dimensional layer according to the camera position, and carrying out depth migration on each fragment of the two-dimensional layer in the fragment shader;
The three-dimensional model depth migration module is used for carrying out space hierarchy calculation on the three-dimensional model, calculating a target depth migration value of each fragment in the three-dimensional model according to the camera position, and carrying out depth migration on each fragment of the three-dimensional model in the fragment shader;
And the rendering module is used for sequentially calling the drawing interfaces according to the priority orders of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on the screen to obtain a rendering result diagram.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
According to the space hierarchy rendering method for the Internet three-dimensional map, the relative positions between the cameras and the target fragments are also considered on the basis of equidistant depth offset, the target depth offset value of each fragment in the two-dimensional map layer and the three-dimensional model is calculated in real time according to the camera angle, the problem of remote tile depth conflict possibly occurring under the condition of large pitch angle is avoided, and the accuracy of the spatial relationship of the Internet three-dimensional map under any angle of the cameras is ensured.
Furthermore, the invention also considers the problem of depth precision error caused by the camera height in the process of calculating the equidistant depth offset, solves the problem of non-linearity of the depth precision caused by the camera height by introducing the camera height offset value, improves the detection precision of depth buffering, and realizes the correctness of the spatial relationship of the camera at any height.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings, in which:
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a prior art rendering result diagram;
fig. 3 is a rendering result diagram of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art will better understand the invention and practice it.
Example 1
Referring to fig. 1, the invention provides a spatial hierarchy rendering method for an internet three-dimensional map, which comprises the following steps:
And step 1, acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered, and recording the priority order of each layer, wherein the layers to be rendered comprise two-dimensional layers and three-dimensional models.
Step 1 comprises the following sub-steps:
And 1.1, acquiring various data in the Internet three-dimensional map scene, including two-dimensional vector data, raster data, three-dimensional model data and the like.
The two-dimensional vector data comprise data such as city base map, road routes and lakes, and are in GeoJson data format. The raster data includes an image map. The three-dimensional model data comprises a three-dimensional building white model which is in GLB (GL Transmission Format Binary) data format.
And 1.2, sorting the priorities of various data in the three-dimensional map scene of the Internet according to the expected scene effect, sequentially adding the data into a queue to be rendered, generating layers to be rendered, and recording the priority order of each layer. The layer to be rendered comprises a two-dimensional layer and a three-dimensional model.
The image map is added as a two-dimensional background (background) map layer firstly, the priority order of the map layer is recorded as 0, the priority order of the two-dimensional city map layer is recorded as 1 when the city base map of the two-dimensional vector data is the city base map layer, the priority order of the two-dimensional lake surface map layer is recorded as 2 when the two-dimensional lake surface map layer is arranged above the two-dimensional city base map layer, the subsequent two-dimensional map layers are sequentially ordered and recorded, the two-dimensional road map layer with the highest priority order is the three-dimensional model, and the three-dimensional model with the highest priority order is obtained.
And 2, carrying out space hierarchy calculation on each two-dimensional layer, calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera position, and carrying out depth deviation on each fragment of the two-dimensional layer in a fragment shader to solve the problem of depth conflict between the two-dimensional layers.
Step 2 comprises the following sub-steps:
and 2.1, calculating a first equidistant depth offset of the two-dimensional map layer according to the priority order of the two-dimensional map layer and the total map layer number of the three-dimensional map scene.
The depth offset is calculated taking into account whether the camera is above or below the surface. When the camera is above the ground, i.e. the ground scene is displayed, each two-dimensional image layer needs to deviate towards the far end of the camera according to the priority order, and the first equidistant depth deviation amount of each two-dimensional image layer can be calculated through the priority order of the two-dimensional image layers and the total image layer number of the three-dimensional map scene. For example, the two-dimensional background layer with the priority order of 0 is added first, so that the first equidistant depth offset is the largest, and the first equidistant depth offset of the two-dimensional background layer added later is reduced linearly.
Conversely, when the camera is below the surface, i.e. showing a subsurface space scene, the two-dimensional background layer with priority order 0 needs to be added last, so its first equidistant depth offset is the smallest.
In order to scale the first equidistant depth offset to the depth range of the camera, i.e. the numerical space of 0-1, a corrected empirical value of equidistant depth offset is required. The corrected empirical value of the equidistant depth offset is adjusted continuously through experiments to achieve the expected effect. In this embodiment, the corrected empirical value of equidistant depth offset is calculated as
Thus, when the camera is above the ground, the first equidistant depth offset for a scene with a two-dimensional layer above the ground is calculated as follows:
;
when the camera is under the ground surface, the calculation formula of the first equidistant depth offset of the two-dimensional layer in the underground scene is as follows:
;
Wherein, AndThe first equidistant depth offsets for the above-ground scene and the below-ground scene of the two-dimensional layer respectively,The total number of layers representing the three-dimensional map scene,Representing the priority order of the current two-dimensional layer,Representing the corrected empirical value of equidistant depth offset.
And 2.2, calculating a second equidistant depth offset of the two-dimensional image layer according to the camera height.
The root cause of the depth conflict is that the non-linear nature of the depth values is another major factor in addition to the too close proximity between objects.
In the internet three-dimensional map, in order to simulate the visual effect of the real world, perspective projection transformation is generally adopted to realize the perspective effect of 'near-large and far-small'. After perspective projection transformation, vertex coordinates of the two-dimensional image layer are transferred to a clipping space (CLIP SPACE), and then transferred to a normalized equipment coordinate space (Normalized Device Coordinates, NDC) through perspective division, wherein the perspective division enables the relation between depth values and real distances to be a nonlinear relation, so that higher depth precision of closer objects is provided, and the depth values are more consistent with the perception of human eyes. However, this method also causes a phenomenon that the more likely it is to flicker at a remote location, the less near.
In order to solve the problem of non-linearity of depth precision caused by camera height, the invention introduces a camera height deviation correction value, wherein the camera height deviation correction value is used for calculating the depth deviation of a two-dimensional image layer due to camera height in real time. In particular, as the camera is farther from the two-dimensional layer, the offset of the two-dimensional layer should be increased accordingly due to limited depth accuracy of the camera at a distance. Conversely, when the camera is closer to the two-dimensional layer, the offset of the two-dimensional layer may be reduced accordingly, as the camera has sufficient depth precision space in the vicinity.
Therefore, the invention takes the deviation caused by the camera height into consideration, scales the equidistant offset, calculates the second equidistant depth offset of the two-dimensional layer due to the camera height, and has the following formula:
;
Wherein, For a second equidistant depth offset of the two-dimensional layer,Is the first equidistant depth offset of the two-dimensional layer,Representing the vertical height of the camera,Representing the camera height deviation correction value. The camera height deviation correction value increases with the increase of the vertical height of the camera and decreases with the decrease of the vertical height of the camera.
And 2.3, calculating a target depth deviation value of each fragment in the two-dimensional image layer according to the camera angle.
When the camera is tilted, each element in the two-dimensional image layer is far shifted to the camera according to the depth shift amount calculated in the step 2.2, and is not actually in the same plane. Particularly, when the camera is inclined at a relatively large angle, that is, when the three-dimensional map scene is viewed from a view angle close to the horizon, the distance between the far point of the two-dimensional map layer and the original plane may be relatively short after the point is shifted, so that the actual fragment depth value may be the same as the fragment depth values of other two-dimensional map layers, and the far two-dimensional map layer still has a depth conflict problem.
Based on the problem, the method calculates the depth deviation caused by the camera angle for each element in the two-dimensional image layer in real time. At this time, the depth offset of the two-dimensional layer is not equal-distance offset, but the depth offset of the patch is performed in real time according to the normal line of the patch and the direction vector from the patch to the camera. The normal to the patch is typically (0.0,0.0,1.0).
Specifically, the world coordinate position of the camera is passed into the two-dimensional layer shader as a global variable (unitorm). In the vertex shader of the two-dimensional layer, each fragment vertex in the two-dimensional layer is converted into a world coordinate system through a preset model matrix to obtain the world coordinate of each fragment vertex, the world coordinate of the camera is subtracted from the world coordinate of the fragment vertex, and then vector normalization processing is carried out to obtain the vector v from the fragment to the camera.
And carrying out dot product operation on a vector of a vector v of a pixel to a camera in the two-dimensional image layer and a vector n= (0.0,0.0,1.0) of a normal vector of the pixel, and then inverting to obtain an initial depth deviation value of the current pixel influenced by the angle of the camera.
And (2) taking the second equidistant depth offset of the two-dimensional layer in the step (2.2) as a variable (varying) to be transmitted into a fragment shader, and multiplying the variable by an initial depth offset value of each fragment in the two-dimensional layer, which is influenced by a camera angle, to obtain a target depth offset value of each fragment in the current two-dimensional layer, which is generated due to the camera angle.
According to the camera angle, calculating a target depth deviation value of each fragment in the two-dimensional layer in a fragment shader, wherein the calculation formula is as follows:
;
Wherein, Is the target depth deviation value of the patch in the two-dimensional layer,For a second equidistant depth offset of the two-dimensional layer,Representing the direction vector of the primitives in the two-dimensional layer to the camera,Representing the normal vector of the primitives in the two-dimensional layer,Representing the dot product of the pixel-to-camera direction vector and the normal vector of the pixel in the two-dimensional layer,Representing taking an absolute value for the dot product.
In particular, whenAt 1.0, the camera is shown oriented at a vertical viewing angle towards the tile, with the result still being equidistantly offset. When (when)When the depth of the two-dimensional image is greater than or equal to 0 and less than 1.0, a certain included angle exists between the camera and the pixel, and the offset is required to be amplified to distinguish other two-dimensional image layers, so that the depth conflict phenomenon is eliminated. When (when)If the depth deviation value is smaller than 0, the camera is under the ground surface, and the target depth deviation value is calculated according to the absolute value of the dot product.
In summary, the spatial hierarchy calculation is performed on each two-dimensional layer, and the target depth deviation value of the two-dimensional layer is calculated according to the camera height and the camera angle, where the formula includes:
when the camera is above the ground, the target depth bias value of each tile in the two-dimensional layer is:
;
when the camera is under the ground, the target depth deviation value of each fragment in the two-dimensional layer is:
;
Wherein, AndThe target depth deviation value for each tile in the two-dimensional map layer when the camera is above and below the surface,The total number of layers representing the three-dimensional map scene,Representing the priority order of the current two-dimensional layer,Representing the corrected empirical value of equidistant depth offset,Representing the vertical height of the camera,Representing the height deviation correction value of the camera,Representing the direction vector of the primitives in the two-dimensional layer to the camera,Representing the normal vector of the primitives in the two-dimensional layer,Representing the dot product of the pixel-to-camera direction vector and the normal vector of the pixel in the two-dimensional layer,Representing taking the absolute value of the dot product.
Step 2.4, in the two-dimensional graphics layer patch shader, according to the target depth deviation value of each patch in the two-dimensional graphics layer, carrying out depth deviation on each patch in the two-dimensional graphics layer in the patch shader, wherein the formula is as follows:
;
Wherein, Depth values representing coordinates of the primitives in the two-dimensional layer, ranging between 0,1,Is the target depth deviation value of the patch in the two-dimensional layer,Is the depth value of the patch in the two-dimensional layer.
In summary, step 2 first calculates a first equidistant depth offset according to the priority order of the two-dimensional layers, and dynamically scales the first equidistant depth offset according to the camera height to calculate a second equidistant depth offset according to the depth precision error problem generated by the camera distance. Secondly, in order to solve the problem of possible remote tile depth conflict under the condition of a large pitch angle, step 2 calculates the target depth deviation value of the tile in the two-dimensional layer in real time according to the second equidistant depth deviation amount and the relative position between the camera and the target tile, and finally solves the problem of depth conflict of the two-dimensional layer, and the position relation of the camera on the front and back space, the upper and lower space of the scene can be clearly and intuitively expressed no matter from which angle, so that the accurate spatial hierarchy expression is realized.
And 3, performing space hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera position, and performing depth deviation on each fragment of the three-dimensional model in a fragment shader to solve the problem of space hierarchy disorder under a depth deviation strategy of the three-dimensional model.
Step 3 comprises the following sub-steps:
And 3.1, judging that the fragment is above or below the ground surface according to the world coordinate z value of the fragment vertex of the three-dimensional model in the vertex shader of the three-dimensional model.
In this implementation, it is assumed that the z value of the surface is equal to 0.0, and a fragment is considered to be below the surface if the world coordinate z value of the fragment is less than 0.0, whereas a fragment is considered to be above the surface if the world coordinate z value of the fragment is greater than or equal to 0.0.
And 3.2, when the camera is above the ground, namely when the ground scene is at a view angle, acquiring a second equidistant depth offset of the two-dimensional image layer with the lowest priority order as a target equidistant depth offset, when the camera is below the ground, acquiring the second equidistant depth offset of the two-dimensional image layer with the highest priority order as a target equidistant depth offset, and transmitting the acquired target equidistant depth offset into a three-dimensional model shader as a global variable (unimorph).
In this embodiment, the two-dimensional layer with the lowest priority is the two-dimensional background layer, and the two-dimensional layer with the highest priority is the two-dimensional road line layer.
And 3.3, calculating a target depth deviation value of each wafer in the three-dimensional model according to the camera angle and the target equidistant depth deviation.
When the camera is above the earth's surface, it is necessary to depth shift the elements of the subsurface portion of the three-dimensional model to prevent viewing of the subsurface portion that would otherwise be hidden from above-ground view. Conversely, if the camera is below the surface, depth shifting of the primitives of the aerial parts of the three-dimensional model is required to avoid the aerial parts from being improperly revealed at the subsurface point of view. The adjustment mainly solves the problem that the three-dimensional model part is improperly exposed when the camera view angle is switched, so as to ensure the accuracy of three-dimensional space hierarchy.
Therefore, in this step, as in step 2.3, the deviation of the camera angle is calculated in real time for the elements of the above-ground or below-ground part of the three-dimensional model.
Specifically, the world coordinate position of the camera is passed into the three-dimensional model shader as a global variable (unitorm). In the vertex shader of the three-dimensional model, each fragment vertex in the three-dimensional model is converted into a world coordinate system through a preset model matrix to obtain the world coordinate of the fragment vertex, the world coordinate of the camera is subtracted from the world coordinate of each fragment vertex, and then vector normalization processing is carried out to obtain the vector from the fragment to the camera
Direction vector from element to camera in three-dimensional modelNormal vector to patchAnd (3) carrying out dot product operation on the vector, and then inverting to obtain an initial depth deviation value of the current element influenced by the camera angle.
And (3) transmitting the target equidistant depth offset in the step (3.2) into a fragment shader as a variable (varying), and multiplying the variable by an initial depth offset value of each fragment in the three-dimensional model, which is influenced by the camera angle, so as to obtain a target depth offset value of each fragment in the current three-dimensional model, which is generated due to the camera angle.
According to the camera angle and the target equidistant depth offset, calculating a target depth offset value of each fragment in the three-dimensional model in a fragment shader, wherein the calculation formula is as follows:
;
Wherein, Is the target depth deviation value of the patch in the three-dimensional model,For the amount of the depth offset of the target,Representing the voxel-to-camera direction vector in the three-dimensional model,Representing the normal vector of the patch in the three-dimensional model,Representing the dot product of the patch-to-camera direction vector and the normal vector of the patch in the three-dimensional model,Representing taking the absolute value of the dot product.
Step 3.4, in the three-dimensional model fragment shader, according to the target depth deviation value of each fragment in the three-dimensional model, carrying out depth deviation on each fragment of the three-dimensional model in the fragment shader, wherein the formula is as follows:
;
Wherein, Depth values representing coordinates of the patch in the three-dimensional model, ranging between 0,1,Is the target depth deviation value of the patch in the three-dimensional model,Is the depth value of the patch in the three-dimensional model.
Because the Internet three-dimensional map scene has a three-dimensional model such as a building white model besides the two-dimensional map, the three-dimensional model is further subjected to depth migration in step 3. When the camera is above the ground, the three-dimensional model is not shifted, and therefore, a part below the ground is displayed on the ground by mistake, because the two-dimensional image layer is shifted in depth, and the three-dimensional model is not shifted in original real depth. In order to solve the problem of incorrect spatial position relation of the three-dimensional model, step 3 aims at the three-dimensional model to carry out depth migration on the elements of the three-dimensional model below the ground surface or above the ground surface, so as to ensure the correctness of the spatial relation of the three-dimensional map scene under any angle and height. No matter from which angle the camera is, the position relation of the scene on the front and back, upper and lower spaces can be clearly and intuitively expressed, the correct space hierarchical expression is realized, and a favorable technical support is provided for the three-dimensional city construction of the Internet.
And 4, calling a drawing interface successively according to the priority order of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on a screen to obtain a rendering result diagram.
Fig. 2 is a rendering result diagram in the prior art, and it can be seen that a map layer flashes after map rendering due to a depth conflict problem, and a tearing phenomenon exists. FIG. 3 is a rendering result diagram of the present invention, which clearly distinguishes spatial hierarchy between two-dimensional layers, with correct spatial relationship.
Example two
Based on the first embodiment, the spatial hierarchy rendering method for the three-dimensional map of the internet, the embodiment provides a spatial hierarchy rendering system for the three-dimensional map of the internet, which includes:
the system comprises a layer sequence recording module, a layer sequence processing module and a layer sequence processing module, wherein the layer sequence recording module is used for acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered and recording the priority sequence of each layer;
The two-dimensional layer depth migration module is used for carrying out space hierarchy calculation on each two-dimensional layer, calculating a target depth migration value of each fragment in the two-dimensional layer according to the camera position, and carrying out depth migration on each fragment of the two-dimensional layer in the fragment shader;
The three-dimensional model depth migration module is used for carrying out space hierarchy calculation on the three-dimensional model, calculating a target depth migration value of each fragment in the three-dimensional model according to the camera position, and carrying out depth migration on each fragment of the three-dimensional model in the fragment shader;
And the rendering module is used for sequentially calling the drawing interfaces according to the priority orders of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on the screen to obtain a rendering result diagram.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the method, firstly, the equidistant depth offset of the two-dimensional layer is calculated according to the priority order of the layer, the equidistant depth offset of the two-dimensional layer is dynamically scaled according to the camera height, secondly, in order to solve the problem of possible remote tile depth conflict under the condition of large pitch angle, the target depth offset value of the elements in the two-dimensional layer is calculated in real time according to the scaled equidistant depth offset and the relative position between the camera and each element in the two-dimensional layer, the depth offset is carried out on the two-dimensional layer, and the problem of depth conflict of the two-dimensional layer in the Internet three-dimensional map is solved.
2. After the depth migration is carried out on the two-dimensional map layer, the situation that the camera is positioned above the ground surface and below the ground surface is considered, and the depth migration is further carried out on the elements of the three-dimensional model below the ground surface or above the ground surface by adopting the same method, so that the correctness of the spatial relationship of the three-dimensional map scene under any angle and height is ensured.
3. The invention also considers the relative position between the camera and the target fragment on the basis of equidistant depth offset, calculates the target depth offset value of each fragment in the two-dimensional image layer and the three-dimensional model in real time according to the camera angle, avoids the problem of remote tile depth conflict possibly occurring under the condition of large pitch angle, and ensures the accuracy of the spatial relationship of the Internet three-dimensional map under any angle of the camera. In addition, the method carries out depth migration based on the fragment coordinates, and compared with a depth migration algorithm under the world coordinates, the method can solve the collision calculation problem of the depth migration under the world coordinates.
4. Furthermore, the invention also considers the problem of depth precision error caused by the camera height in the process of calculating the equidistant depth offset, solves the problem of non-linearity of the depth precision caused by the camera height by introducing the camera height offset value, improves the detection precision of depth buffering, and realizes the correctness of the spatial relationship of the camera at any height.
5. The method is suitable for the scene with a plurality of two-dimensional layers in the Internet three-dimensional map, such as vector lines, surface layers and grid layers, which are mutually overlapped and rendered, and has the best effect display and interaction experience. Providing extremely high reference value for the case of two-dimensional layer depth conflict in a three-dimensional map.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations and modifications of the present invention will be apparent to those of ordinary skill in the art in light of the foregoing description. It is not necessary here nor is it exhaustive of all embodiments. While still being apparent from variations or modifications that may be made by those skilled in the art are within the scope of the invention.

Claims (10)

1. The space hierarchy rendering method for the Internet three-dimensional map is characterized by comprising the following steps of:
Acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered, and recording the priority order of each layer, wherein the layers to be rendered comprise two-dimensional layers and three-dimensional models;
Performing space level calculation on each two-dimensional layer, calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera position, and performing depth deviation on each fragment of the two-dimensional layer in a fragment shader;
Performing space hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera position, and performing depth deviation on each fragment of the three-dimensional model in a fragment shader;
and calling a drawing interface successively according to the priority order of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on a screen to obtain a rendering result diagram.
2. The spatial hierarchy rendering method for the internet three-dimensional map according to claim 1, wherein various data in the internet three-dimensional map scene comprises two-dimensional vector data, raster data and three-dimensional model data.
3. The spatial hierarchy rendering method for an internet-oriented three-dimensional map according to claim 1, wherein the performing spatial hierarchy computation on each two-dimensional layer, computing a target depth deviation value of each tile in the two-dimensional layer according to a camera position, performing depth deviation on each tile in the two-dimensional layer in a tile shader, includes:
Calculating a first equidistant depth offset of the two-dimensional map layer according to the priority order of the two-dimensional map layer and the total map layer number of the three-dimensional map scene;
calculating a second equidistant depth offset of the two-dimensional layer according to the first equidistant depth offset of the two-dimensional layer and the camera height;
calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera angle and the second equidistant depth deviation value of the two-dimensional layer;
In a two-dimensional primitive shader, a depth offset is performed on the two-dimensional primitive in the primitive shader based on the target depth offset value for each primitive in the two-dimensional primitive.
4. The method for rendering a three-dimensional map on an internet-oriented spatial hierarchy according to claim 3, wherein the calculating the first equidistant depth offset of the two-dimensional map layer according to the priority order of the two-dimensional map layer and the total number of layers of the three-dimensional map scene comprises:
when the camera is above the ground, the calculation formula of the first equidistant depth offset of the scene of the two-dimensional layer on the ground is as follows:
;
when the camera is under the ground surface, the calculation formula of the first equidistant depth offset of the two-dimensional layer in the underground scene is as follows:
;
Wherein, AndThe first equidistant depth offsets for the above-ground scene and the below-ground scene of the two-dimensional layer respectively,The total number of layers representing the three-dimensional map scene,Representing the priority order of the current two-dimensional layer,Representing the corrected empirical value of equidistant depth offset.
5. The spatial hierarchy rendering method for an internet-oriented three-dimensional map according to claim 4, wherein the value of the corrected experience value of equidistant depth offset is
6. The method for rendering a spatial hierarchy of an internet-oriented three-dimensional map according to claim 3, wherein calculating a second equidistant depth offset of the two-dimensional map from the first equidistant depth offset of the two-dimensional map and the camera height comprises:
And multiplying the ratio of the vertical height of the camera to the camera height deviation correction value by the first equidistant depth offset of the two-dimensional layer to obtain the second equidistant depth offset of the two-dimensional layer.
7. The method for rendering the spatial hierarchy of the three-dimensional map on the internet as recited in claim 6, wherein the value of the camera height deviation correction value is proportional to the vertical height of the camera.
8. The method for rendering a spatial hierarchy of an internet-oriented three-dimensional map according to claim 3, wherein the performing spatial hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each primitive in the three-dimensional model according to a camera position, performing depth deviation on each primitive in the three-dimensional model in a primitive shader, comprises:
In a vertex shader of the three-dimensional model, judging whether the fragment is above or below the ground surface according to the world coordinate z value of the fragment vertex of the three-dimensional model;
When the camera is above the ground surface, acquiring a second equidistant depth offset of the two-dimensional image layer with the lowest priority as a target equidistant depth offset, and when the camera is below the ground surface, acquiring a second equidistant depth offset of the two-dimensional image layer with the highest priority as a target equidistant depth offset;
Calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera angle and the target equidistant depth deviation value;
in the three-dimensional model fragment shader, depth migration is performed on the three-dimensional model in the fragment shader according to the target depth migration value of each fragment in the three-dimensional model.
9. The spatial hierarchy rendering method for an internet-oriented three-dimensional map according to any one of claims 3 or 8, wherein calculating the target depth deviation value of each tile according to the camera angle and the equidistant depth deviation amount comprises:
In the vertex shader, the fragment vertexes are converted into a world coordinate system through a preset model matrix to obtain world coordinates of the fragment vertexes, the world coordinates of the camera are subtracted from the world coordinates of the fragment vertexes, and then vector normalization processing is carried out to obtain direction vectors from the fragment to the camera;
performing dot product operation on the vector of the direction vector from the patch to the camera and the normal vector of the patch, and then inverting to obtain an initial depth deviation value of the current patch influenced by the camera angle;
and transmitting the equidistant depth offset into a fragment shader, and multiplying the equidistant depth offset by an initial depth offset value of each fragment affected by the camera angle to obtain a target depth offset value of each fragment due to the camera angle.
10. An internet three-dimensional map-oriented spatial hierarchy rendering system, comprising:
the system comprises a layer sequence recording module, a layer sequence processing module and a layer sequence processing module, wherein the layer sequence recording module is used for acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered and recording the priority sequence of each layer;
The two-dimensional layer depth migration module is used for carrying out space hierarchy calculation on each two-dimensional layer, calculating a target depth migration value of each fragment in the two-dimensional layer according to the camera position, and carrying out depth migration on each fragment of the two-dimensional layer in the fragment shader;
The three-dimensional model depth migration module is used for carrying out space hierarchy calculation on the three-dimensional model, calculating a target depth migration value of each fragment in the three-dimensional model according to the camera position, and carrying out depth migration on each fragment of the three-dimensional model in the fragment shader;
And the rendering module is used for sequentially calling the drawing interfaces according to the priority orders of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on the screen to obtain a rendering result diagram.
CN202411643561.3A 2024-11-18 2024-11-18 A spatial layer rendering method and system for Internet three-dimensional maps Active CN119152102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411643561.3A CN119152102B (en) 2024-11-18 2024-11-18 A spatial layer rendering method and system for Internet three-dimensional maps

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411643561.3A CN119152102B (en) 2024-11-18 2024-11-18 A spatial layer rendering method and system for Internet three-dimensional maps

Publications (2)

Publication Number Publication Date
CN119152102A true CN119152102A (en) 2024-12-17
CN119152102B CN119152102B (en) 2025-06-03

Family

ID=93813075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411643561.3A Active CN119152102B (en) 2024-11-18 2024-11-18 A spatial layer rendering method and system for Internet three-dimensional maps

Country Status (1)

Country Link
CN (1) CN119152102B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866967A (en) * 2019-11-15 2020-03-06 深圳市瑞立视多媒体科技有限公司 Water ripple rendering method, device, equipment and storage medium
WO2021260266A1 (en) * 2020-06-23 2021-12-30 Nokia Technologies Oy A method, an apparatus and a computer program product for volumetric video coding
CN117788726A (en) * 2022-09-22 2024-03-29 腾讯科技(深圳)有限公司 Map data rendering method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866967A (en) * 2019-11-15 2020-03-06 深圳市瑞立视多媒体科技有限公司 Water ripple rendering method, device, equipment and storage medium
WO2021260266A1 (en) * 2020-06-23 2021-12-30 Nokia Technologies Oy A method, an apparatus and a computer program product for volumetric video coding
CN117788726A (en) * 2022-09-22 2024-03-29 腾讯科技(深圳)有限公司 Map data rendering method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN119152102B (en) 2025-06-03

Similar Documents

Publication Publication Date Title
US10262238B2 (en) Panoramic camera systems
US9183666B2 (en) System and method for overlaying two-dimensional map data on a three-dimensional scene
US9275493B2 (en) Rendering vector maps in a geographic information system
CN100587722C (en) map display device
US9286712B2 (en) System and method for approximating cartographic projections by linear transformation
US20140267273A1 (en) System and method for overlaying two-dimensional map elements over terrain geometry
US10783170B2 (en) Geotagging a landscape photograph
KR102701851B1 (en) Apparatus and method for determining LOD(level Of detail) for texturing cube map
US20140043322A1 (en) Method and apparatus for displaying interface elements
US9437034B1 (en) Multiview texturing for three-dimensional models
US9691175B2 (en) 3-D models as a navigable container for 2-D raster images
US20200027261A1 (en) Rendering 360 depth content
US10127711B2 (en) Method and apparatus rendering caustics
US8643678B1 (en) Shadow generation
US10652514B2 (en) Rendering 360 depth content
CN114663632B (en) Method and device for displaying virtual objects based on spatial position illumination
US10403028B2 (en) System and method for geometric warping correction in projection mapping
CN114967170B (en) Display processing method and device thereof based on flexible naked-eye three-dimensional display device
US7362335B2 (en) System and method for image-based rendering with object proxies
CN119152102B (en) A spatial layer rendering method and system for Internet three-dimensional maps
CN111773706B (en) Game scene rendering method and device
CN116704098B (en) Method, device, electronic device and storage medium for generating directed distance field
KR100848687B1 (en) 3D graphics processing device and its operation method
CN114842125A (en) Building rendering method, device, electronic device and program product
CN112819929A (en) Water surface rendering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant