Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to solve the problem that the depth conflict generated in the three-dimensional map space rendering cannot be effectively solved in the prior art.
In order to solve the technical problems, the invention provides a spatial hierarchy rendering method for an Internet three-dimensional map, which comprises the following steps:
Acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered, and recording the priority order of each layer, wherein the layers to be rendered comprise two-dimensional layers and three-dimensional models;
Performing space level calculation on each two-dimensional layer, calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera position, and performing depth deviation on each fragment of the two-dimensional layer in a fragment shader;
Performing space hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera position, and performing depth deviation on each fragment of the three-dimensional model in a fragment shader;
and calling a drawing interface successively according to the priority order of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on a screen to obtain a rendering result diagram.
Preferably, the various data in the internet three-dimensional map scene comprise two-dimensional vector data, raster data and three-dimensional model data.
Preferably, the performing spatial hierarchy calculation on each two-dimensional layer, calculating a target depth deviation value of each tile in the two-dimensional layer according to the camera position, and performing depth deviation on each tile in the two-dimensional layer in a tile shader includes:
Calculating a first equidistant depth offset of the two-dimensional map layer according to the priority order of the two-dimensional map layer and the total map layer number of the three-dimensional map scene;
calculating a second equidistant depth offset of the two-dimensional layer according to the first equidistant depth offset of the two-dimensional layer and the camera height;
calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera angle and the second equidistant depth deviation value of the two-dimensional layer;
In a two-dimensional primitive shader, a depth offset is performed on the two-dimensional primitive in the primitive shader based on the target depth offset value for each primitive in the two-dimensional primitive.
Preferably, the calculating the first equidistant depth offset of the two-dimensional layer according to the priority order of the two-dimensional layer and the total layer number of the three-dimensional map scene includes:
when the camera is above the ground, the calculation formula of the first equidistant depth offset of the scene of the two-dimensional layer on the ground is as follows:
;
when the camera is under the ground surface, the calculation formula of the first equidistant depth offset of the two-dimensional layer in the underground scene is as follows:
;
Wherein, AndThe first equidistant depth offsets for the above-ground scene and the below-ground scene of the two-dimensional layer respectively,The total number of layers representing the three-dimensional map scene,Representing the priority order of the current two-dimensional layer,Representing the corrected empirical value of equidistant depth offset.
Preferably, the correction experience value of the equidistant depth offset is。
Preferably, calculating the second equidistant depth offset of the two-dimensional layer from the first equidistant depth offset of the two-dimensional layer and the camera height comprises:
And multiplying the ratio of the vertical height of the camera to the camera height deviation correction value by the first equidistant depth offset of the two-dimensional layer to obtain the second equidistant depth offset of the two-dimensional layer.
Preferably, the value of the camera height deviation correction value is proportional to the vertical height of the camera.
Preferably, the performing spatial hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each voxel in the three-dimensional model according to the camera position, and performing depth deviation on each voxel of the three-dimensional model in a voxel shader, including:
In a vertex shader of the three-dimensional model, judging whether the fragment is above or below the ground surface according to the world coordinate z value of the fragment vertex of the three-dimensional model;
When the camera is above the ground surface, acquiring a second equidistant depth offset of the two-dimensional image layer with the lowest priority as a target equidistant depth offset, and when the camera is below the ground surface, acquiring a second equidistant depth offset of the two-dimensional image layer with the highest priority as a target equidistant depth offset;
Calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera angle and the target equidistant depth deviation value;
in the three-dimensional model fragment shader, depth migration is performed on the three-dimensional model in the fragment shader according to the target depth migration value of each fragment in the three-dimensional model.
Preferably, calculating the target depth offset value for each tile from the camera angle and the equidistant depth offset comprises:
In the vertex shader, the fragment vertexes are converted into a world coordinate system through a preset model matrix to obtain world coordinates of the fragment vertexes, the world coordinates of the camera are subtracted from the world coordinates of the fragment vertexes, and then vector normalization processing is carried out to obtain direction vectors from the fragment to the camera;
performing dot product operation on the vector of the direction vector from the patch to the camera and the normal vector of the patch, and then inverting to obtain an initial depth deviation value of the current patch influenced by the camera angle;
and transmitting the equidistant depth offset into a fragment shader, and multiplying the equidistant depth offset by an initial depth offset value of each fragment affected by the camera angle to obtain a target depth offset value of each fragment due to the camera angle.
The invention also provides a space hierarchy rendering system facing the Internet three-dimensional map, which comprises the following steps:
the system comprises a layer sequence recording module, a layer sequence processing module and a layer sequence processing module, wherein the layer sequence recording module is used for acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered and recording the priority sequence of each layer;
The two-dimensional layer depth migration module is used for carrying out space hierarchy calculation on each two-dimensional layer, calculating a target depth migration value of each fragment in the two-dimensional layer according to the camera position, and carrying out depth migration on each fragment of the two-dimensional layer in the fragment shader;
The three-dimensional model depth migration module is used for carrying out space hierarchy calculation on the three-dimensional model, calculating a target depth migration value of each fragment in the three-dimensional model according to the camera position, and carrying out depth migration on each fragment of the three-dimensional model in the fragment shader;
And the rendering module is used for sequentially calling the drawing interfaces according to the priority orders of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on the screen to obtain a rendering result diagram.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
According to the space hierarchy rendering method for the Internet three-dimensional map, the relative positions between the cameras and the target fragments are also considered on the basis of equidistant depth offset, the target depth offset value of each fragment in the two-dimensional map layer and the three-dimensional model is calculated in real time according to the camera angle, the problem of remote tile depth conflict possibly occurring under the condition of large pitch angle is avoided, and the accuracy of the spatial relationship of the Internet three-dimensional map under any angle of the cameras is ensured.
Furthermore, the invention also considers the problem of depth precision error caused by the camera height in the process of calculating the equidistant depth offset, solves the problem of non-linearity of the depth precision caused by the camera height by introducing the camera height offset value, improves the detection precision of depth buffering, and realizes the correctness of the spatial relationship of the camera at any height.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art will better understand the invention and practice it.
Example 1
Referring to fig. 1, the invention provides a spatial hierarchy rendering method for an internet three-dimensional map, which comprises the following steps:
And step 1, acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered, and recording the priority order of each layer, wherein the layers to be rendered comprise two-dimensional layers and three-dimensional models.
Step 1 comprises the following sub-steps:
And 1.1, acquiring various data in the Internet three-dimensional map scene, including two-dimensional vector data, raster data, three-dimensional model data and the like.
The two-dimensional vector data comprise data such as city base map, road routes and lakes, and are in GeoJson data format. The raster data includes an image map. The three-dimensional model data comprises a three-dimensional building white model which is in GLB (GL Transmission Format Binary) data format.
And 1.2, sorting the priorities of various data in the three-dimensional map scene of the Internet according to the expected scene effect, sequentially adding the data into a queue to be rendered, generating layers to be rendered, and recording the priority order of each layer. The layer to be rendered comprises a two-dimensional layer and a three-dimensional model.
The image map is added as a two-dimensional background (background) map layer firstly, the priority order of the map layer is recorded as 0, the priority order of the two-dimensional city map layer is recorded as 1 when the city base map of the two-dimensional vector data is the city base map layer, the priority order of the two-dimensional lake surface map layer is recorded as 2 when the two-dimensional lake surface map layer is arranged above the two-dimensional city base map layer, the subsequent two-dimensional map layers are sequentially ordered and recorded, the two-dimensional road map layer with the highest priority order is the three-dimensional model, and the three-dimensional model with the highest priority order is obtained.
And 2, carrying out space hierarchy calculation on each two-dimensional layer, calculating a target depth deviation value of each fragment in the two-dimensional layer according to the camera position, and carrying out depth deviation on each fragment of the two-dimensional layer in a fragment shader to solve the problem of depth conflict between the two-dimensional layers.
Step 2 comprises the following sub-steps:
and 2.1, calculating a first equidistant depth offset of the two-dimensional map layer according to the priority order of the two-dimensional map layer and the total map layer number of the three-dimensional map scene.
The depth offset is calculated taking into account whether the camera is above or below the surface. When the camera is above the ground, i.e. the ground scene is displayed, each two-dimensional image layer needs to deviate towards the far end of the camera according to the priority order, and the first equidistant depth deviation amount of each two-dimensional image layer can be calculated through the priority order of the two-dimensional image layers and the total image layer number of the three-dimensional map scene. For example, the two-dimensional background layer with the priority order of 0 is added first, so that the first equidistant depth offset is the largest, and the first equidistant depth offset of the two-dimensional background layer added later is reduced linearly.
Conversely, when the camera is below the surface, i.e. showing a subsurface space scene, the two-dimensional background layer with priority order 0 needs to be added last, so its first equidistant depth offset is the smallest.
In order to scale the first equidistant depth offset to the depth range of the camera, i.e. the numerical space of 0-1, a corrected empirical value of equidistant depth offset is required. The corrected empirical value of the equidistant depth offset is adjusted continuously through experiments to achieve the expected effect. In this embodiment, the corrected empirical value of equidistant depth offset is calculated as。
Thus, when the camera is above the ground, the first equidistant depth offset for a scene with a two-dimensional layer above the ground is calculated as follows:
;
when the camera is under the ground surface, the calculation formula of the first equidistant depth offset of the two-dimensional layer in the underground scene is as follows:
;
Wherein, AndThe first equidistant depth offsets for the above-ground scene and the below-ground scene of the two-dimensional layer respectively,The total number of layers representing the three-dimensional map scene,Representing the priority order of the current two-dimensional layer,Representing the corrected empirical value of equidistant depth offset.
And 2.2, calculating a second equidistant depth offset of the two-dimensional image layer according to the camera height.
The root cause of the depth conflict is that the non-linear nature of the depth values is another major factor in addition to the too close proximity between objects.
In the internet three-dimensional map, in order to simulate the visual effect of the real world, perspective projection transformation is generally adopted to realize the perspective effect of 'near-large and far-small'. After perspective projection transformation, vertex coordinates of the two-dimensional image layer are transferred to a clipping space (CLIP SPACE), and then transferred to a normalized equipment coordinate space (Normalized Device Coordinates, NDC) through perspective division, wherein the perspective division enables the relation between depth values and real distances to be a nonlinear relation, so that higher depth precision of closer objects is provided, and the depth values are more consistent with the perception of human eyes. However, this method also causes a phenomenon that the more likely it is to flicker at a remote location, the less near.
In order to solve the problem of non-linearity of depth precision caused by camera height, the invention introduces a camera height deviation correction value, wherein the camera height deviation correction value is used for calculating the depth deviation of a two-dimensional image layer due to camera height in real time. In particular, as the camera is farther from the two-dimensional layer, the offset of the two-dimensional layer should be increased accordingly due to limited depth accuracy of the camera at a distance. Conversely, when the camera is closer to the two-dimensional layer, the offset of the two-dimensional layer may be reduced accordingly, as the camera has sufficient depth precision space in the vicinity.
Therefore, the invention takes the deviation caused by the camera height into consideration, scales the equidistant offset, calculates the second equidistant depth offset of the two-dimensional layer due to the camera height, and has the following formula:
;
Wherein, For a second equidistant depth offset of the two-dimensional layer,Is the first equidistant depth offset of the two-dimensional layer,Representing the vertical height of the camera,Representing the camera height deviation correction value. The camera height deviation correction value increases with the increase of the vertical height of the camera and decreases with the decrease of the vertical height of the camera.
And 2.3, calculating a target depth deviation value of each fragment in the two-dimensional image layer according to the camera angle.
When the camera is tilted, each element in the two-dimensional image layer is far shifted to the camera according to the depth shift amount calculated in the step 2.2, and is not actually in the same plane. Particularly, when the camera is inclined at a relatively large angle, that is, when the three-dimensional map scene is viewed from a view angle close to the horizon, the distance between the far point of the two-dimensional map layer and the original plane may be relatively short after the point is shifted, so that the actual fragment depth value may be the same as the fragment depth values of other two-dimensional map layers, and the far two-dimensional map layer still has a depth conflict problem.
Based on the problem, the method calculates the depth deviation caused by the camera angle for each element in the two-dimensional image layer in real time. At this time, the depth offset of the two-dimensional layer is not equal-distance offset, but the depth offset of the patch is performed in real time according to the normal line of the patch and the direction vector from the patch to the camera. The normal to the patch is typically (0.0,0.0,1.0).
Specifically, the world coordinate position of the camera is passed into the two-dimensional layer shader as a global variable (unitorm). In the vertex shader of the two-dimensional layer, each fragment vertex in the two-dimensional layer is converted into a world coordinate system through a preset model matrix to obtain the world coordinate of each fragment vertex, the world coordinate of the camera is subtracted from the world coordinate of the fragment vertex, and then vector normalization processing is carried out to obtain the vector v from the fragment to the camera.
And carrying out dot product operation on a vector of a vector v of a pixel to a camera in the two-dimensional image layer and a vector n= (0.0,0.0,1.0) of a normal vector of the pixel, and then inverting to obtain an initial depth deviation value of the current pixel influenced by the angle of the camera.
And (2) taking the second equidistant depth offset of the two-dimensional layer in the step (2.2) as a variable (varying) to be transmitted into a fragment shader, and multiplying the variable by an initial depth offset value of each fragment in the two-dimensional layer, which is influenced by a camera angle, to obtain a target depth offset value of each fragment in the current two-dimensional layer, which is generated due to the camera angle.
According to the camera angle, calculating a target depth deviation value of each fragment in the two-dimensional layer in a fragment shader, wherein the calculation formula is as follows:
;
Wherein, Is the target depth deviation value of the patch in the two-dimensional layer,For a second equidistant depth offset of the two-dimensional layer,Representing the direction vector of the primitives in the two-dimensional layer to the camera,Representing the normal vector of the primitives in the two-dimensional layer,Representing the dot product of the pixel-to-camera direction vector and the normal vector of the pixel in the two-dimensional layer,Representing taking an absolute value for the dot product.
In particular, whenAt 1.0, the camera is shown oriented at a vertical viewing angle towards the tile, with the result still being equidistantly offset. When (when)When the depth of the two-dimensional image is greater than or equal to 0 and less than 1.0, a certain included angle exists between the camera and the pixel, and the offset is required to be amplified to distinguish other two-dimensional image layers, so that the depth conflict phenomenon is eliminated. When (when)If the depth deviation value is smaller than 0, the camera is under the ground surface, and the target depth deviation value is calculated according to the absolute value of the dot product.
In summary, the spatial hierarchy calculation is performed on each two-dimensional layer, and the target depth deviation value of the two-dimensional layer is calculated according to the camera height and the camera angle, where the formula includes:
when the camera is above the ground, the target depth bias value of each tile in the two-dimensional layer is:
;
when the camera is under the ground, the target depth deviation value of each fragment in the two-dimensional layer is:
;
Wherein, AndThe target depth deviation value for each tile in the two-dimensional map layer when the camera is above and below the surface,The total number of layers representing the three-dimensional map scene,Representing the priority order of the current two-dimensional layer,Representing the corrected empirical value of equidistant depth offset,Representing the vertical height of the camera,Representing the height deviation correction value of the camera,Representing the direction vector of the primitives in the two-dimensional layer to the camera,Representing the normal vector of the primitives in the two-dimensional layer,Representing the dot product of the pixel-to-camera direction vector and the normal vector of the pixel in the two-dimensional layer,Representing taking the absolute value of the dot product.
Step 2.4, in the two-dimensional graphics layer patch shader, according to the target depth deviation value of each patch in the two-dimensional graphics layer, carrying out depth deviation on each patch in the two-dimensional graphics layer in the patch shader, wherein the formula is as follows:
;
Wherein, Depth values representing coordinates of the primitives in the two-dimensional layer, ranging between 0,1,Is the target depth deviation value of the patch in the two-dimensional layer,Is the depth value of the patch in the two-dimensional layer.
In summary, step 2 first calculates a first equidistant depth offset according to the priority order of the two-dimensional layers, and dynamically scales the first equidistant depth offset according to the camera height to calculate a second equidistant depth offset according to the depth precision error problem generated by the camera distance. Secondly, in order to solve the problem of possible remote tile depth conflict under the condition of a large pitch angle, step 2 calculates the target depth deviation value of the tile in the two-dimensional layer in real time according to the second equidistant depth deviation amount and the relative position between the camera and the target tile, and finally solves the problem of depth conflict of the two-dimensional layer, and the position relation of the camera on the front and back space, the upper and lower space of the scene can be clearly and intuitively expressed no matter from which angle, so that the accurate spatial hierarchy expression is realized.
And 3, performing space hierarchy calculation on the three-dimensional model, calculating a target depth deviation value of each fragment in the three-dimensional model according to the camera position, and performing depth deviation on each fragment of the three-dimensional model in a fragment shader to solve the problem of space hierarchy disorder under a depth deviation strategy of the three-dimensional model.
Step 3 comprises the following sub-steps:
And 3.1, judging that the fragment is above or below the ground surface according to the world coordinate z value of the fragment vertex of the three-dimensional model in the vertex shader of the three-dimensional model.
In this implementation, it is assumed that the z value of the surface is equal to 0.0, and a fragment is considered to be below the surface if the world coordinate z value of the fragment is less than 0.0, whereas a fragment is considered to be above the surface if the world coordinate z value of the fragment is greater than or equal to 0.0.
And 3.2, when the camera is above the ground, namely when the ground scene is at a view angle, acquiring a second equidistant depth offset of the two-dimensional image layer with the lowest priority order as a target equidistant depth offset, when the camera is below the ground, acquiring the second equidistant depth offset of the two-dimensional image layer with the highest priority order as a target equidistant depth offset, and transmitting the acquired target equidistant depth offset into a three-dimensional model shader as a global variable (unimorph).
In this embodiment, the two-dimensional layer with the lowest priority is the two-dimensional background layer, and the two-dimensional layer with the highest priority is the two-dimensional road line layer.
And 3.3, calculating a target depth deviation value of each wafer in the three-dimensional model according to the camera angle and the target equidistant depth deviation.
When the camera is above the earth's surface, it is necessary to depth shift the elements of the subsurface portion of the three-dimensional model to prevent viewing of the subsurface portion that would otherwise be hidden from above-ground view. Conversely, if the camera is below the surface, depth shifting of the primitives of the aerial parts of the three-dimensional model is required to avoid the aerial parts from being improperly revealed at the subsurface point of view. The adjustment mainly solves the problem that the three-dimensional model part is improperly exposed when the camera view angle is switched, so as to ensure the accuracy of three-dimensional space hierarchy.
Therefore, in this step, as in step 2.3, the deviation of the camera angle is calculated in real time for the elements of the above-ground or below-ground part of the three-dimensional model.
Specifically, the world coordinate position of the camera is passed into the three-dimensional model shader as a global variable (unitorm). In the vertex shader of the three-dimensional model, each fragment vertex in the three-dimensional model is converted into a world coordinate system through a preset model matrix to obtain the world coordinate of the fragment vertex, the world coordinate of the camera is subtracted from the world coordinate of each fragment vertex, and then vector normalization processing is carried out to obtain the vector from the fragment to the camera。
Direction vector from element to camera in three-dimensional modelNormal vector to patchAnd (3) carrying out dot product operation on the vector, and then inverting to obtain an initial depth deviation value of the current element influenced by the camera angle.
And (3) transmitting the target equidistant depth offset in the step (3.2) into a fragment shader as a variable (varying), and multiplying the variable by an initial depth offset value of each fragment in the three-dimensional model, which is influenced by the camera angle, so as to obtain a target depth offset value of each fragment in the current three-dimensional model, which is generated due to the camera angle.
According to the camera angle and the target equidistant depth offset, calculating a target depth offset value of each fragment in the three-dimensional model in a fragment shader, wherein the calculation formula is as follows:
;
Wherein, Is the target depth deviation value of the patch in the three-dimensional model,For the amount of the depth offset of the target,Representing the voxel-to-camera direction vector in the three-dimensional model,Representing the normal vector of the patch in the three-dimensional model,Representing the dot product of the patch-to-camera direction vector and the normal vector of the patch in the three-dimensional model,Representing taking the absolute value of the dot product.
Step 3.4, in the three-dimensional model fragment shader, according to the target depth deviation value of each fragment in the three-dimensional model, carrying out depth deviation on each fragment of the three-dimensional model in the fragment shader, wherein the formula is as follows:
;
Wherein, Depth values representing coordinates of the patch in the three-dimensional model, ranging between 0,1,Is the target depth deviation value of the patch in the three-dimensional model,Is the depth value of the patch in the three-dimensional model.
Because the Internet three-dimensional map scene has a three-dimensional model such as a building white model besides the two-dimensional map, the three-dimensional model is further subjected to depth migration in step 3. When the camera is above the ground, the three-dimensional model is not shifted, and therefore, a part below the ground is displayed on the ground by mistake, because the two-dimensional image layer is shifted in depth, and the three-dimensional model is not shifted in original real depth. In order to solve the problem of incorrect spatial position relation of the three-dimensional model, step 3 aims at the three-dimensional model to carry out depth migration on the elements of the three-dimensional model below the ground surface or above the ground surface, so as to ensure the correctness of the spatial relation of the three-dimensional map scene under any angle and height. No matter from which angle the camera is, the position relation of the scene on the front and back, upper and lower spaces can be clearly and intuitively expressed, the correct space hierarchical expression is realized, and a favorable technical support is provided for the three-dimensional city construction of the Internet.
And 4, calling a drawing interface successively according to the priority order of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on a screen to obtain a rendering result diagram.
Fig. 2 is a rendering result diagram in the prior art, and it can be seen that a map layer flashes after map rendering due to a depth conflict problem, and a tearing phenomenon exists. FIG. 3 is a rendering result diagram of the present invention, which clearly distinguishes spatial hierarchy between two-dimensional layers, with correct spatial relationship.
Example two
Based on the first embodiment, the spatial hierarchy rendering method for the three-dimensional map of the internet, the embodiment provides a spatial hierarchy rendering system for the three-dimensional map of the internet, which includes:
the system comprises a layer sequence recording module, a layer sequence processing module and a layer sequence processing module, wherein the layer sequence recording module is used for acquiring various data in an Internet three-dimensional map scene, correspondingly generating layers to be rendered and recording the priority sequence of each layer;
The two-dimensional layer depth migration module is used for carrying out space hierarchy calculation on each two-dimensional layer, calculating a target depth migration value of each fragment in the two-dimensional layer according to the camera position, and carrying out depth migration on each fragment of the two-dimensional layer in the fragment shader;
The three-dimensional model depth migration module is used for carrying out space hierarchy calculation on the three-dimensional model, calculating a target depth migration value of each fragment in the three-dimensional model according to the camera position, and carrying out depth migration on each fragment of the three-dimensional model in the fragment shader;
And the rendering module is used for sequentially calling the drawing interfaces according to the priority orders of the two-dimensional layer and the three-dimensional model, and rendering the three-dimensional map scene on the screen to obtain a rendering result diagram.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the method, firstly, the equidistant depth offset of the two-dimensional layer is calculated according to the priority order of the layer, the equidistant depth offset of the two-dimensional layer is dynamically scaled according to the camera height, secondly, in order to solve the problem of possible remote tile depth conflict under the condition of large pitch angle, the target depth offset value of the elements in the two-dimensional layer is calculated in real time according to the scaled equidistant depth offset and the relative position between the camera and each element in the two-dimensional layer, the depth offset is carried out on the two-dimensional layer, and the problem of depth conflict of the two-dimensional layer in the Internet three-dimensional map is solved.
2. After the depth migration is carried out on the two-dimensional map layer, the situation that the camera is positioned above the ground surface and below the ground surface is considered, and the depth migration is further carried out on the elements of the three-dimensional model below the ground surface or above the ground surface by adopting the same method, so that the correctness of the spatial relationship of the three-dimensional map scene under any angle and height is ensured.
3. The invention also considers the relative position between the camera and the target fragment on the basis of equidistant depth offset, calculates the target depth offset value of each fragment in the two-dimensional image layer and the three-dimensional model in real time according to the camera angle, avoids the problem of remote tile depth conflict possibly occurring under the condition of large pitch angle, and ensures the accuracy of the spatial relationship of the Internet three-dimensional map under any angle of the camera. In addition, the method carries out depth migration based on the fragment coordinates, and compared with a depth migration algorithm under the world coordinates, the method can solve the collision calculation problem of the depth migration under the world coordinates.
4. Furthermore, the invention also considers the problem of depth precision error caused by the camera height in the process of calculating the equidistant depth offset, solves the problem of non-linearity of the depth precision caused by the camera height by introducing the camera height offset value, improves the detection precision of depth buffering, and realizes the correctness of the spatial relationship of the camera at any height.
5. The method is suitable for the scene with a plurality of two-dimensional layers in the Internet three-dimensional map, such as vector lines, surface layers and grid layers, which are mutually overlapped and rendered, and has the best effect display and interaction experience. Providing extremely high reference value for the case of two-dimensional layer depth conflict in a three-dimensional map.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations and modifications of the present invention will be apparent to those of ordinary skill in the art in light of the foregoing description. It is not necessary here nor is it exhaustive of all embodiments. While still being apparent from variations or modifications that may be made by those skilled in the art are within the scope of the invention.