
CN114241101A - 3D scene rendering method, system, device and storage medium - Google Patents


Info

Publication number
CN114241101A
CN114241101A (application CN202111304819.3A)
Authority
CN
China
Prior art keywords
scene
rendering
camera
texture picture
texture
Prior art date
Legal status
Granted
Application number
CN202111304819.3A
Other languages
Chinese (zh)
Other versions
CN114241101B (en)
Inventor
朱林生
李多
曾江佑
王海民
Current Assignee
Jiangxi Booway New Technology Co ltd
Original Assignee
Jiangxi Booway New Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangxi Booway New Technology Co ltd
Priority to CN202111304819.3A priority Critical patent/CN114241101B/en
Publication of CN114241101A publication Critical patent/CN114241101A/en
Application granted granted Critical
Publication of CN114241101B publication Critical patent/CN114241101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of computer vision and in particular relates to a three-dimensional scene rendering method, system, device, and storage medium. The method includes: dividing the three-dimensional model scene data; creating the three-dimensional scene cameras; and rendering the scene. The current rendering scene is defined as S and divided into a fixed partial scene S_f and a changing partial scene, and the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c. A camera C is created as the master camera of the current rendering scene S, and three slave cameras C_f, C_c, and C_t are created under the master camera C to load the scene data of S_f, S_c, and S_t respectively. By caching the rendering data of the current scene body and reusing it while compositing only the dynamically modified parts in each subsequent frame, the invention greatly reduces the rendering work of the current frame and improves the smoothness of interaction.

Description

Three-dimensional scene rendering method, system, device and storage medium
Technical Field
The invention belongs to the technical field of computer vision and particularly relates to a three-dimensional scene rendering method, system, device, and storage medium.
Background
With the continuous development and deepening application of three-dimensional design technology, more and more engineering designs are created and stored in the form of three-dimensional models. In industrial CAD/CAE design, a design project often contains many three-dimensional models of business interest. Taking CAD design software as an example, in common interactions such as primitive drawing and scene layout, if the scene is large, rendering performance is poor because the vertices of all models must be submitted to OpenGL (Open Graphics Library) in every frame of scene rendering.
All three-dimensional models are composed of three-dimensional points: one point gives a point, two points give a line, and three or more points give a triangular patch or a polygonal surface. OpenGL likewise determines what to draw through a vertex array, where the vertices form the point set representing the current three-dimensional model, and points, lines, and surfaces are drawn by combining the vertex array with a primitive connection mode. Analysis shows, however, that during primitive drawing and scene layout the main body of the scene does not change; only the part currently being drawn dynamically changes.
Therefore, an efficient rendering method for three-dimensional scenes is desirable: by caching the rendering data of the scene's main body and reusing it while compositing only the dynamically modified part in each subsequent frame, the rendering work of the current frame can be greatly reduced.
Disclosure of Invention
To solve the prior-art problem that rendering performance is poor because the vertices of all models must be submitted to OpenGL in every frame of scene rendering, the invention provides a three-dimensional scene rendering method, system, device, and storage medium.
The invention is realized by adopting the following technical scheme:
a method of three-dimensional scene rendering, comprising:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S and dividing it into a fixed partial scene S_f and a changing partial scene, where the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c;
Secondly, creating a three-dimensional scene camera;
creating a camera C as the master camera of the current rendering scene S, creating three slave cameras C_f, C_c, and C_t under the master camera C to load the scene data of S_f, S_c, and S_t respectively, and creating, for each camera, two texture pictures at the current window pixel size for rendering use;
thirdly, rendering a scene;
rendering the changing parts of the scene data into the changing-part texture pictures for caching; rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame where possible; and mixing the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result, submitting the rendering result to the interface for display, and repeating the scene rendering operation to start rendering the next frame.
As a further aspect of the invention, in the divided three-dimensional model scene data, the current rendering scene S, the fixed partial scene S_f, the transparent partial scene S_t, and the opaque partial scene S_c satisfy the relationship S = S_f ∪ S_c ∪ S_t, where ∪ denotes the set union operation.
Further, in the divided three-dimensional model scene data, each member model in the scene is called an element. The fixed partial scene S_f (Fix Scene) is the set of elements that are generally fixed in the scene, usually the model data of the scene; it is modified only when model data is added or deleted.
The opaque partial scene S_c (Changing Scene) is the set of frequently changing elements in the scene, usually JIG dynamically drawn data whose models change in real time with the current mouse position.
The transparent partial scene S_t (Transparent Scene) is the set of elements with transparency in the scene, typically rendering auxiliary data and primitives with transparency.
As a further aspect of the present invention, when the three-dimensional scene cameras are created, the camera matrices of the slave cameras C_f, C_c, and C_t are kept consistent with the camera matrix of the master camera C during the rendering of each frame.
Further, when the three-dimensional scene cameras are created, the two texture pictures created for each camera at the current window pixel size for rendering use are a texture picture CT (color texture) and a texture picture DT (depth texture).
Further, creating the three-dimensional scene cameras also includes: under the master camera C, additionally creating a HUD camera C_h whose rectangle always fills the screen, used to fill textures onto this screen-sized rectangle and thereby display the scene data. The HUD camera C_h is created as follows: according to a fixed viewport matrix M_v and a projection matrix M_p, a corresponding rectangle is added under the HUD camera C_h, giving a rectangle vertex list.
the fixed viewport matrix MvIs composed of
Figure BDA0003339808370000031
The projection matrix MpIs composed of
Figure BDA0003339808370000041
The list of vertices of the rectangle is (-1, -1, 0), (-1, 1, 0), (1, -1, 0).
As a further aspect of the present invention, during scene rendering OpenGL (Open Graphics Library) is invoked to perform the rendering, and the processing is split into cases when drawing reaches a designated camera node. Rendering the changing parts of the scene data into the changing-part texture pictures for caching includes:
updating the camera poses of the slave cameras C_c and C_t with the camera pose of the master camera C, where the camera pose consists of a view transformation matrix, a projection matrix, and a viewport matrix and determines the viewing direction and imaging size of the current camera;
rendering, through OpenGL's glBindFramebufferEXT mechanism, the depth buffer and color buffer of slave camera C_c to texture pictures CT_c and DT_c and those of slave camera C_t to texture pictures CT_t and DT_t, where CT_c and DT_c are the texture picture CT and texture picture DT corresponding to the opaque partial scene S_c of slave camera C_c, and CT_t and DT_t are the texture picture CT and texture picture DT corresponding to the transparent partial scene S_t of slave camera C_t.
Further, rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame includes:
recording whether the camera matrix of the current slave camera C_f differs from that of the master camera C, and storing the result in a variable B_c;
comparing the current scene elements with those of the previous frame to determine whether they have changed, and storing the result in a variable B_s;
deciding from B_c and B_s whether slave camera C_f needs to be re-rendered: if so, its depth buffer and color buffer are rendered to texture pictures CT_f and DT_f respectively; otherwise the textures of the previous frame are reused.
Further, mixing the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result includes:
based on the obtained texture pictures CT_c, DT_c, CT_t, DT_t, CT_f, and DT_f, binding all of the maps to the HUD camera C_h by calling OpenGL's glBindTexture function, and rendering the rectangle under the HUD camera C_h fragment by fragment with a custom OpenGL rendering pipeline (shader), where the rendering content is split (rasterized) into individual pixels by the OpenGL pipeline and each fragment corresponds to one pixel.
Further, the fragment-by-fragment rendering uses a fragment shader: the shader obtains the texture coordinate (x, y) of the fragment currently being rendered through gl_TexCoord, computes the fragment color value at that texture coordinate, and fills it into the FrameBuffer for subsequent use.
Further, the algorithmic expression of the fragment shader is given as a figure in the original publication. In it, CT(x, y) denotes fetching the RGBA color value at coordinates (x, y) of the current color texture, DT(x, y) denotes fetching the depth value at coordinates (x, y) of the current depth texture, and mix denotes blending two RGBA colors; the subscripts f, c, and t of CT(x, y) and DT(x, y) indicate the texture pictures corresponding to the slave cameras C_f, C_c, and C_t respectively.
As a further aspect of the present invention, after scene rendering the result is presented by swapping buffers with the glSwapBuffer call used alongside OpenGL, the rendering result is submitted to the interface, and the scene rendering operation is repeated to start rendering the next frame.
The invention also includes a three-dimensional scene rendering system that uses the above three-dimensional scene rendering method to render the current frame scene; the three-dimensional scene rendering system includes:
a scene division module for defining the current rendering scene as S and dividing the current rendering scene S into a fixed part scene SfAnd a change part scene, whereinSplitting the changed part scene into transparent part scene StWith opaque parts of the scene Sc
A camera creation module for creating a camera C as a master camera for currently rendering a scene S, and creating three slave cameras C under the master camera Cf、Cc、CtLoad scene S separatelyf、Sc、StCreating two texture pictures for rendering and using in the current window pixel size for each camera; and
the scene rendering module is used for rendering the scene of the data change part into the texture picture of the change part for caching; will fix part of scene SfRendering to a frame scene cache object, and multiplexing texture pictures of the previous frame scene; and mixing the cached texture picture data of the changed part and the cached texture picture data of the frame scene caching object rendered by the fixed part to obtain a rendering result, submitting the rendering result to an interface for rendering, and repeating the scene rendering operation to start the rendering of the next frame of scene.
The invention also includes a computer device comprising a memory storing a computer program and a processor that implements the steps of the three-dimensional scene rendering method when executing the computer program.
The invention also includes a storage medium storing a computer program which, when executed by a processor, performs the steps of the three-dimensional scene rendering method.
The technical scheme provided by the invention has the following beneficial effects:
the method is analyzed from the perspective of three-dimensional design, a scene only needs to be completely rendered when a viewport changes, and in most cases, only a changed part needs to be processed, taking drawing in the three-dimensional design as an example, most of the scene is unchanged, only the drawn part needs to be redrawn, and the currently drawn part is only a small part of the scene; by caching the rendering data of the current main body and multiplexing and synthesizing the dynamically modified part in the rendering of each frame of scene later, the rendering content of the current frame of scene can be greatly reduced, and the interaction fluency can be improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a three-dimensional scene rendering method according to embodiment 1 of the present invention.
Fig. 2 is a rendering flowchart of a three-dimensional scene rendering method in embodiment 1 of the present invention.
Fig. 3 is a flowchart of scene rendering in a three-dimensional scene rendering method according to embodiment 1 of the present invention.
Fig. 4 is a flowchart of a rendering result obtained in the three-dimensional scene rendering method in embodiment 1 of the present invention.
Fig. 5 is a system block diagram of a three-dimensional scene rendering system in embodiment 2 of the present invention.
Fig. 6 is a schematic diagram of a scenario in which the present invention is not changed when the guard room example is placed in the entire substation scenario.
Fig. 7 is a schematic diagram of a scenario that changes when a guard room example is placed in an overall substation scenario, according to the present invention.
Fig. 8 is a schematic diagram of the final outcome of the invention when applied to an example of a guard room placed in a whole substation scenario.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problem that, in common interactions such as primitive drawing and scene layout on a large scene, rendering performance is poor because the vertices of all models must be submitted to OpenGL in every frame of scene rendering, the invention provides a three-dimensional scene rendering method, system, device, and storage medium.
Example 1
As shown in fig. 1 and fig. 2, the present embodiment provides a three-dimensional scene rendering method, which includes the following steps:
and S1, dividing the three-dimensional model scene data.
In this embodiment, the current rendering scene is defined as S and divided into a fixed partial scene S_f and a changing partial scene, where the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c. After division, the current rendering scene S, the fixed partial scene S_f, the transparent partial scene S_t, and the opaque partial scene S_c satisfy the relationship S = S_f ∪ S_c ∪ S_t, where ∪ denotes the set union operation.
In the divided three-dimensional model scene data, each member model in the scene is called an element. The fixed partial scene S_f (Fix Scene) is the set of elements that are generally fixed in the scene, usually the model data of the scene; it is modified only when model data is added or deleted.
The opaque partial scene S_c (Changing Scene) is the set of frequently changing elements in the scene, usually JIG dynamically drawn data whose models change in real time with the current mouse position.
The transparent partial scene S_t (Transparent Scene) is the set of elements with transparency in the scene, typically rendering auxiliary data and primitives with transparency.
And S2, creating a three-dimensional scene camera.
In this embodiment, a camera C is created as the master camera of the current rendering scene S, and three slave cameras C_f, C_c, and C_t are created under it to load the scene data of S_f, S_c, and S_t respectively; for each camera, two texture pictures at the current window pixel size are created for rendering use. The two texture pictures created per camera are a texture picture CT (color texture) and a texture picture DT (depth texture).
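A minimal sketch of allocating such a CT/DT pair at the window size is shown below; the texture formats and the use of GLEW for extension loading are assumptions, since the patent only specifies a color texture and a depth texture at the current window pixel size.

```cpp
#include <GL/glew.h>

// Sketch (formats assumed): one color texture CT and one depth texture DT
// per camera, sized to the current window.
struct CameraTargets {
    GLuint colorTex = 0; // CT
    GLuint depthTex = 0; // DT
};

CameraTargets createTargets(int winWidth, int winHeight) {
    CameraTargets t;

    glGenTextures(1, &t.colorTex);
    glBindTexture(GL_TEXTURE_2D, t.colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, winWidth, winHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenTextures(1, &t.depthTex);
    glBindTexture(GL_TEXTURE_2D, t.depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, winWidth, winHeight,
                 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    return t;
}
```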
When the three-dimensional scene cameras are created, the camera matrices of the slave cameras C_f, C_c, and C_t are kept consistent with the camera matrix of the master camera C during the rendering of each frame.
Creating the three-dimensional scene cameras also includes: under the master camera C, additionally creating a HUD camera C_h whose rectangle always fills the screen, used to fill textures onto this screen-sized rectangle and thereby display the scene data. The HUD camera C_h is created as follows: according to a fixed viewport matrix M_v and a projection matrix M_p, a corresponding rectangle is added under the HUD camera C_h, giving a rectangle vertex list. The rectangle is the model of the current scene and is always as large as the screen, so the texture filled onto the rectangle is exactly the data seen by the user. Projection here means mapping a point in three-dimensional space to a coordinate position on the screen, and the projection matrix performs this operation: three-dimensional coordinate point × projection matrix = screen coordinate point.
The fixed viewport matrix M_v and the projection matrix M_p are given as matrix figures in the original publication and are not reproduced here.
The list of vertices of the rectangle is (-1, -1, 0), (-1, 1, 0), (1, -1, 0).
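Since M_v and M_p are only given as figures, the following sketch assumes identity matrices and a quad spanning the four NDC corners (the fourth corner (1, 1, 0) is an assumption; the extracted vertex list above shows only three vertices). It is written against legacy fixed-function OpenGL, which the embodiment's use of the EXT framebuffer calls suggests.

```cpp
#include <GL/glew.h>

// Sketch of the screen-filling HUD rectangle. Identity matrices are assumed
// for M_p and M_v so the quad's NDC corners map straight onto the screen.
void drawHudQuad() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();              // assumed M_p
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();              // assumed M_v

    glBegin(GL_QUADS);             // rectangle spanning the whole viewport
    glTexCoord2f(0.f, 0.f); glVertex3f(-1.f, -1.f, 0.f);
    glTexCoord2f(0.f, 1.f); glVertex3f(-1.f,  1.f, 0.f);
    glTexCoord2f(1.f, 1.f); glVertex3f( 1.f,  1.f, 0.f); // assumed fourth corner
    glTexCoord2f(1.f, 0.f); glVertex3f( 1.f, -1.f, 0.f);
    glEnd();
}
```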
In this embodiment, texture in computer graphics covers both texture in the usual sense, i.e. the uneven grooves exhibited by an object's surface, and the color patterns on a smooth object surface, which are more commonly called motifs. The pictures that need to be filled onto a model are generally referred to as textures. The purpose of texture filling here is to save the frame images of the other scenes as pictures for composition.
Not all rendering is implemented through textures in general; this implementation renders into texture pictures so that they can be reused in subsequent rendering.
And S3, rendering the scene.
In this embodiment, OpenGL is invoked to perform scene rendering based on the camera setup above. Referring to fig. 3, the processing is split into cases when drawing reaches a designated camera node, as follows:
s301, rendering the scene of the data change part to the texture picture of the change part for caching.
In this embodiment, the camera poses of the slave cameras C_c and C_t are updated with the camera pose of the master camera C; the camera pose consists of a view transformation matrix, a projection matrix, and a viewport matrix and determines the viewing direction and imaging size of the current camera.
Through OpenGL's glBindFramebufferEXT mechanism, the depth buffer and color buffer of slave camera C_c are rendered to texture pictures CT_c and DT_c, and those of slave camera C_t to texture pictures CT_t and DT_t, where CT_c and DT_c are the texture picture CT and texture picture DT corresponding to the opaque partial scene S_c of slave camera C_c, and CT_t and DT_t are the texture picture CT and texture picture DT corresponding to the transparent partial scene S_t of slave camera C_t.
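A hedged sketch of this render-to-texture step using EXT_framebuffer_object is shown below; the FBO attachment details and the helper names are assumptions, as the embodiment only names the glBindFramebufferEXT call.

```cpp
#include <GL/glew.h>

// Sketch: render one slave camera's sub-scene into its CT/DT pair.
void renderSubSceneToTextures(GLuint fbo, GLuint colorTex, GLuint depthTex,
                              void (*drawSubScene)()) {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, colorTex, 0); // CT receives the color buffer
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_TEXTURE_2D, depthTex, 0); // DT receives the depth buffer

    glClearColor(0.f, 0.f, 0.f, 0.f); // transparent clear so empty pixels mix away later
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawSubScene(); // issue the draw calls of S_c or S_t with the slave camera's pose

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the default framebuffer
}
```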
S302, rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame scene.
In this embodiment, referring to fig. 4, rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame includes the following steps:
S3021, recording whether the camera matrix of the current slave camera C_f differs from that of the master camera C, and storing the result in a variable B_c;
S3022, comparing the current scene elements with those of the previous frame to determine whether they have changed, and storing the result in a variable B_s;
S3023, deciding from B_c and B_s whether slave camera C_f needs to be re-rendered: if so, its depth buffer and color buffer are rendered to texture pictures CT_f and DT_f respectively; otherwise the textures of the previous frame are reused.
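A minimal sketch of the B_c / B_s reuse decision follows; the types, the scene-version counter, and the helper names are illustrative, not from the patent.

```cpp
#include <array>

// Sketch of the reuse decision for the fixed partial scene S_f.
struct FixedSceneCache {
    std::array<float, 16> lastCameraMatrix{}; // pose of C_f in the previous frame
    unsigned long lastSceneVersion = 0;       // bumped when elements are added/removed
};

bool updateFixedScene(FixedSceneCache& cache,
                      const std::array<float, 16>& cameraMatrix,
                      unsigned long sceneVersion,
                      void (*renderFixedSceneToTextures)()) {
    const bool bC = cameraMatrix != cache.lastCameraMatrix; // B_c: camera changed?
    const bool bS = sceneVersion != cache.lastSceneVersion; // B_s: elements changed?

    if (bC || bS) {
        renderFixedSceneToTextures();   // re-render S_f into CT_f / DT_f
        cache.lastCameraMatrix = cameraMatrix;
        cache.lastSceneVersion = sceneVersion;
        return true;                    // textures were refreshed
    }
    return false;                       // previous frame's CT_f / DT_f are reused as-is
}
```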
And S303, mixing the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result.
In this embodiment, based on the obtained texture pictures CT_c, DT_c, CT_t, DT_t, CT_f, and DT_f, all of the maps are bound to the HUD camera C_h by calling OpenGL's glBindTexture function, and the rectangle under the HUD camera C_h is rendered fragment by fragment with a custom OpenGL rendering pipeline (shader), where the rendering content is split (rasterized) into individual pixels by the OpenGL pipeline and each fragment corresponds to one pixel.
The fragment-by-fragment rendering uses a fragment shader: the shader obtains the texture coordinate (x, y) of the fragment currently being rendered through gl_TexCoord, computes the fragment color value at that texture coordinate, and fills it into the FrameBuffer for subsequent use.
The algorithmic expression of the fragment shader is given as a figure in the original publication. In it, CT(x, y) denotes fetching the RGBA color value at coordinates (x, y) of the current color texture, DT(x, y) denotes fetching the depth value at coordinates (x, y) of the current depth texture, and mix denotes blending two RGBA colors; the subscripts f, c, and t of CT(x, y) and DT(x, y) indicate the texture pictures corresponding to the slave cameras C_f, C_c, and C_t respectively.
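The shader sketch below is one plausible reading of this description, written as a GLSL 1.20 string in C++; because the patent's exact mixing expression is only given as a figure, the depth comparison between the fixed and changing layers and the alpha-based mix with the transparent layer are assumptions.

```cpp
// One plausible compositing fragment shader for the HUD rectangle
// (assumed reconstruction; not the patent's verbatim expression).
static const char* kComposeFrag = R"(
    uniform sampler2D CTf, DTf;   // fixed partial scene layer
    uniform sampler2D CTc, DTc;   // opaque changing partial scene layer
    uniform sampler2D CTt;        // transparent partial scene layer

    void main() {
        vec2 xy = gl_TexCoord[0].st;

        // Keep the nearer of the fixed and changing opaque layers.
        float df = texture2D(DTf, xy).r;
        float dc = texture2D(DTc, xy).r;
        vec4 opaque = (dc < df) ? texture2D(CTc, xy) : texture2D(CTf, xy);

        // Blend the transparent layer on top by its alpha (mix of two RGBA colors).
        vec4 trans = texture2D(CTt, xy);
        gl_FragColor = mix(opaque, trans, trans.a);
    }
)";
```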
And S304, submitting the rendering result to an interface for rendering, and repeating the scene rendering operation to start the rendering of the next frame of scene.
In this embodiment, after scene rendering the result is presented by swapping buffers with the glSwapBuffer call used alongside OpenGL, the rendering result is submitted to the interface, and the scene rendering operation is repeated to start rendering the next frame.
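Putting the pieces together, one frame of the flow described in this embodiment might look like the sketch below; every declared helper is a placeholder for a step sketched earlier in this section, not an API from the patent.

```cpp
// Sketch of one frame of the overall flow (all declared helpers are placeholders).
void syncSlaveCameraPoses();                   // copy master camera C's pose to C_f, C_c, C_t
void renderChangingAndTransparentToTextures(); // S_c -> CT_c/DT_c, S_t -> CT_t/DT_t (every frame)
bool updateFixedSceneIfNeeded();               // re-renders S_f only when B_c or B_s indicates a change
void bindLayerTexturesToHudCamera();           // glBindTexture for CT_*/DT_* on the HUD camera C_h
void drawHudQuad();                            // composites the layers via the fragment shader
void presentFrame();                           // swap buffers and hand the result to the interface

void renderFrame() {
    syncSlaveCameraPoses();
    renderChangingAndTransparentToTextures();
    updateFixedSceneIfNeeded();
    bindLayerTexturesToHudCamera();
    drawHudQuad();
    presentFrame();                            // then the next frame repeats the same steps
}
```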
According to this embodiment, by caching the rendering data of the scene's main body and reusing it while compositing only the dynamically modified part in each subsequent frame of scene rendering, the rendering work of the current frame can be greatly reduced and the smoothness of interaction improved.
Example 2
As shown in fig. 5, a three-dimensional scene rendering system provided in an embodiment of the present invention includes a scene division module 100, a camera creation module 200, and a scene rendering module 300.
A scene division module 100, configured to define the current rendering scene as S and divide it into a fixed partial scene S_f and a changing partial scene, where the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c;
a camera creation module 200, configured to create a camera C as the master camera of the current rendering scene S, create three slave cameras C_f, C_c, and C_t under the master camera C to load the scene data of S_f, S_c, and S_t respectively, and create, for each camera, two texture pictures at the current window pixel size for rendering use; and
a scene rendering module 300, configured to render the changing parts of the scene data into the changing-part texture pictures for caching, render the fixed partial scene S_f to a frame scene cache object while reusing the texture picture of the previous frame where possible, mix the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result, submit the rendering result to the interface for display, and repeat the scene rendering operation to start rendering the next frame.
In the scene division module 100, within the divided three-dimensional model scene data, each member model in the scene is called an element. The fixed partial scene S_f (Fix Scene) is the set of elements that are generally fixed in the scene, usually the model data of the scene; it is modified only when model data is added or deleted. The opaque partial scene S_c (Changing Scene) is the set of frequently changing elements in the scene, usually JIG dynamically drawn data whose models change in real time with the current mouse position. The transparent partial scene S_t (Transparent Scene) is the set of elements with transparency in the scene, typically rendering auxiliary data and primitives with transparency.
In the camera creation module 200, the camera matrices of the slave cameras C_f, C_c, and C_t are kept consistent with the camera matrix of the master camera C during the rendering of each frame. The module further creates, under the master camera C, a HUD camera C_h whose rectangle always fills the screen, used to fill textures onto the screen-sized rectangle and thereby display the scene data; the HUD camera C_h is created by adding a corresponding rectangle under it according to a fixed viewport matrix M_v and a projection matrix M_p, giving a rectangle vertex list.
The scene rendering module 300 calls OpenGL (Open Graphics Library) to perform scene rendering, splits the processing into cases when drawing reaches a designated camera node, and updates the camera poses of the slave cameras C_c and C_t with the camera pose of the master camera C; the camera pose consists of a view transformation matrix, a projection matrix, and a viewport matrix and determines the viewing direction and imaging size of the current camera. Through OpenGL's glBindFramebufferEXT mechanism, the depth buffer and color buffer of slave camera C_c are rendered to texture pictures CT_c and DT_c, and those of slave camera C_t to texture pictures CT_t and DT_t, where CT_c and DT_c are the texture picture CT and texture picture DT corresponding to the opaque partial scene S_c of slave camera C_c, and CT_t and DT_t are the texture picture CT and texture picture DT corresponding to the transparent partial scene S_t of slave camera C_t.
Based on the obtained texture pictures CT_c, DT_c, CT_t, DT_t, CT_f, and DT_f, all of the maps are bound to the HUD camera C_h by calling OpenGL's glBindTexture function, and the rectangle under the HUD camera C_h is rendered fragment by fragment with a custom OpenGL rendering pipeline (shader), where the rendering content is split (rasterized) into individual pixels by the OpenGL pipeline and each fragment corresponds to one pixel.
The three-dimensional scene rendering system follows the steps of the three-dimensional scene rendering method described above when executed, so its operation is not described in detail again in this embodiment.
Example 3
In an embodiment of the present invention, there is provided a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the above method embodiment 1 when executing the computer program:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S and dividing it into a fixed partial scene S_f and a changing partial scene, where the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c;
Secondly, creating a three-dimensional scene camera;
creating a camera C as the master camera of the current rendering scene S, creating three slave cameras C_f, C_c, and C_t under the master camera C to load the scene data of S_f, S_c, and S_t respectively, and creating, for each camera, two texture pictures at the current window pixel size for rendering use;
thirdly, rendering a scene;
rendering the changing parts of the scene data into the changing-part texture pictures for caching; rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame where possible; and mixing the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result, submitting the rendering result to the interface for display, and repeating the scene rendering operation to start rendering the next frame.
Example 4
In an embodiment of the present invention, a storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the above-mentioned method embodiments:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S and dividing it into a fixed partial scene S_f and a changing partial scene, where the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c;
Secondly, creating a three-dimensional scene camera;
creating a camera C as the master camera of the current rendering scene S, creating three slave cameras C_f, C_c, and C_t under the master camera C to load the scene data of S_f, S_c, and S_t respectively, and creating, for each camera, two texture pictures at the current window pixel size for rendering use;
thirdly, rendering a scene;
rendering the changing parts of the scene data into the changing-part texture pictures for caching; rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame where possible; and mixing the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result, submitting the rendering result to the interface for display, and repeating the scene rendering operation to start rendering the next frame.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory.
Take placing a guard room in a whole substation scene as an example: the substation model is the unchanging model and the guard room being placed is the changing model, so only the guard room needs to be re-rendered during placement, after which the scenes are mixed to obtain the final result. Fig. 6 shows the unchanging part of the substation scene and fig. 7 the changing part. The scene is divided into an unchanging scene (the substation) and a changing scene (the guard room), which are rendered to their corresponding texture pictures. Fig. 8 shows the final result.
Taking a 750 kV substation with more than a million components as an example, the frame rate of common interactive rendering such as primitive drawing and scene layout can be raised to over 60 frames per second, ensuring smooth interaction.
In summary, analyzed from the perspective of three-dimensional design, a scene needs to be rendered completely only when the viewport changes; in most cases only the changed part needs to be processed. Taking drawing in three-dimensional design as an example, most of the scene is unchanged, only the drawn part needs to be redrawn, and the currently drawn part is generally only a small fraction of the scene. Based on this analysis, the method achieves efficient rendering for most three-dimensional scenes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A three-dimensional scene rendering method, characterized by comprising the following steps:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S and dividing it into a fixed partial scene S_f and a changing partial scene, where the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c;
Secondly, creating a three-dimensional scene camera;
creating a camera C as the master camera of the current rendering scene S, creating three slave cameras C_f, C_c, and C_t under the master camera C to load the scene data of S_f, S_c, and S_t respectively, and creating, for each camera, two texture pictures at the current window pixel size for rendering use;
thirdly, rendering a scene;
rendering the changing parts of the scene data into the changing-part texture pictures for caching; rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame where possible; and mixing the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result, submitting the rendering result to the interface for display, and repeating the scene rendering operation to start rendering the next frame.
2. A method of rendering a three-dimensional scene as defined in claim 1, characterized in that: in the divided three-dimensional model scene data, the current rendering scene S, the fixed partial scene S_f, the transparent partial scene S_t, and the opaque partial scene S_c satisfy the relationship S = S_f ∪ S_c ∪ S_t, where ∪ denotes the set union operation.
3. A method of rendering a three-dimensional scene as defined in claim 2, characterized in that: when the three-dimensional scene cameras are created, the two texture pictures created for each camera at the current window pixel size for rendering use are a texture picture CT and a texture picture DT.
4. A method of rendering a three-dimensional scene as defined in claim 3, characterized in that: creating the three-dimensional scene cameras also includes: under the master camera C, additionally creating a HUD camera C_h whose rectangle always fills the screen, used to fill textures onto this screen-sized rectangle and thereby display the scene data; the HUD camera C_h is created as follows: according to a fixed viewport matrix M_v and a projection matrix M_p, a corresponding rectangle is added under the HUD camera C_h, giving a rectangle vertex list;
the fixed viewport matrix M_v and the projection matrix M_p are given as matrix figures in the original publication and are not reproduced here;
the list of vertices of the rectangle is (-1, -1, 0), (-1, 1, 0), (1, -1, 0).
5. A method of rendering a three-dimensional scene as defined in claim 4, characterized in that: during scene rendering, OpenGL is invoked to perform the rendering, and the processing is split into cases when drawing reaches a designated camera node; rendering the changing parts of the scene data into the changing-part texture pictures for caching includes:
updating the camera poses of the slave cameras C_c and C_t with the camera pose of the master camera C, wherein the camera pose consists of a view transformation matrix, a projection matrix, and a viewport matrix;
rendering, through OpenGL's glBindFramebufferEXT mechanism, the depth buffer and color buffer of slave camera C_c to texture pictures CT_c and DT_c and those of slave camera C_t to texture pictures CT_t and DT_t, wherein CT_c and DT_c are the texture picture CT and texture picture DT corresponding to the opaque partial scene S_c of slave camera C_c, and CT_t and DT_t are the texture picture CT and texture picture DT corresponding to the transparent partial scene S_t of slave camera C_t.
6. A method of rendering a three-dimensional scene as defined in claim 5, characterized in that: rendering the fixed partial scene S_f to a frame scene cache object and reusing the texture picture of the previous frame includes:
recording whether the camera matrix of the current slave camera C_f differs from that of the master camera C, and storing the result in a variable B_c;
comparing the current scene elements with those of the previous frame to determine whether they have changed, and storing the result in a variable B_s;
deciding from B_c and B_s whether slave camera C_f needs to be re-rendered: if so, its depth buffer and color buffer are rendered to texture pictures CT_f and DT_f respectively; otherwise the textures of the previous frame are reused.
7. A method of rendering a three-dimensional scene as defined in claim 6, characterized in that: mixing the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result includes:
based on the obtained texture pictures CT_c, DT_c, CT_t, DT_t, CT_f, and DT_f, binding all of the maps to the HUD camera C_h by calling OpenGL's glBindTexture function, and rendering the rectangle under the HUD camera C_h fragment by fragment with a custom OpenGL rendering pipeline, wherein the rendering content is split into individual pixels by the OpenGL rendering pipeline and each fragment corresponds to one pixel.
8. A three-dimensional scene rendering system, characterized in that: the three-dimensional scene rendering system uses the three-dimensional scene rendering method of any one of claims 1 to 7 to render the current frame scene; the three-dimensional scene rendering system includes:
a scene division module, configured to define the current rendering scene as S and divide it into a fixed partial scene S_f and a changing partial scene, where the changing partial scene is further split into a transparent partial scene S_t and an opaque partial scene S_c;
a camera creation module, configured to create a camera C as the master camera of the current rendering scene S, create three slave cameras C_f, C_c, and C_t under the master camera C to load the scene data of S_f, S_c, and S_t respectively, and create, for each camera, two texture pictures at the current window pixel size for rendering use; and
a scene rendering module, configured to render the changing parts of the scene data into the changing-part texture pictures for caching, render the fixed partial scene S_f to a frame scene cache object while reusing the texture picture of the previous frame where possible, mix the cached changing-part texture picture data with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result, submit the rendering result to the interface for display, and repeat the scene rendering operation to start rendering the next frame.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A storage medium storing a computer program, characterized in that the computer program, when being executed by a processor, realizes the steps of the method of any one of claims 1 to 7.
CN202111304819.3A 2021-11-05 2021-11-05 Three-dimensional scene rendering method, system, device and storage medium Active CN114241101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111304819.3A CN114241101B (en) 2021-11-05 2021-11-05 Three-dimensional scene rendering method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111304819.3A CN114241101B (en) 2021-11-05 2021-11-05 Three-dimensional scene rendering method, system, device and storage medium

Publications (2)

Publication Number Publication Date
CN114241101A true CN114241101A (en) 2022-03-25
CN114241101B CN114241101B (en) 2024-11-22

Family

ID=80748495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111304819.3A Active CN114241101B (en) 2021-11-05 2021-11-05 Three-dimensional scene rendering method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN114241101B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063538A (en) * 2022-07-13 2022-09-16 北京恒泰实达科技股份有限公司 Three-dimensional visual scene organization method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150178977A1 (en) * 2013-05-14 2015-06-25 Google Inc. Rendering Vector Maps in a Geographic Information System
CN108460823A (en) * 2018-02-11 2018-08-28 浙江科澜信息技术有限公司 A kind of display methods and system of rendering three-dimensional scenes model
CN113112581A (en) * 2021-05-13 2021-07-13 广东三维家信息科技有限公司 Texture map generation method, device and equipment for three-dimensional model and storage medium
WO2021174659A1 (en) * 2020-03-04 2021-09-10 杭州群核信息技术有限公司 Webgl-based progressive real-time rendering method for editable large scene
CN113379886A (en) * 2021-07-05 2021-09-10 中煤航测遥感集团有限公司 Three-dimensional rendering method, device and equipment of geographic information system and storage medium


Also Published As

Publication number Publication date
CN114241101B (en) 2024-11-22

Similar Documents

Publication Publication Date Title
US12347016B2 (en) Image rendering method and apparatus, device, medium, and computer program product
CN112270756B (en) Data rendering method applied to BIM model file
Schneider GPU-friendly high-quality terrain rendering
US8698809B2 (en) Creation and rendering of hierarchical digital multimedia data
US9275493B2 (en) Rendering vector maps in a geographic information system
CN112184575A (en) Image rendering method and device
US9799134B2 (en) Method and system for high-performance real-time adjustment of one or more elements in a playing video, interactive 360° content or image
US20090195541A1 (en) Rendering dynamic objects using geometry level-of-detail in a graphics processing unit
CN109035383B (en) Volume cloud drawing method and device and computer readable storage medium
JP2005228320A (en) High-speed visualization method, apparatus, and program for 3D graphic data based on depth image
KR102659643B1 (en) Residency Map Descriptor
KR102442488B1 (en) Graphics processing system and graphics processor
CN114842127B (en) Terrain rendering method and device, electronic equipment, medium and product
US20060139361A1 (en) Adaptive image interpolation for volume rendering
KR20160068204A (en) Data processing method for mesh geometry and computer readable storage medium of recording the same
JP2019207450A (en) Volume rendering apparatus
CN114241101B (en) Three-dimensional scene rendering method, system, device and storage medium
JP4047421B2 (en) Efficient rendering method and apparatus using user-defined rooms and windows
CN120259520A (en) Element rendering method, device, equipment, storage medium and program product
US11869123B2 (en) Anti-aliasing two-dimensional vector graphics using a compressed vertex buffer
Schütz¹ et al. Splatshop: Efficiently Editing Large Gaussian Splat Models
JPH11296696A (en) Three-dimensional image processor
US20250069319A1 (en) Multi-channel disocclusion mask for interpolated frame recertification
US20250191120A1 (en) Motion vector field generation for frame interpolation
Okuya et al. Reproduction of perspective in cel animation 2D composition for real-time 3D rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant