
CN118429487B - A method and system for real-time rendering of animation scenes - Google Patents

A method and system for real-time rendering of animation scenes

Info

Publication number
CN118429487B
CN118429487B
Authority
CN
China
Prior art keywords
real
time
rendering
scene
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410519307.6A
Other languages
Chinese (zh)
Other versions
CN118429487A (en)
Inventor
姚凤丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Leyi Animation Culture Co., Ltd.
Original Assignee
Guangzhou Leyi Animation Culture Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Leyi Animation Culture Co., Ltd.
Priority to CN202410519307.6A
Publication of CN118429487A
Application granted
Publication of CN118429487B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract


The present invention provides a method and system for real-time rendering of an animation scene, and relates to the technical field of scene rendering. The method comprises obtaining basic data of an animation scene, and tracking the head perspective in real time to form perspective tracking real-time information; performing a partitioning process based on gaze features on the real-time perspective range according to the perspective tracking real-time information to form perspective rendering partition data; performing scene rendering on the real-time perspective range according to the perspective rendering partition data to form real-time scene rendering data. The method can perform reasonable rendering partitioning for scenes in the perspective range to obtain an animation scene rendering effect that is efficient, resource-saving, and fully guarantees the picture experience.

Description

Method and system for real-time rendering of animation scenes
Technical Field
The invention relates to the technical field of scene rendering, and in particular to a method and a system for real-time rendering of animation scenes.
Background
Rendering is one of the key stages of animation production: an animation design is converted into concrete images through the combined configuration of elements such as models, lighting, materials, and shadows. Rendering approaches also vary; weighing cost against visual-quality requirements, rendering may cover the whole scene or target gaze features.
Currently, animation is mostly rendered in real time over the scene range covered by the viewing angle. However, the whole viewing-angle range is typically rendered at a uniform depth, without distinguishing targets. On one hand, the rendering workload is large, and deeply rendering large areas that carry no gaze features wastes rendering resources and raises the cost of scene rendering. On the other hand, the heavy workload places higher demands on device performance during real-time rendering and reduces the real-time responsiveness of the displayed scene, which degrades rather than improves the viewing experience.
Therefore, designing a real-time animation scene rendering method and system that can reasonably partition the scene within the viewing-angle range for rendering, so as to obtain a rendering effect that is efficient, resource-saving, and fully preserves the picture experience, is a problem to be solved at present.
Disclosure of Invention
The invention aims to provide a real-time animation scene rendering method, which is used for determining the real-time visual angle range of an animation scene covered by a head visual angle in a real-time state by tracking the real-time change condition of the head visual angle, further rendering the scene in a scene area limited by the determined real-time visual angle range, greatly reducing the rendering workload relative to the whole rendering of the scene, and effectively ensuring the picture experience of the animation scene in the visual angle range. Meanwhile, reasonable partition processing is carried out by considering the gazing features of the scene area in the real-time visual angle range, partition data aiming at different feature conditions are formed, and important partition information is provided for subsequent targeted reasonable and efficient partition rendering. On the one hand, the effect of rendering the objects with the gazing features in the range of the real-time visual angles can be highlighted, the effect of improving the picture feeling of the cartoon scene is achieved, on the other hand, the unnecessary consumption of rendering resources caused by unified depth rendering is avoided, the rendering resources are effectively saved under the condition of fully meeting the rendering requirements, the rendering effect is further improved, and the timeliness and the effect of real-time scene display are ensured.
The invention also aims to provide a cartoon scene real-time rendering system which effectively stores scene basic data for rendering through a scene data storage unit. The view angle tracking analysis unit performs real-time tracking of view angle positions on the basis of the integrated scene basic data, accurately determines the positions of the head view angles in a real-time state, and further determines scene areas defined by the head view angles in real time based on scene data information. And then, the determined real-time visual angle area is subjected to partition processing of rendering by using a scene rendering partition processing unit, so that different types of rendering areas are formed. And utilizing the real-time rendering processing unit to conduct targeted rendering on the divided different rendering areas. Each unit of the whole rendering system has an efficient and specific data processing function, and the units are combined with each other to form a closely related scene real-time rendering processing system. On one hand, the method can effectively ensure efficient and timely rendering of the scene in the real-time visual angle range, and on the other hand, an important material basis is provided for real-time rendering of the cartoon scene.
In a first aspect, the invention provides an animation scene real-time rendering method, comprising: obtaining animation scene basic data and tracking the head viewing angle in real time to form viewing-angle tracking real-time information; performing gaze-feature-based partition processing on the real-time viewing-angle range according to the viewing-angle tracking real-time information to form viewing-angle rendering partition data; and performing scene rendering on the real-time viewing-angle range according to the viewing-angle rendering partition data to form real-time scene rendering data.
According to the method, the real-time view angle range of the cartoon scene covered by the head view angle in the real-time state is determined by tracking the real-time change condition of the head view angle in the cartoon scene, so that the scene is rendered in the scene area defined by the determined real-time view angle range, the workload of rendering can be greatly reduced relative to the whole scene, and the picture experience sense of the cartoon scene in the view angle range is effectively ensured. Meanwhile, reasonable partition processing is carried out by considering the gazing features of the scene area in the real-time visual angle range, partition data aiming at different feature conditions are formed, and important partition information is provided for subsequent targeted reasonable and efficient partition rendering. On the one hand, the effect of rendering the objects with the gazing features in the range of the real-time visual angles can be highlighted, the effect of improving the picture feeling of the cartoon scene is achieved, on the other hand, the unnecessary consumption of rendering resources caused by unified depth rendering is avoided, the rendering resources are effectively saved under the condition of fully meeting the rendering requirements, the rendering effect is further improved, and the timeliness and the effect of real-time scene display are ensured.
As one possible implementation, acquiring the animation scene basic data and tracking the head viewing angle in real time to form viewing-angle tracking real-time information comprises: acquiring the animation scene basic data and determining the initial position and initial viewing angle of the head viewing angle in the scene; collecting the real-time movement distance of the head viewing angle and, combined with the initial position, determining the real-time position of the head viewing angle in the scene; collecting the real-time change value of the viewing angle and, combined with the initial viewing angle, determining the real-time viewing angle of the head viewing angle in the scene; and combining the real-time position and the real-time viewing angle to determine the real-time viewing-angle range of the head viewing angle in the scene.
In the invention, the real-time tracking of the head view angle in the cartoon scene to acquire the tracking information in the real-time state mainly acquires the data information in two aspects. One is the position data of the head view angle in the cartoon scene, and the different positions of the head view angle in the scene considering the information of the depth of field directly affect the scene view angle range defined by the head view angle. The other is view angle change data of the head view angle, and the view angle change of the head view angle can directly influence the change of the scene under the view angle range. Based on the position data and the view angle change data of the head view angle, the real-time view angle range limited by the head view angle in the real-time state can be determined by combining the initial position data and the view angle data of the head view angle and the scene information, so that the scene area range needing to be rendered in the current real-time state is determined, and a basic and accurate rendering range reference is provided for real-time rendering of the scene.
As a possible implementation, performing gaze-feature-based partition processing on the real-time viewing-angle range according to the viewing-angle tracking real-time information to form viewing-angle rendering partition data comprises: determining the gaze feature objects of the scene within the real-time viewing-angle range; obtaining scene depth information and, combined with the position of the head viewing angle relative to the gaze feature objects, determining the scene areas covered by the gaze feature objects to form the real-time gaze feature region; performing rendering-efficiency-based expansion analysis on the real-time gaze feature region to determine the real-time gaze feature expansion region; determining the real-time non-gaze feature region from the real-time gaze feature region combined with the real-time viewing-angle range; and combining the real-time gaze feature region, the real-time gaze feature expansion region, and the real-time non-gaze feature region to form the viewing-angle rendering partition data.
In the invention, by partitioning the real-time viewing-angle range based on gaze features, the actual conditions of real-time processing are fully considered, and the scene within the real-time viewing-angle range is divided into a real-time gaze feature region, a real-time gaze feature expansion region, and a real-time non-gaze feature region. The real-time gaze feature region is the scene region covered by the gaze feature objects under the head viewing angle. The real-time non-gaze feature region is the scene region remaining within the real-time viewing-angle range once the real-time gaze feature region is excluded. The real-time gaze feature expansion region is the set of regions that may be occluded in the next step, obtained by predictive analysis of both the position change of the gaze feature object itself within the scene and the position change of the head viewing angle during real-time rendering. This region is essentially a band around the gaze feature that is larger than the current gaze feature region. It is mainly pre-rendered in the real-time rendering state: once the change of the occluded area of the gaze feature object is accurately determined in a subsequent real-time state, the corresponding pre-rendered result can be supplied directly, thereby ensuring the timeliness of the real-time rendering operation.
As a possible implementation manner, performing the rendering-efficiency-based expansion analysis on the real-time gaze feature region to determine the real-time gaze feature expansion region comprises: extracting the boundary information of the real-time gaze feature region to form a real-time gaze feature boundary; acquiring the viewing-angle change speed limit V_view of the head viewing angle and the effective scene rendering rate V_rend, and determining the expansion width H according to the following formula: H = α*V_view/V_rend, wherein α represents a width adjustment coefficient; expanding the real-time gaze feature boundary outward by the expansion width H to form a real-time gaze feature expansion boundary; and determining the region between the real-time gaze feature boundary and the real-time gaze feature expansion boundary as the real-time gaze feature expansion region.
In the present invention, a manner of determining the width of the real-time gaze feature expansion region is provided here. The real-time gaze feature expansion region must supply, in time, pre-rendered data for the new scene occlusion that the gaze feature object may form from the current state, so that the rendering effect stays timely; the rendering updates of other positions, performed synchronously after this pre-rendered scene data is supplied, must therefore keep pace with the speed at which results are displayed. The width of the expansion region consequently needs to be determined from the rendering rate and the angle change speed of the head viewing angle: the pre-rendered occlusion is, after all, the joint result of the angle change of the head viewing angle and the position change of the gaze feature object. The viewing-angle change speed limit is measured as the number of pixels that the maximum allowed angular change speed can cross in the linear direction, and the rendering rate as the number of pixels that can be rendered in the linear direction per unit time. The width adjustment coefficient is set according to the actual rendering requirements, adjusting the amount of rendering so that the timeliness of the displayed rendering effect is fully guaranteed.
As one possible implementation, the viewing-angle change speed limit V_view of the head viewing angle is obtained by determining the maximum movement speed V_mov of the head viewing angle, determining the minimum depth of field L within the real-time viewing-angle range, determining the maximum viewing-angle change speed V_ang of the head viewing angle, and determining the viewing-angle change speed limit V_view from the maximum movement speed V_mov, the minimum depth of field L, and the maximum viewing-angle change speed V_ang, wherein V_view = V_mov + L*tan(V_ang).
In the present invention, since the maximum movement speed of the head viewing angle is also related to its position in the scene, the viewing-angle change speed limit should be a speed limit over the real-time viewing-angle range. In other words, taking the depth of field into account, the given maximum rotation speed of the head viewing angle is converted through the triangle relation at the depth of field, yielding the actual speed at which the gaze sweeps the scene to be rendered within the real-time viewing-angle range.
As a possible implementation, performing scene rendering on the real-time viewing-angle range according to the viewing-angle rendering partition data to form real-time scene rendering data comprises: obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data; obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data; obtaining the real-time non-gaze feature region and performing rendering-gradient-based depth rendering on it to form real-time non-gaze feature region rendering data; and pre-storing the real-time gaze feature expansion region rendering data while combining the real-time gaze feature region rendering data and the real-time non-gaze feature region rendering data to form the real-time viewing-angle scene rendering data.
In the invention, after the rendering partition data for the real-time viewing-angle range is obtained, rendering the real-time gaze feature region means, on the one hand, depth rendering of the gaze feature object, to ensure the object meets the demands of the visual experience, and on the other hand de-duplicating the occluded part of the scene, to reduce the rendering workload and improve rendering efficiency. The rendering of the real-time gaze feature expansion region is a de-duplication rendering carried out under prediction of the gaze feature object's position. The rendering of the real-time non-gaze feature region applies a gradient change of rendering depth, obtaining the best visual experience while keeping the rendering workload reasonable and saving the resources rendering consumes. Because the rendering of the expansion region is produced under prediction of the gaze feature object's subsequent positions, its result is not displayed directly in the scene in the current real-time state; it is temporarily stored as preparatory data for subsequent scene changes, supplying initial rendering data centered on the gaze feature object for later rendering and improving the timeliness of subsequent scene rendering.
As a possible implementation, obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data comprises: determining the rear-end scene area occluded by the real-time gaze feature objects according to the real-time viewing-angle range; removing the scene information of that rear-end area; and performing depth rendering of the corresponding real-time gaze feature objects on the real-time gaze feature region to form the real-time gaze feature region rendering data.
In the invention, the de-duplication rendering of the real-time gaze feature region screens out the rear-end scene once the scene area occluded behind the gaze feature object has been determined, and then performs the depth rendering of the gaze feature object.
As a possible implementation, obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data comprises: performing prediction change analysis on the real-time viewing-angle range to form different predicted viewing-angle ranges; performing occlusion analysis of the gaze feature objects within the real-time gaze feature expansion region based on the predicted changes of the different predicted viewing-angle ranges to form different predicted-change occlusion ranges; performing de-duplication analysis of each predicted-change occlusion range on the real-time gaze feature expansion region to form the corresponding expansion occlusion sub-area and expansion scene expression sub-area under each predicted viewing-angle range; performing depth rendering on the expansion occlusion sub-areas to form expansion gaze feature sub-area rendering data corresponding to the predicted viewing-angle ranges; performing depth rendering on the expansion scene expression sub-areas corresponding to the predicted viewing-angle ranges to form expansion scene expression sub-area rendering data; associating the expansion gaze feature sub-area rendering data and the expansion scene expression sub-area rendering data under each predicted viewing-angle range to form predicted gaze feature expansion rendering data; and combining the predicted gaze feature expansion rendering data of all predicted viewing-angle ranges to form the real-time gaze feature expansion region rendering data.
In the invention, the rendering of the real-time gaze feature expansion region is considered under prediction of the gaze feature object's subsequent position changes; since several subsequent positions are possible, several different occlusion situations can arise within the expansion region. Therefore, during rendering, de-duplication is performed for each occlusion case and the gaze feature object is depth-rendered at the predicted position that causes the occlusion.
As a possible implementation, the rendering-gradient-based depth rendering of the real-time non-gaze feature region comprises: setting a rendering gradient; performing depth rendering outward according to the rendering gradient, starting from the boundary of the real-time gaze feature expansion region, until the rendering extends to the boundary of the real-time viewing-angle range or meets the outward gradient rendering expanding from the boundary of another real-time gaze feature expansion region; the depth rendering is complete when its expansion covers the whole real-time non-gaze feature region.
In the invention, considering that the real-time non-gaze feature region occupies the largest share of the range, and that these scene regions are not the main focus of the viewing angle, they can be rendered at a reasonable depth relative to the gaze feature region. Since attention within the viewing angle is regional, a gradient value of rendering depth is set so that the rendering depth gradually decreases from the boundary adjacent to the real-time gaze feature region toward the boundary far from it, thereby realizing the rendering of the real-time non-gaze feature region. On the one hand this reduces the rendering workload and uses rendering resources reasonably; on the other hand it fully ensures that the region's rendering reasonably complements the picture experience.
In a second aspect, the invention provides an animation scene real-time rendering system, applied to the animation scene real-time rendering method of the first aspect, and comprising a scene data storage unit, a viewing-angle tracking analysis unit, a scene rendering partition processing unit, and a real-time rendering processing unit. The scene data storage unit is used for storing the animation scene basic data to be rendered. The viewing-angle tracking analysis unit is used for retrieving the animation scene basic data stored by the scene data storage unit and tracking the head viewing angle in real time to form viewing-angle tracking real-time information under the animation scene. The scene rendering partition processing unit is used for acquiring the viewing-angle tracking real-time information formed by the viewing-angle tracking analysis unit and performing partition processing based on it to form viewing-angle rendering partition data. The real-time rendering processing unit is used for acquiring the viewing-angle rendering partition data formed by the scene rendering partition processing unit and performing real-time rendering processing on the animation scene according to the viewing-angle rendering partition data.
In the present invention, the system effectively stores scene basic data for rendering through a scene data storage unit. The view angle tracking analysis unit performs real-time tracking of view angle positions on the basis of the integrated scene basic data, accurately determines the positions of the head view angles in a real-time state, and further determines scene areas defined by the head view angles in real time based on scene data information. And then, the determined real-time visual angle area is subjected to partition processing of rendering by using a scene rendering partition processing unit, so that different types of rendering areas are formed. And utilizing the real-time rendering processing unit to conduct targeted rendering on the divided different rendering areas. Each unit of the whole rendering system has an efficient and specific data processing function, and the units are combined with each other to form a closely related scene real-time rendering processing system. On one hand, the method can effectively ensure efficient and timely rendering of the scene in the real-time visual angle range, and on the other hand, an important material basis is provided for real-time rendering of the cartoon scene.
The animation scene real-time rendering method and system provided by the invention have the beneficial effects that:
According to the method, the real-time view angle range of the cartoon scene covered by the head view angle in the real-time state is determined by tracking the real-time change condition of the head view angle in the cartoon scene, and then the scene is rendered in the scene area defined by the determined real-time view angle range, so that the workload of rendering can be greatly reduced compared with the whole scene, and the picture experience of the cartoon scene in the view angle range is effectively ensured. Meanwhile, reasonable partition processing is carried out by considering the gazing features of the scene area in the real-time visual angle range, partition data aiming at different feature conditions are formed, and important partition information is provided for subsequent targeted reasonable and efficient partition rendering. On the one hand, the effect of rendering the objects with the gazing features in the range of the real-time visual angles can be highlighted, the effect of improving the picture feeling of the cartoon scene is achieved, on the other hand, the unnecessary consumption of rendering resources caused by unified depth rendering is avoided, the rendering resources are effectively saved under the condition of fully meeting the rendering requirements, the rendering effect is further improved, and the timeliness and the effect of real-time scene display are ensured.
The system effectively stores scene basic data for rendering through a scene data storage unit. The view angle tracking analysis unit performs real-time tracking of view angle positions on the basis of the integrated scene basic data, accurately determines the positions of the head view angles in a real-time state, and further determines scene areas defined by the head view angles in real time based on scene data information. And then, the determined real-time visual angle area is subjected to partition processing of rendering by using a scene rendering partition processing unit, so that different types of rendering areas are formed. And utilizing the real-time rendering processing unit to conduct targeted rendering on the divided different rendering areas. Each unit of the whole rendering system has an efficient and specific data processing function, and the units are combined with each other to form a closely related scene real-time rendering processing system. On one hand, the method can effectively ensure efficient and timely rendering of the scene in the real-time visual angle range, and on the other hand, an important material basis is provided for real-time rendering of the cartoon scene.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments of the present invention will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and should not be considered as limiting the scope, and other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a step diagram of a real-time animation scene rendering method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention.
Rendering is one of the key stages of animation production: an animation design is converted into concrete images through the combined configuration of elements such as models, lighting, materials, and shadows. Rendering approaches also vary; weighing cost against visual-quality requirements, rendering may cover the whole scene or target gaze features.
Currently, animation is mostly rendered in real time over the scene range covered by the viewing angle. However, the whole viewing-angle range is typically rendered at a uniform depth, without distinguishing targets. On one hand, the rendering workload is large, and deeply rendering large areas that carry no gaze features wastes rendering resources and raises the cost of scene rendering. On the other hand, the heavy workload places higher demands on device performance during real-time rendering and reduces the real-time responsiveness of the displayed scene, which degrades rather than improves the viewing experience.
Referring to fig. 1, an embodiment of the present invention provides a real-time animation scene rendering method, which determines a real-time viewing angle range of an animation scene covered by a head viewing angle in a real-time state by tracking a real-time variation condition of the head viewing angle in the animation scene, further renders the scene in a scene area defined by the determined real-time viewing angle range, greatly reduces the workload of rendering compared with the whole rendering, and effectively ensures the picture experience of the animation scene in the viewing angle range. Meanwhile, reasonable partition processing is carried out by considering the gazing features of the scene area in the real-time visual angle range, partition data aiming at different feature conditions are formed, and important partition information is provided for subsequent targeted reasonable and efficient partition rendering. On the one hand, the effect of rendering the objects with the gazing features in the range of the real-time visual angles can be highlighted, the effect of improving the picture feeling of the cartoon scene is achieved, on the other hand, the unnecessary consumption of rendering resources caused by unified depth rendering is avoided, the rendering resources are effectively saved under the condition of fully meeting the rendering requirements, the rendering effect is further improved, and the timeliness and the effect of real-time scene display are ensured.
The animation scene real-time rendering method specifically comprises the following steps:
s1, acquiring animation scene basic data, and tracking the head visual angle in real time to form visual angle tracking real-time information.
Acquiring the animation scene basic data and tracking the head viewing angle in real time to form viewing-angle tracking real-time information comprises: acquiring the animation scene basic data and determining the initial position and initial viewing angle of the head viewing angle in the scene; collecting the real-time movement distance of the head viewing angle and, combined with the initial position, determining the real-time position of the head viewing angle in the scene; collecting the real-time change value of the viewing angle and, combined with the initial viewing angle, determining the real-time viewing angle of the head viewing angle in the scene; and combining the real-time position and the real-time viewing angle to determine the real-time viewing-angle range of the head viewing angle in the scene.
The real-time tracking of the head view angle in the cartoon scene to acquire the tracking information in the real-time state is mainly to acquire data information in two aspects. One is the position data of the head view angle in the cartoon scene, and the different positions of the head view angle in the scene considering the information of the depth of field directly affect the scene view angle range defined by the head view angle. The other is view angle change data of the head view angle, and the view angle change of the head view angle can directly influence the change of the scene under the view angle range. Based on the position data and the view angle change data of the head view angle, the real-time view angle range limited by the head view angle in the real-time state can be determined by combining the initial position data and the view angle data of the head view angle and the scene information, so that the scene area range needing to be rendered in the current real-time state is determined, and a basic and accurate rendering range reference is provided for real-time rendering of the scene.
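To make the tracking step concrete, the following is a minimal Python sketch of how the two kinds of tracking data described above could be folded into a real-time view state; all names (HeadView, track, view_range_edges) and the 2D top-down simplification are illustrative assumptions, not the patented implementation.

    import math
    from dataclasses import dataclass

    @dataclass
    class HeadView:
        x: float      # real-time position in scene coordinates
        y: float
        yaw: float    # real-time viewing direction, degrees
        fov: float    # horizontal field of view, degrees

    def track(view: HeadView, dx: float, dy: float, dyaw: float) -> HeadView:
        # Fold one tracking sample (movement delta and angle delta) into the
        # state that started from the initial position and initial viewing angle.
        view.x += dx
        view.y += dy
        view.yaw = (view.yaw + dyaw) % 360.0
        return view

    def view_range_edges(view: HeadView):
        # Unit direction vectors of the two frustum edges bounding the
        # real-time viewing-angle range (2D top-down simplification).
        def unit(angle_deg: float):
            a = math.radians(angle_deg)
            return (math.cos(a), math.sin(a))
        return unit(view.yaw - view.fov / 2), unit(view.yaw + view.fov / 2)

    # Example: start at the initial pose and apply one tracking sample.
    view = track(HeadView(x=0.0, y=0.0, yaw=90.0, fov=100.0), dx=0.2, dy=0.0, dyaw=-5.0)
    left_edge, right_edge = view_range_edges(view)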
And S2, carrying out partition processing based on the gaze characteristics on the real-time view angle range according to the view angle tracking real-time information to form view angle rendering partition data.
Performing gaze-feature-based partition processing on the real-time viewing-angle range according to the viewing-angle tracking real-time information to form viewing-angle rendering partition data comprises: determining the gaze feature objects of the scene within the real-time viewing-angle range; obtaining scene depth information and, combined with the position of the head viewing angle relative to the gaze feature objects, determining the scene areas covered by the gaze feature objects to form the real-time gaze feature region; performing rendering-efficiency-based expansion analysis on the real-time gaze feature region to determine the real-time gaze feature expansion region; determining the real-time non-gaze feature region from the real-time gaze feature region combined with the real-time viewing-angle range; and combining the real-time gaze feature region, the real-time gaze feature expansion region, and the real-time non-gaze feature region to form the viewing-angle rendering partition data.
Partitioning the real-time viewing-angle range based on gaze features fully considers the actual conditions of real-time processing, dividing the scene within the real-time viewing-angle range into a real-time gaze feature region, a real-time gaze feature expansion region, and a real-time non-gaze feature region. The real-time gaze feature region is the scene region covered by the gaze feature objects under the head viewing angle. The real-time non-gaze feature region is the scene region remaining within the real-time viewing-angle range once the real-time gaze feature region is excluded. The real-time gaze feature expansion region is the set of regions that may be occluded in the next step, obtained by predictive analysis of both the position change of the gaze feature object itself within the scene and the position change of the head viewing angle during real-time rendering. This region is essentially a band around the gaze feature that is larger than the current gaze feature region. It is mainly pre-rendered in the real-time rendering state: once the change of the occluded area of the gaze feature object is accurately determined in a subsequent real-time state, the corresponding pre-rendered result can be supplied directly, thereby ensuring the timeliness of the real-time rendering operation.
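As an illustration of the three partition types described above, the sketch below models them as simple pixel sets; the RegionKind and partition_view names and the set-based representation are assumptions for clarity, not the patent's data format.

    from dataclasses import dataclass
    from enum import Enum

    class RegionKind(Enum):
        GAZE = "real-time gaze feature region"
        GAZE_EXPANSION = "real-time gaze feature expansion region"
        NON_GAZE = "real-time non-gaze feature region"

    @dataclass
    class RenderPartition:
        kind: RegionKind
        pixels: frozenset    # pixel coordinates belonging to this partition

    def partition_view(view_pixels: frozenset, gaze: frozenset, expansion: frozenset):
        # The non-gaze region is whatever remains of the real-time viewing-angle
        # range once the gaze region and its expansion band are excluded.
        non_gaze = view_pixels - gaze - expansion
        return [RenderPartition(RegionKind.GAZE, gaze),
                RenderPartition(RegionKind.GAZE_EXPANSION, expansion),
                RenderPartition(RegionKind.NON_GAZE, non_gaze)]

    # Example with five pixels: one gazed at, one in the expansion band.
    parts = partition_view(frozenset(range(5)), frozenset({0}), frozenset({1}))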
Performing the rendering-efficiency-based expansion analysis on the real-time gaze feature region to determine the real-time gaze feature expansion region comprises: extracting the boundary information of the real-time gaze feature region to form a real-time gaze feature boundary; acquiring the viewing-angle change speed limit V_view of the head viewing angle and the effective scene rendering rate V_rend, and determining the expansion width H according to the following formula: H = α*V_view/V_rend, wherein α represents a width adjustment coefficient; expanding the real-time gaze feature boundary outward by the expansion width H to form a real-time gaze feature expansion boundary; and determining the region between the real-time gaze feature boundary and the real-time gaze feature expansion boundary as the real-time gaze feature expansion region.
A manner of determining the width of the real-time gaze feature expansion region is provided here. The real-time gaze feature expansion region must supply, in time, pre-rendered data for the new scene occlusion that the gaze feature object may form from the current state, so that the rendering effect stays timely; the rendering updates of other positions, performed synchronously after this pre-rendered scene data is supplied, must therefore keep pace with the speed at which results are displayed. The width of the expansion region consequently needs to be determined from the rendering rate and the angle change speed of the head viewing angle: the pre-rendered occlusion is, after all, the joint result of the angle change of the head viewing angle and the position change of the gaze feature object. The viewing-angle change speed limit is measured as the number of pixels that the maximum allowed angular change speed can cross in the linear direction, and the rendering rate as the number of pixels that can be rendered in the linear direction per unit time. The width adjustment coefficient is set according to the actual rendering requirements, adjusting the amount of rendering so that the timeliness of the displayed rendering effect is fully guaranteed.
The viewing-angle change speed limit V_view of the head viewing angle is obtained by determining the maximum movement speed V_mov of the head viewing angle, determining the minimum depth of field L within the real-time viewing-angle range, determining the maximum viewing-angle change speed V_ang of the head viewing angle, and determining the viewing-angle change speed limit V_view from the maximum movement speed V_mov, the minimum depth of field L, and the maximum viewing-angle change speed V_ang, wherein V_view = V_mov + L*tan(V_ang).
Since the maximum movement speed of the head viewing angle is also related to its position in the scene, the viewing-angle change speed limit should be a speed limit over the real-time viewing-angle range. In other words, taking the depth of field into account, the given maximum rotation speed of the head viewing angle is converted through the triangle relation at the depth of field, yielding the actual speed at which the gaze sweeps the scene to be rendered within the real-time viewing-angle range.
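A worked sketch of the two formulas above follows, assuming pixel-per-unit-time units. Note that the exact form of the expansion-width formula is reconstructed from the surrounding description (H grows with the viewing-angle speed limit and shrinks with the effective rendering rate, scaled by α), and the numeric values are purely illustrative.

    import math

    def view_speed_limit(v_mov: float, depth_min: float, v_ang_deg: float) -> float:
        # V_view = V_mov + L * tan(V_ang): the head's maximum movement speed plus
        # its maximum rotation speed converted, at the minimum depth of field L,
        # into a linear pixel-crossing speed.
        return v_mov + depth_min * math.tan(math.radians(v_ang_deg))

    def expansion_width(v_view: float, v_rend: float, alpha: float) -> float:
        # H = alpha * V_view / V_rend (form reconstructed from the description):
        # the faster the view can move relative to the effective rendering rate,
        # the wider the pre-rendered band around the gaze feature region.
        return alpha * v_view / v_rend

    v_view = view_speed_limit(v_mov=120.0, depth_min=3.0, v_ang_deg=30.0)  # ~121.7
    h = expansion_width(v_view, v_rend=5000.0, alpha=400.0)                # ~9.7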
And S3, performing scene rendering on the real-time view angle range according to the view angle rendering partition data to form real-time scene rendering data.
Performing scene rendering on the real-time viewing-angle range according to the viewing-angle rendering partition data to form real-time scene rendering data comprises: obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data; obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data; obtaining the real-time non-gaze feature region and performing rendering-gradient-based depth rendering on it to form real-time non-gaze feature region rendering data; and pre-storing the real-time gaze feature expansion region rendering data while combining the real-time gaze feature region rendering data and the real-time non-gaze feature region rendering data to form the real-time viewing-angle scene rendering data.
After the rendering partition data for the real-time viewing-angle range is obtained, rendering the real-time gaze feature region means, on the one hand, depth rendering of the gaze feature object, to ensure the object meets the demands of the visual experience, and on the other hand de-duplicating the occluded part of the scene, to reduce the rendering workload and improve rendering efficiency. The rendering of the real-time gaze feature expansion region is a de-duplication rendering carried out under prediction of the gaze feature object's position. The rendering of the real-time non-gaze feature region applies a gradient change of rendering depth, obtaining the best visual experience while keeping the rendering workload reasonable and saving the resources rendering consumes. Because the rendering of the expansion region is produced under prediction of the gaze feature object's subsequent positions, its result is not displayed directly in the scene in the current real-time state; it is temporarily stored as preparatory data for subsequent scene changes, supplying initial rendering data centered on the gaze feature object for later rendering and improving the timeliness of subsequent scene rendering.
Obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data comprises: determining the rear-end scene area occluded by the real-time gaze feature objects according to the real-time viewing-angle range; removing the scene information of that rear-end area; and performing depth rendering of the corresponding real-time gaze feature objects on the real-time gaze feature region to form the real-time gaze feature region rendering data.
The de-duplication rendering of the real-time gaze feature region screens out the rear-end scene once the scene area occluded behind the gaze feature object has been determined, and then performs the depth rendering of the gaze feature object.
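The following sketch illustrates this de-duplication in miniature: for each pixel ray in the gaze feature region, everything behind the nearest hit (the gaze feature object) is discarded before deep rendering. The SceneObject and ray_hits structures are hypothetical stand-ins for the scene data.

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        distance: float   # distance from the viewpoint along the pixel's ray

    def render_gaze_region(gaze_pixels, ray_hits):
        # For each pixel of the real-time gaze feature region, sort the objects
        # its view ray intersects; only the nearest hit (the gaze feature object)
        # is deep-rendered, and the occluded rear-end scene is de-duplicated away.
        rendered = {}
        for px in gaze_pixels:
            hits = sorted(ray_hits.get(px, []), key=lambda o: o.distance)
            if hits:
                rendered[px] = (hits[0].name, "deep")
        return rendered

    # Example: the gaze object occludes a wall at pixel (10, 12); the wall is culled.
    hits = {(10, 12): [SceneObject("wall", 8.0), SceneObject("gaze_object", 2.5)]}
    assert render_gaze_region({(10, 12)}, hits) == {(10, 12): ("gaze_object", "deep")}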
Obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data comprises: performing prediction change analysis on the real-time viewing-angle range to form different predicted viewing-angle ranges; performing occlusion analysis of the gaze feature objects within the real-time gaze feature expansion region based on the predicted changes of the different predicted viewing-angle ranges to form different predicted-change occlusion ranges; performing de-duplication analysis of each predicted-change occlusion range on the real-time gaze feature expansion region to form the corresponding expansion occlusion sub-area and expansion scene expression sub-area under each predicted viewing-angle range; for each predicted viewing-angle range, removing the scene information occluded by the expansion occlusion sub-area within the real-time gaze feature expansion region and performing depth rendering on the expansion occlusion sub-area to form expansion gaze feature sub-area rendering data corresponding to that predicted viewing-angle range; performing depth rendering on the expansion scene expression sub-area corresponding to each predicted viewing-angle range to form expansion scene expression sub-area rendering data; associating the expansion gaze feature sub-area rendering data and the expansion scene expression sub-area rendering data under each predicted viewing-angle range to form predicted gaze feature expansion rendering data; and combining the predicted gaze feature expansion rendering data of all predicted viewing-angle ranges to form the real-time gaze feature expansion region rendering data.
The rendering of the real-time gaze feature expansion region is considered under prediction of the gaze feature object's subsequent position changes; since several subsequent positions are possible, several different occlusion situations can arise within the expansion region. Therefore, during rendering, de-duplication is performed for each occlusion case and the gaze feature object is depth-rendered at the predicted position that causes the occlusion.
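A compact sketch of this pre-rendering over predicted occlusion cases is given below; predicted views are modeled as opaque keys and occlusion as pixel sets, which is an assumed simplification of the analysis described above.

    def prerender_expansion(predicted_views, expansion_pixels, occluded_for):
        # For each predicted viewing-angle range, split the expansion region into
        # the sub-area the gaze feature object would occlude and the sub-area that
        # still expresses the scene, deep-render both, and pre-store the pair
        # keyed by the prediction for quick lookup when the change materializes.
        prestored = {}
        for pv in predicted_views:
            occluded = occluded_for[pv] & expansion_pixels
            expressed = expansion_pixels - occluded
            prestored[pv] = {
                "occlusion_subarea": {px: "gaze_object_deep" for px in occluded},
                "scene_subarea": {px: "scene_deep" for px in expressed},
            }
        return prestored

    # Example: two predicted view shifts with different occlusion footprints.
    data = prerender_expansion(
        ["yaw+5deg", "yaw-5deg"],
        {(3, 4), (5, 6), (7, 8)},
        {"yaw+5deg": {(3, 4)}, "yaw-5deg": {(5, 6)}},
    )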
Obtaining the real-time non-gaze feature region and performing rendering-gradient-based depth rendering on it to form real-time non-gaze feature region rendering data comprises: setting a rendering gradient; performing depth rendering outward according to the rendering gradient, starting from the boundary of the real-time gaze feature expansion region, until the rendering extends to the boundary of the real-time viewing-angle range or meets the outward gradient rendering expanding from the boundary of another real-time gaze feature expansion region; the depth rendering is complete when its expansion covers the whole real-time non-gaze feature region.
Considering that the real-time non-gaze feature region occupies the largest share of the range, and that these scene regions are not the main focus of the viewing angle, they can be rendered at a reasonable depth relative to the gaze feature region. Since attention within the viewing angle is regional, a gradient value of rendering depth is set so that the rendering depth gradually decreases from the boundary adjacent to the real-time gaze feature region toward the boundary far from it, thereby realizing the rendering of the real-time non-gaze feature region. On the one hand this reduces the rendering workload and uses rendering resources reasonably; on the other hand it fully ensures that the region's rendering reasonably complements the picture experience.
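A minimal sketch of the gradient rule follows, assuming each non-gaze pixel's distance to the nearest expansion boundary is already known; the base depth, gradient value, and floor are illustrative parameters.

    def gradient_depth(dist_px: float, base_depth: float = 8.0,
                       gradient: float = 0.05, min_depth: float = 1.0) -> float:
        # Rendering depth starts at base_depth on the gaze feature expansion
        # boundary and decreases by `gradient` per pixel moving outward,
        # never dropping below min_depth.
        return max(base_depth - gradient * dist_px, min_depth)

    def render_non_gaze(non_gaze_pixels, dist_to_boundary):
        # Expansion stops implicitly at the viewing-angle range edge (and where
        # gradients from different boundaries meet), because only pixels inside
        # the non-gaze region are visited.
        return {px: gradient_depth(dist_to_boundary[px]) for px in non_gaze_pixels}

    # Example: a pixel on the boundary and one 40 px further out.
    depths = render_non_gaze([(0, 0), (40, 0)], {(0, 0): 0.0, (40, 0): 40.0})
    # depths == {(0, 0): 8.0, (40, 0): 6.0}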
The invention also provides an animation scene real-time rendering system, which adopts the animation scene real-time rendering method provided above and comprises a scene data storage unit, a viewing-angle tracking analysis unit, a scene rendering partition processing unit, and a real-time rendering processing unit. The scene data storage unit is used for storing the animation scene basic data to be rendered. The viewing-angle tracking analysis unit is used for retrieving the animation scene basic data stored by the scene data storage unit and tracking the head viewing angle in real time to form viewing-angle tracking real-time information under the animation scene. The scene rendering partition processing unit is used for acquiring the viewing-angle tracking real-time information formed by the viewing-angle tracking analysis unit and performing partition processing based on it to form viewing-angle rendering partition data. The real-time rendering processing unit is used for acquiring the viewing-angle rendering partition data formed by the scene rendering partition processing unit and performing real-time rendering processing on the animation scene according to the viewing-angle rendering partition data.
The system effectively stores scene basic data for rendering through a scene data storage unit. The view angle tracking analysis unit performs real-time tracking of view angle positions on the basis of the integrated scene basic data, accurately determines the positions of the head view angles in a real-time state, and further determines scene areas defined by the head view angles in real time based on scene data information. And then, the determined real-time visual angle area is subjected to partition processing of rendering by using a scene rendering partition processing unit, so that different types of rendering areas are formed. And utilizing the real-time rendering processing unit to conduct targeted rendering on the divided different rendering areas. Each unit of the whole rendering system has an efficient and specific data processing function, and the units are combined with each other to form a closely related scene real-time rendering processing system. On one hand, the method can effectively ensure efficient and timely rendering of the scene in the real-time visual angle range, and on the other hand, an important material basis is provided for real-time rendering of the cartoon scene.
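The four-unit structure can be sketched as follows; the class names mirror the units above, but the bodies are placeholder stubs showing only the data flow between units, not the patented processing.

    class SceneDataStorageUnit:
        def __init__(self, scene_data):
            self._scene_data = scene_data     # animation scene basic data
        def retrieve(self):
            return self._scene_data

    class ViewTrackingAnalysisUnit:
        def __init__(self, storage: SceneDataStorageUnit):
            self.storage = storage
        def track(self, samples):
            # Fold head tracking samples into real-time view information (S1).
            return {"scene": self.storage.retrieve(), "samples": samples}

    class SceneRenderPartitionUnit:
        def partition(self, tracking_info):
            # Gaze-feature-based partitioning of the view range (S2).
            return {"gaze": set(), "expansion": set(), "non_gaze": set(),
                    "tracking": tracking_info}

    class RealTimeRenderUnit:
        def render(self, partitions):
            # Targeted per-partition rendering of the animation scene (S3).
            return {"frame": partitions}

    # Wire the four units into the pipeline described above.
    storage = SceneDataStorageUnit({"meshes": [], "lights": []})
    tracking = ViewTrackingAnalysisUnit(storage).track(samples=[])
    frame = RealTimeRenderUnit().render(SceneRenderPartitionUnit().partition(tracking))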
In summary, the animation scene real-time rendering method and system provided by the embodiments of the present invention have the following beneficial effects:
According to the method, the real-time view angle range of the cartoon scene covered by the head view angle in the real-time state is determined by tracking the real-time change condition of the head view angle in the cartoon scene, and then the scene is rendered in the scene area defined by the determined real-time view angle range, so that the workload of rendering can be greatly reduced compared with the whole scene, and the picture experience of the cartoon scene in the view angle range is effectively ensured. Meanwhile, reasonable partition processing is carried out by considering the gazing features of the scene area in the real-time visual angle range, partition data aiming at different feature conditions are formed, and important partition information is provided for subsequent targeted reasonable and efficient partition rendering. On the one hand, the effect of rendering the objects with the gazing features in the range of the real-time visual angles can be highlighted, the effect of improving the picture feeling of the cartoon scene is achieved, on the other hand, the unnecessary consumption of rendering resources caused by unified depth rendering is avoided, the rendering resources are effectively saved under the condition of fully meeting the rendering requirements, the rendering effect is further improved, and the timeliness and the effect of real-time scene display are ensured.
The system effectively stores scene basic data for rendering through a scene data storage unit. The view angle tracking analysis unit performs real-time tracking of view angle positions on the basis of the integrated scene basic data, accurately determines the positions of the head view angles in a real-time state, and further determines scene areas defined by the head view angles in real time based on scene data information. And then, the determined real-time visual angle area is subjected to partition processing of rendering by using a scene rendering partition processing unit, so that different types of rendering areas are formed. And utilizing the real-time rendering processing unit to conduct targeted rendering on the divided different rendering areas. Each unit of the whole rendering system has an efficient and specific data processing function, and the units are combined with each other to form a closely related scene real-time rendering processing system. On one hand, the method can effectively ensure efficient and timely rendering of the scene in the real-time visual angle range, and on the other hand, an important material basis is provided for real-time rendering of the cartoon scene.
In the present invention, "at least one" means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (a, b, or c) of a, b, c, a-b, a-c, b-c, or a-b-c may be represented, wherein a, b, c may be single or plural.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence the part that contributes beyond the prior art, or a part of that solution, may be embodied in the form of a software product. The software product is stored in a storage medium and comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The foregoing examples illustrate only a few embodiments of the invention; they are described in detail, but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the invention, and all such variations and modifications fall within the scope of the invention. Accordingly, the scope of protection of the present invention is determined by the appended claims.

Claims (7)

1. A real-time animation scene rendering method, characterized by comprising the following steps:
acquiring animation scene basic data, and tracking the head view angle in real time to form view angle tracking real-time information;
partitioning the real-time view angle range based on gaze features according to the view angle tracking real-time information, to form view angle rendering partition data;
performing scene rendering on the real-time view angle range according to the view angle rendering partition data, to form real-time scene rendering data;
wherein partitioning the real-time view angle range based on gaze features according to the view angle tracking real-time information to form view angle rendering partition data comprises:
determining a gaze feature object of the scene within the real-time view angle range;
acquiring scene depth information, and determining the scene area covered by the gaze feature object in combination with the position of the head view angle relative to the gaze feature object, to form a real-time gaze feature area;
performing expansion analysis based on rendering efficiency on the real-time gaze feature area to determine a real-time gaze feature expansion area;
determining a real-time non-gaze feature area from the real-time gaze feature area in combination with the real-time view angle range;
combining the real-time gaze feature area, the real-time gaze feature expansion area, and the real-time non-gaze feature area to form the view angle rendering partition data;
wherein performing scene rendering on the real-time view angle range according to the view angle rendering partition data to form real-time scene rendering data comprises:
acquiring the real-time gaze feature area, and performing de-duplication depth rendering processing on the real-time gaze feature area to form real-time gaze feature area rendering data;
acquiring the real-time gaze feature expansion area, and performing de-duplication depth rendering on the real-time gaze feature expansion area to form real-time gaze feature expansion area rendering data;
acquiring the real-time non-gaze feature area, and performing depth rendering based on a rendering gradient on the real-time non-gaze feature area to form real-time non-gaze feature area rendering data;
pre-storing the real-time gaze feature expansion area rendering data, and combining the real-time gaze feature area rendering data and the real-time non-gaze feature area rendering data to form real-time view angle scene rendering data;
wherein acquiring the real-time gaze feature expansion area and performing de-duplication depth rendering on the real-time gaze feature expansion area to form the real-time gaze feature expansion area rendering data comprises:
performing predictive change analysis on the real-time view angle range to form different predicted view angle ranges;
performing occlusion analysis of the gaze feature object within the real-time gaze feature expansion area based on the predicted changes of the different predicted view angle ranges, to form different predicted-change occlusion ranges;
performing de-duplication analysis of each predicted-change occlusion range against the real-time gaze feature expansion area, to form an expansion occlusion sub-region and an expansion scene expression sub-region corresponding to each predicted view angle range;
for each predicted view angle range, removing the scene information occluded by the expansion occlusion sub-region within the real-time gaze feature expansion area, and performing depth rendering of the gaze feature object on the expansion occlusion sub-region, to form the expansion gaze feature sub-region rendering data corresponding to that predicted view angle range;
for each predicted view angle range, performing depth rendering of the expansion scene expression sub-region to form the expansion scene expression sub-region rendering data corresponding to that predicted view angle range;
associating the expansion gaze feature sub-region rendering data and the expansion scene expression sub-region rendering data corresponding to each predicted view angle range to form predicted expansion rendering data; and
combining all the predicted expansion rendering data to form the real-time gaze feature expansion area rendering data.
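By way of illustration only, the de-duplication flow of the final limitations of claim 1 can be sketched as follows; the occlusion test is_occluded_by_gaze_object and all other identifiers are hypothetical placeholders.

```python
def render_expansion_area(expansion_pixels, predicted_view_ranges,
                          is_occluded_by_gaze_object):
    """Sketch of the de-duplication depth rendering of claim 1.

    For each predicted view angle range, the expansion area is split
    into the sub-region occluded by the gaze feature object and the
    sub-region where the background scene remains visible; each is
    recorded once and associated with its predicted range.
    """
    predicted_renders = []
    for view_range in predicted_view_ranges:
        occluded = {p for p in expansion_pixels
                    if is_occluded_by_gaze_object(p, view_range)}
        expressed = set(expansion_pixels) - occluded
        predicted_renders.append({
            "view_range": view_range,
            # Occluded scene info is dropped; only the gaze feature
            # object is depth-rendered in this sub-region.
            "gaze_sub_region": sorted(occluded),
            "scene_sub_region": sorted(expressed),
        })
    # Combining all predicted results yields the expansion rendering data.
    return predicted_renders
```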
2. The real-time animation scene rendering method according to claim 1, wherein acquiring the animation scene basic data and tracking the head view angle in real time to form the view angle tracking real-time information comprises:
acquiring the animation scene basic data, and determining the initial position and the initial view angle of the head view angle in the scene;
collecting the real-time movement distance of the head view angle, and determining the real-time position of the head view angle in the scene in combination with the initial position;
collecting the real-time change value of the view angle, and determining the real-time view angle of the head view angle in the scene in combination with the initial view angle; and
determining the real-time view angle range of the head view angle in the scene by combining the real-time position and the real-time view angle.
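A minimal sketch of the tracking arithmetic of claim 2 follows, assuming a simple yaw/pitch pose model; the parameter fov_deg and the returned range representation are assumptions.

```python
def track_view(initial_pos, initial_angles, movement, angle_change, fov_deg):
    """Claim 2 sketch: accumulate movement and view angle change.

    initial_pos / movement        -- 3-vectors (scene units)
    initial_angles / angle_change -- (yaw, pitch) in degrees
    fov_deg                       -- field of view bounding the view range
    """
    # Real-time position = initial position + real-time movement distance.
    pos = tuple(p + m for p, m in zip(initial_pos, movement))
    # Real-time view angle = initial view angle + real-time change value.
    yaw = initial_angles[0] + angle_change[0]
    pitch = initial_angles[1] + angle_change[1]
    # The real-time view angle range combines position and orientation.
    return {
        "position": pos,
        "yaw_range": (yaw - fov_deg / 2, yaw + fov_deg / 2),
        "pitch_range": (pitch - fov_deg / 2, pitch + fov_deg / 2),
    }

# Example: start at the origin looking straight ahead, then move and turn.
view = track_view((0.0, 1.7, 0.0), (0.0, 0.0),
                  (0.5, 0.0, 1.0), (15.0, -5.0), 90.0)
```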
3. The real-time animation scene rendering method according to claim 2, wherein performing expansion analysis based on rendering efficiency on the real-time gaze feature area to determine the real-time gaze feature expansion area comprises:
extracting boundary information of the real-time gaze feature area to form a real-time gaze feature boundary;
obtaining a view angle change speed limit ω_lim of the head view angle and a scene effective rendering rate R, and determining an expansion width H according to the following formula:
H = α · ω_lim / R
wherein α denotes a width adjustment coefficient; and
expanding the real-time gaze feature boundary outward by the expansion width H to form a real-time gaze feature expansion boundary, and determining the region between the real-time gaze feature boundary and the real-time gaze feature expansion boundary as the real-time gaze feature expansion area.
4. The real-time animation scene rendering method according to claim 3, wherein the view angle change speed limit ω_lim of the head view angle is obtained by:
determining the maximum moving speed v_max of the head view angle;
determining the minimum depth of field L within the real-time view angle range, and determining the maximum view angle change speed ω_max of the head view angle; and
determining the view angle change speed limit ω_lim from the maximum moving speed v_max, the minimum depth of field L, and the maximum view angle change speed ω_max, wherein:
ω_lim = ω_max + v_max / L
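Assuming the formulas above (the expressions for ω_lim and H are reconstructions from the variable definitions in claims 3 and 4, not verified against the patent's original figures), the two quantities can be computed as follows; the parameter names and example numbers are illustrative.

```python
def view_angle_speed_limit(v_max, min_depth, omega_max):
    # Translation at v_max seen at the nearest depth of field contributes
    # an apparent angular rate of v_max / L; rotation adds omega_max.
    return omega_max + v_max / min_depth

def expansion_width(alpha, omega_limit, render_rate):
    # H = alpha * omega_limit / render_rate: the angle the view can sweep
    # per effectively rendered frame, scaled by the adjustment coefficient.
    return alpha * omega_limit / render_rate

# Example: 1.5 m/s movement, 0.5 m nearest depth, 2.0 rad/s rotation,
# 90 effectively rendered frames per second, alpha = 1.2:
w_lim = view_angle_speed_limit(v_max=1.5, min_depth=0.5, omega_max=2.0)  # 5.0 rad/s
H = expansion_width(alpha=1.2, omega_limit=w_lim, render_rate=90.0)      # ~0.067 rad
```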
5. The real-time animation scene rendering method according to claim 4, wherein acquiring the real-time gaze feature area and performing de-duplication depth rendering processing on the real-time gaze feature area to form the real-time gaze feature area rendering data comprises:
determining, according to the real-time view angle range, the scene back-end area occluded by the real-time gaze feature object, and removing the scene information of the scene back-end area; and
performing depth rendering of the real-time gaze feature object corresponding to the real-time gaze feature area to form the real-time gaze feature area rendering data.
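One possible realization of the back-end culling of claim 5 is sketched below; the alignment-cone test, the 0.98 threshold, and the object representation are assumptions rather than the claimed geometry.

```python
import math

def cull_occluded_back_end(scene_objects, gaze_object, view_origin):
    """Claim 5 sketch: drop scene objects hidden behind the gaze feature
    object; what remains is then depth-rendered together with the gaze
    feature object itself.
    """
    def dir_dist(obj):
        # Direction and distance of an object relative to the view origin.
        v = tuple(o - e for o, e in zip(obj["pos"], view_origin))
        d = math.sqrt(sum(c * c for c in v))
        return tuple(c / d for c in v), d

    g_dir, g_dist = dir_dist(gaze_object)
    visible = []
    for obj in scene_objects:
        o_dir, o_dist = dir_dist(obj)
        alignment = sum(a * b for a, b in zip(o_dir, g_dir))
        # Behind the gaze object and within roughly an 11-degree cone
        # of its direction: treated as the occluded back-end area.
        behind = o_dist > g_dist and alignment > 0.98
        if not behind:
            visible.append(obj)
    return visible  # the back-end area's scene info has been removed
```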
6. The real-time animation scene rendering method according to claim 5, wherein acquiring the real-time non-gaze feature area and performing depth rendering based on a rendering gradient on the real-time non-gaze feature area to form the real-time non-gaze feature area rendering data comprises:
setting a rendering gradient, and performing depth rendering outward from the boundary of the real-time gaze feature expansion area according to the rendering gradient, until the rendering reaches the boundary of the real-time view angle range, or stops where it meets the depth rendering extending outward, according to the rendering gradient, from the boundary of another real-time gaze feature expansion area; the depth rendering is complete when its extension has covered the entire real-time non-gaze feature area.
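A sketch of the gradient fall-off of claim 6, assuming per-pixel distances to the nearest expansion-area boundary are available; step, min_level, and max_level are hypothetical parameters.

```python
def gradient_depth_levels(dist_to_expansion_boundary, step, min_level,
                          max_level):
    """Claim 6 sketch: rendering quality falls off with distance from the
    nearest real-time gaze feature expansion boundary.

    dist_to_expansion_boundary -- {pixel_id: distance in degrees}; where
    the fronts spreading from two expansion areas meet, the smaller
    distance governs, so extension stops naturally at the intersection.
    """
    levels = {}
    for pixel_id, dist in dist_to_expansion_boundary.items():
        level = max_level - int(dist / step)      # one level lost per step
        levels[pixel_id] = max(min_level, level)  # clamp at the floor
    return levels

# Example: pixels 0-10 degrees from the boundary, 2 degrees per level:
lods = gradient_depth_levels({1: 0.5, 2: 4.2, 3: 9.8}, step=2.0,
                             min_level=0, max_level=5)
# -> {1: 5, 2: 3, 3: 1}
```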
7. A real-time animation scene rendering system employing the real-time animation scene rendering method according to any one of claims 1 to 6, characterized by comprising:
a scene data storage unit, configured to store the animation scene basic data to be rendered;
a view tracking analysis unit, configured to retrieve the animation scene basic data stored by the scene data storage unit and track the head view angle in real time, forming the view angle tracking real-time information for the animation scene;
a scene rendering partition processing unit, configured to acquire the view angle tracking real-time information formed by the view tracking analysis unit and perform partition processing based on the view angle tracking real-time information, forming the view angle rendering partition data; and
a real-time rendering processing unit, configured to acquire the view angle rendering partition data formed by the scene rendering partition processing unit and perform real-time rendering processing on the animation scene according to the view angle rendering partition data.
CN202410519307.6A 2024-04-28 2024-04-28 A method and system for real-time rendering of animation scenes Active CN118429487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410519307.6A CN118429487B (en) 2024-04-28 2024-04-28 A method and system for real-time rendering of animation scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410519307.6A CN118429487B (en) 2024-04-28 2024-04-28 A method and system for real-time rendering of animation scenes

Publications (2)

Publication Number Publication Date
CN118429487A CN118429487A (en) 2024-08-02
CN118429487B true CN118429487B (en) 2025-03-21

Family

ID=92326034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410519307.6A Active CN118429487B (en) 2024-04-28 2024-04-28 A method and system for real-time rendering of animation scenes

Country Status (1)

Country Link
CN (1) CN118429487B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766011A (en) * 2019-01-16 2019-05-17 北京七鑫易维信息技术有限公司 A kind of image rendering method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11521349B2 (en) * 2017-09-21 2022-12-06 Faro Technologies, Inc. Virtual reality system for viewing point cloud volumes while maintaining a high point cloud graphical resolution
US11488345B2 (en) * 2020-10-22 2022-11-01 Varjo Technologies Oy Display apparatuses and rendering servers incorporating prioritized re-rendering
CN116563445B (en) * 2023-04-14 2024-03-19 深圳崇德动漫股份有限公司 Cartoon scene rendering method and device based on virtual reality
CN117830487A (en) * 2024-01-05 2024-04-05 咪咕文化科技有限公司 Virtual object rendering method, device, equipment and medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766011A (en) * 2019-01-16 2019-05-17 北京七鑫易维信息技术有限公司 A kind of image rendering method and device

Also Published As

Publication number Publication date
CN118429487A (en) 2024-08-02

Similar Documents

Publication Publication Date Title
US11954759B2 (en) Tile-based graphics
US9142044B2 (en) Apparatus, systems and methods for layout of scene graphs using node bounding areas
US9881391B2 (en) Procedurally defined texture maps
US8233006B2 (en) Texture level tracking, feedback, and clamping system for graphics processors
US20100164986A1 (en) Dynamic Collage for Visualizing Large Photograph Collections
CN104657417B (en) The processing method and system of thermodynamic chart
CN111429331B (en) Tile-Based Scheduling
CN112652046A (en) Game picture generation method, device, equipment and storage medium
CN107392990B (en) Global illumination to render 3D scenes
JPH07302336A (en) Method and apparatus for making feature of image ambiguous
CN114564630B (en) A method, system and medium for Web3D visualization of graph data
KR20200096267A (en) Real-time rendering method of giga-pixel image
US6226009B1 (en) Display techniques for three dimensional virtual reality
CN118429487B (en) A method and system for real-time rendering of animation scenes
CN113407087B (en) Picture processing method, computing device and readable storage medium
Banterle et al. Real-Time High Fidelity Inverse Tone Mapping for Low Dynamic Range Content.
CN115330919A (en) Rendering of persistent particle trajectories for dynamic displays
CN115330920A (en) Rendering of permanent particle trajectories for dynamic displays
CN120492679B (en) Optimization method, device and storage medium for large-scale graph data visualization rendering
Anglada et al. Dynamic sampling rate: harnessing frame coherence in graphics applications for energy-efficient GPUs
CN118537450B (en) Animation scene rendering method and system
CN116778064B (en) Virtual model rendering method, device, computing device and computer storage medium
US20250209721A1 (en) Hybrid hash function for access locality
CN114949848A (en) Image rendering method and device, electronic device, storage medium
Ye et al. Real Time Display Method of Complex 3D Model under B/S Architecture

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant