Disclosure of Invention
The invention aims to provide a real-time animation scene rendering method that tracks the real-time changes of the head view angle to determine the real-time view angle range of the animation scene covered by that view angle, and then renders the scene only within the scene area bounded by the determined range. Compared with rendering the whole scene, this greatly reduces the rendering workload while effectively ensuring the picture experience within the view angle range. Meanwhile, the scene area within the real-time view angle range is partitioned according to its gaze features, producing partition data for the different feature conditions and providing the partition information needed for subsequent targeted, efficient partition rendering. On the one hand, this highlights the rendering of objects with gaze features within the real-time view angle range and improves the picture quality of the animation scene; on the other hand, it avoids the unnecessary consumption of rendering resources caused by uniform-depth rendering, saving rendering resources while fully meeting the rendering requirements and ensuring both the timeliness and the quality of real-time scene display.
The invention also aims to provide an animation scene real-time rendering system in which a scene data storage unit stores the basic scene data used for rendering. A view angle tracking analysis unit tracks the view angle position in real time on the basis of the stored scene data, accurately determines the real-time position of the head view angle, and from the scene data determines the scene area bounded by the head view angle in real time. A scene rendering partition processing unit then partitions the determined real-time view angle area into different types of rendering areas, and a real-time rendering processing unit renders each of those areas in a targeted way. Each unit of the rendering system has an efficient, specific data processing function, and together the units form a closely coupled real-time scene rendering system that both ensures efficient and timely rendering of the scene within the real-time view angle range and provides an important material basis for real-time rendering of the animation scene.
In a first aspect, the invention provides an animation scene real-time rendering method comprising: obtaining basic animation scene data and tracking the head view angle in real time to form view angle tracking real-time information; partitioning the real-time view angle range according to the view angle tracking real-time information on the basis of gaze features to form view angle rendering partition data; and rendering the scene within the real-time view angle range according to the view angle rendering partition data to form real-time scene rendering data.
By tracking the real-time changes of the head view angle in the animation scene, the method determines the real-time view angle range covered by that view angle and renders the scene only within the scene area bounded by the determined range, which greatly reduces the rendering workload compared with rendering the whole scene while effectively ensuring the picture experience within the view angle range. Meanwhile, the scene area within the real-time view angle range is partitioned according to its gaze features, producing partition data for the different feature conditions and providing the partition information needed for subsequent targeted, efficient partition rendering. On the one hand, this highlights the rendering of objects with gaze features within the real-time view angle range and improves the picture quality of the animation scene; on the other hand, it avoids the unnecessary consumption of rendering resources caused by uniform-depth rendering, saving rendering resources while fully meeting the rendering requirements and ensuring both the timeliness and the quality of real-time scene display.
In a possible implementation, obtaining the basic animation scene data and tracking the head view angle in real time to form view angle tracking real-time information comprises: obtaining the basic animation scene data and determining the initial position and initial view angle of the head view angle in the scene; collecting the real-time movement distance of the head view angle and, combined with the initial position, determining its real-time position in the scene; collecting the real-time view angle change value and, combined with the initial view angle, determining its real-time view angle in the scene; and determining the real-time view angle range of the head view angle in the scene from the real-time position and the real-time view angle.
In the invention, tracking the head view angle in the animation scene in real time mainly acquires two kinds of data. The first is the position of the head view angle in the scene: because of depth of field, different positions directly affect the scene range bounded by the view angle. The second is the view angle change of the head view angle, which directly changes the scene visible within the view angle range. From this position and view angle change data, combined with the initial position, the initial view angle and the scene information, the real-time view angle range bounded by the head view angle can be determined. This fixes the scene area that must be rendered in the current real-time state and provides an accurate, basic rendering range reference for real-time scene rendering.
In a possible implementation, partitioning the real-time view angle range according to the view angle tracking real-time information on the basis of gaze features to form view angle rendering partition data comprises: determining the gaze feature objects of the scene within the real-time view angle range; obtaining the scene depth information and, combined with the position of the head view angle relative to the gaze feature objects, determining the scene area covered by the gaze feature objects to form a real-time gaze feature region; performing an expansion analysis of the real-time gaze feature region based on rendering efficiency to determine a real-time gaze feature expansion region; determining the real-time non-gaze feature region from the real-time gaze feature region combined with the real-time view angle range; and combining the real-time gaze feature region, the real-time gaze feature expansion region and the real-time non-gaze feature region to form the view angle rendering partition data.
In the invention, partitioning the real-time view angle range on the basis of gaze features takes the actual conditions of real-time processing fully into account and divides the scene within the range into a real-time gaze feature region, a real-time gaze feature expansion region and a real-time non-gaze feature region. The real-time gaze feature region is the scene area covered by the gaze feature object as seen from the head view angle. The real-time non-gaze feature region is the scene area that remains within the real-time view angle range once the real-time gaze feature region is excluded. The real-time gaze feature expansion region is the set of areas that may be occluded in the next step, obtained by predictively analysing both the movement of the gaze feature object itself within the scene and the movement of the head view angle during real-time rendering. This region is essentially a band around the gaze feature, somewhat larger than the current gaze feature area. It is pre-rendered in the real-time rendering state so that, once the change of the gaze feature object's occluded area is accurately determined in a subsequent real-time state, the corresponding post-occlusion rendering result can be supplied directly, ensuring the timeliness of real-time rendering.
In a possible implementation, performing the expansion analysis of the real-time gaze feature region based on rendering efficiency to determine the real-time gaze feature expansion region comprises: extracting the boundary information of the real-time gaze feature region to form a real-time gaze feature boundary; obtaining the view angle change speed limit V_view of the head view angle and the effective scene rendering rate V_rend, and determining the expansion width H according to the formula H = α * V_view / V_rend, wherein α represents a width adjustment coefficient; expanding the real-time gaze feature boundary outward by the expansion width H to form a real-time gaze feature expansion boundary; and determining the region between the real-time gaze feature boundary and the real-time gaze feature expansion boundary as the real-time gaze feature expansion region.
In the present invention, this provides a way of determining the width of the real-time gaze feature expansion region. The expansion region must supply, in time, part of the pre-rendered data for any new scene occlusion the gaze feature object may form from the current state, and the rendering updates carried out at other positions once that pre-rendered data is supplied must still keep pace with what has to be displayed. The width of the expansion region is therefore determined from the rendering rate and the view angle change speed of the head view angle: the occlusion being pre-rendered is, after all, the combined result of the head view angle change and the movement of the gaze feature object. The view angle change speed limit is measured as the number of pixels the maximum permitted angular change sweeps per unit time in the linear direction, and the rendering rate as the number of pixels that can be rendered per unit time in the linear direction. The width adjustment coefficient is set according to the actual rendering requirements, adjusting the amount of pre-rendering and thus fully ensuring the timeliness of the displayed rendering result.
In a possible implementation, the view angle change speed limit V_view of the head view angle is obtained by determining the maximum movement speed V_mov of the head view angle, determining the minimum depth of field L within the real-time view angle range, determining the maximum view angle change speed V_ang of the head view angle, and determining the view angle change speed limit as V_view = V_mov + L * tan(V_ang).
In the present invention, since the effect of moving the head view angle also depends on its position in the scene, the view angle change speed limit should be a speed limit over the real-time view angle range. In other words, considering depth of field, the given maximum rotation speed of the head view angle is scaled by the triangle proportion of the depth of field, which yields the actual speed at which the rendered scene sweeps past under the view angle within the real-time view angle range.
In a possible implementation, rendering the scene within the real-time view angle range according to the view angle rendering partition data to form real-time scene rendering data comprises: obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data; obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data; obtaining the real-time non-gaze feature region and performing depth rendering based on a rendering gradient on it to form real-time non-gaze feature region rendering data; and pre-storing the real-time gaze feature expansion region rendering data while combining the real-time gaze feature region rendering data and the real-time non-gaze feature region rendering data to form the real-time view angle scene rendering data.
In the invention, once the rendering partition data for the real-time view angle range is obtained, rendering the real-time gaze feature region means, on the one hand, depth rendering of the gaze feature object so that it meets the visual picture requirements and, on the other, de-duplicating the occluded part of the scene so as to reduce the rendering workload and improve rendering efficiency. Rendering the real-time gaze feature expansion region is a de-duplicated rendering performed for the predicted positions of the gaze feature object. Rendering the real-time non-gaze feature region uses a gradient of rendering depth so as to obtain the best visual picture while keeping the rendering workload reasonable and saving rendering resources. Because the rendering of the expansion region rests on predictions of the gaze feature object's subsequent positions, its result is not displayed directly in the current real-time state; it is stored temporarily as preparatory data for subsequent scene changes, supplying initial rendering data centred on the gaze feature object and improving the timeliness of subsequent scene rendering.
In a possible implementation, obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data comprises: determining, from the real-time view angle range, the back-end scene area occluded by the real-time gaze feature object; removing the scene information of that back-end area; and performing depth rendering of the corresponding real-time gaze feature object over the real-time gaze feature region to form the real-time gaze feature region rendering data.
In the invention, de-duplication rendering of the real-time gaze feature region means that, once the scene area occluded behind the gaze feature object is determined, the back-end scene is screened out and the depth rendering of the gaze feature object is performed.
In a possible implementation, obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data comprises: performing prediction change analysis on the real-time view angle range to form different predicted view angle ranges; performing occlusion analysis of the gaze feature object within the real-time gaze feature expansion region under the predicted changes of each predicted view angle range to form different predicted-change occlusion ranges; performing de-duplication analysis of each predicted-change occlusion range over the expansion region to form, under each predicted view angle range, a corresponding expansion occlusion sub-region and expansion scene expression sub-region; for each predicted view angle range, removing the scene information occluded by the expansion occlusion sub-region within the expansion region, performing depth rendering of the expansion occlusion sub-region to form expansion feature sub-region rendering data, and performing depth rendering of the expansion scene expression sub-region to form expansion scene expression sub-region rendering data; associating the expansion feature sub-region rendering data and the expansion scene expression sub-region rendering data under each predicted view angle range to form predicted expansion rendering data; and combining the predicted expansion rendering data of all the predicted view angle ranges to form the real-time gaze feature expansion region rendering data.
In the invention, rendering of the real-time gaze feature expansion region is carried out under prediction of the gaze feature object's subsequent position changes. Since several subsequent positions are possible, several different occlusion situations can arise within the expansion region. The rendering therefore de-duplicates each occlusion situation and performs depth rendering of the occluding gaze feature object at each predicted position.
In a possible implementation, performing depth rendering based on a rendering gradient on the real-time non-gaze feature region comprises: setting a rendering gradient; and performing depth rendering outward from the boundary of the real-time gaze feature expansion region according to the rendering gradient, each outward expansion stopping when it reaches the boundary of the real-time view angle range or meets the outward expansion proceeding from the boundary of another real-time gaze feature expansion region, the depth rendering being complete when its expansion covers the whole of the real-time non-gaze feature region.
In the invention, the real-time non-gaze feature region occupies the largest share of the range, and these scene areas are not the main focus of the view angle, so a more moderate rendering depth than the gaze feature region is sufficient. Because attention within the range falls off with distance, a gradient value of rendering depth is set so that the depth decreases gradually from the boundary nearest the real-time gaze feature region toward the boundary farthest from it, thereby rendering the real-time non-gaze feature region. On the one hand this reduces the rendering workload and uses rendering resources reasonably; on the other it still fully ensures that this region's rendering reasonably complements the overall picture.
In a second aspect, the invention provides an animation scene real-time rendering system applying the animation scene real-time rendering method of the first aspect and comprising a scene data storage unit, a view angle tracking analysis unit, a scene rendering partition processing unit and a real-time rendering processing unit. The scene data storage unit stores the basic animation scene data to be rendered. The view angle tracking analysis unit retrieves the basic animation scene data stored in the scene data storage unit and tracks the head view angle in real time to form view angle tracking real-time information for the animation scene. The scene rendering partition processing unit acquires the view angle tracking real-time information formed by the view angle tracking analysis unit and performs partition processing based on it to form view angle rendering partition data. The real-time rendering processing unit acquires the view angle rendering partition data formed by the scene rendering partition processing unit and performs real-time rendering of the animation scene according to it.
In the present invention, the system stores the basic scene data used for rendering in the scene data storage unit. The view angle tracking analysis unit tracks the view angle position in real time on the basis of the stored scene data, accurately determines the real-time position of the head view angle, and from the scene data determines the scene area bounded by the head view angle in real time. The scene rendering partition processing unit then partitions the determined real-time view angle area into different types of rendering areas, and the real-time rendering processing unit renders each of those areas in a targeted way. Each unit of the rendering system has an efficient, specific data processing function, and together the units form a closely coupled real-time scene rendering system that both ensures efficient and timely rendering of the scene within the real-time view angle range and provides an important material basis for real-time rendering of the animation scene.
The animation scene real-time rendering method and system provided by the invention have the following beneficial effects:
By tracking the real-time changes of the head view angle in the animation scene, the method determines the real-time view angle range covered by that view angle and renders the scene only within the scene area bounded by the determined range, which greatly reduces the rendering workload compared with rendering the whole scene while effectively ensuring the picture experience within the view angle range. Meanwhile, the scene area within the real-time view angle range is partitioned according to its gaze features, producing partition data for the different feature conditions and providing the partition information needed for subsequent targeted, efficient partition rendering. On the one hand, this highlights the rendering of objects with gaze features within the real-time view angle range and improves the picture quality of the animation scene; on the other hand, it avoids the unnecessary consumption of rendering resources caused by uniform-depth rendering, saving rendering resources while fully meeting the rendering requirements and ensuring both the timeliness and the quality of real-time scene display.
The system stores the basic scene data used for rendering in the scene data storage unit. The view angle tracking analysis unit tracks the view angle position in real time on the basis of the stored scene data, accurately determines the real-time position of the head view angle, and from the scene data determines the scene area bounded by the head view angle in real time. The scene rendering partition processing unit then partitions the determined real-time view angle area into different types of rendering areas, and the real-time rendering processing unit renders each of those areas in a targeted way. Each unit of the rendering system has an efficient, specific data processing function, and together the units form a closely coupled real-time scene rendering system that both ensures efficient and timely rendering of the scene within the real-time view angle range and provides an important material basis for real-time rendering of the animation scene.
Detailed Description
The technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings.
Animation rendering is one of the key links in animation production: it converts the animation design into concrete images through the combined setting of elements such as models, lighting, materials and shadows. Rendering approaches are likewise varied; weighing cost against visual requirements, rendering may cover the whole scene or target the gaze features.
Currently, animation is mostly rendered in real time over the scene range covered by the view angle. However, this range is essentially rendered at a uniform depth across its whole extent, with no target. On the one hand the rendering workload is large, and rendering the many areas without gaze features greatly wastes the resources rendering requires, raising the cost of scene rendering. On the other hand, the larger workload places higher demands on device performance under real-time rendering and reduces the real-time responsiveness of the displayed picture, which does nothing to improve the scene picture experience.
Referring to fig. 1, an embodiment of the present invention provides an animation scene real-time rendering method that tracks the real-time changes of the head view angle in the animation scene to determine the real-time view angle range covered by that view angle, and then renders the scene only within the scene area bounded by the determined range. Compared with rendering the whole scene this greatly reduces the rendering workload while effectively ensuring the picture experience within the view angle range. Meanwhile, the scene area within the real-time view angle range is partitioned according to its gaze features, producing partition data for the different feature conditions and providing the partition information needed for subsequent targeted, efficient partition rendering. On the one hand, this highlights the rendering of objects with gaze features within the real-time view angle range and improves the picture quality of the animation scene; on the other hand, it avoids the unnecessary consumption of rendering resources caused by uniform-depth rendering, saving rendering resources while fully meeting the rendering requirements and ensuring both the timeliness and the quality of real-time scene display.
The animation scene real-time rendering method specifically comprises the following steps:
S1, acquiring the basic animation scene data and tracking the head view angle in real time to form view angle tracking real-time information.
Specifically, obtaining the basic animation scene data and tracking the head view angle in real time to form view angle tracking real-time information comprises: obtaining the basic animation scene data and determining the initial position and initial view angle of the head view angle in the scene; collecting the real-time movement distance of the head view angle and, combined with the initial position, determining its real-time position in the scene; collecting the real-time view angle change value and, combined with the initial view angle, determining its real-time view angle in the scene; and determining the real-time view angle range of the head view angle in the scene from the real-time position and the real-time view angle.
Tracking the head view angle in the animation scene in real time mainly acquires two kinds of data. The first is the position of the head view angle in the scene: because of depth of field, different positions directly affect the scene range bounded by the view angle. The second is the view angle change of the head view angle, which directly changes the scene visible within the view angle range. From this position and view angle change data, combined with the initial position, the initial view angle and the scene information, the real-time view angle range bounded by the head view angle can be determined. This fixes the scene area that must be rendered in the current real-time state and provides an accurate, basic rendering range reference for real-time scene rendering.
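To make the tracking step concrete, the following minimal 2-D sketch updates the head view angle's position and direction from the collected movement and angle changes and derives the scene span covered at a given depth of field. The function and parameter names (track_view_range, fov, depth) are illustrative assumptions, not terms defined by the invention.

```python
import numpy as np

def track_view_range(init_pos, init_angle, move_delta, angle_delta, fov, depth):
    """Sketch: real-time position/direction of the head view angle and the
    scene span it bounds at depth-of-field `depth` (2-D, angles in radians)."""
    pos = np.asarray(init_pos, float) + np.asarray(move_delta, float)  # real-time position
    angle = init_angle + angle_delta                                   # real-time view angle
    forward = np.array([np.cos(angle), np.sin(angle)])                 # viewing direction
    lateral = np.array([-np.sin(angle), np.cos(angle)])                # perpendicular direction
    half_span = depth * np.tan(fov / 2.0)                              # half-width of covered slice
    center = pos + depth * forward
    return pos, angle, (center - half_span * lateral, center + half_span * lateral)

# Example: the head has moved 0.5 units along x and turned 0.1 rad since the initial state.
pos, ang, span = track_view_range([0.0, 0.0], 0.0, [0.5, 0.0], 0.1, np.deg2rad(90), 10.0)
```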
S2, partitioning the real-time view angle range according to the view angle tracking real-time information on the basis of gaze features to form view angle rendering partition data.
Specifically, partitioning the real-time view angle range according to the view angle tracking real-time information on the basis of gaze features to form view angle rendering partition data comprises: determining the gaze feature objects of the scene within the real-time view angle range; obtaining the scene depth information and, combined with the position of the head view angle relative to the gaze feature objects, determining the scene area covered by the gaze feature objects to form a real-time gaze feature region; performing an expansion analysis of the real-time gaze feature region based on rendering efficiency to determine a real-time gaze feature expansion region; determining the real-time non-gaze feature region from the real-time gaze feature region combined with the real-time view angle range; and combining the real-time gaze feature region, the real-time gaze feature expansion region and the real-time non-gaze feature region to form the view angle rendering partition data.
Partitioning the real-time view angle range on the basis of gaze features takes the actual conditions of real-time processing fully into account and divides the scene within the range into a real-time gaze feature region, a real-time gaze feature expansion region and a real-time non-gaze feature region. The real-time gaze feature region is the scene area covered by the gaze feature object as seen from the head view angle. The real-time non-gaze feature region is the scene area that remains within the real-time view angle range once the real-time gaze feature region is excluded. The real-time gaze feature expansion region is the set of areas that may be occluded in the next step, obtained by predictively analysing both the movement of the gaze feature object itself within the scene and the movement of the head view angle during real-time rendering. This region is essentially a band around the gaze feature, somewhat larger than the current gaze feature area. It is pre-rendered in the real-time rendering state so that, once the change of the gaze feature object's occluded area is accurately determined in a subsequent real-time state, the corresponding post-occlusion rendering result can be supplied directly, ensuring the timeliness of real-time rendering.
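The partition data can be pictured as three disjoint pixel sets over the real-time view angle range. The sketch below is one hypothetical representation (the class and function names are not from the invention); the non-gaze region is simply what remains of the view range after the gaze region and its expansion band are removed.

```python
from dataclasses import dataclass

@dataclass
class ViewRenderPartition:
    """Hypothetical container for the three region types described above."""
    gaze_region: set        # pixels covered by the gaze feature object
    expansion_region: set   # pre-render band around the gaze region
    non_gaze_region: set    # remainder of the real-time view angle range

def partition_view_range(view_pixels, gaze_pixels, expansion_pixels):
    # Non-gaze region = view range minus the gaze region and its expansion band
    non_gaze = view_pixels - gaze_pixels - expansion_pixels
    return ViewRenderPartition(gaze_pixels, expansion_pixels, non_gaze)

# Toy example on an 8x8 pixel grid
view = {(x, y) for x in range(8) for y in range(8)}
gaze = {(3, 3), (3, 4), (4, 3), (4, 4)}
band = {(2, 2), (2, 3), (2, 4), (2, 5), (5, 5)}
parts = partition_view_range(view, gaze, band)
```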
Specifically, performing the expansion analysis of the real-time gaze feature region based on rendering efficiency to determine the real-time gaze feature expansion region comprises: extracting the boundary information of the real-time gaze feature region to form a real-time gaze feature boundary; obtaining the view angle change speed limit V_view of the head view angle and the effective scene rendering rate V_rend, and determining the expansion width H according to the formula H = α * V_view / V_rend, wherein α represents a width adjustment coefficient; expanding the real-time gaze feature boundary outward by the expansion width H to form a real-time gaze feature expansion boundary; and determining the region between the real-time gaze feature boundary and the real-time gaze feature expansion boundary as the real-time gaze feature expansion region.
This provides a way of determining the width of the real-time gaze feature expansion region. The expansion region must supply, in time, part of the pre-rendered data for any new scene occlusion the gaze feature object may form from the current state, and the rendering updates carried out at other positions once that pre-rendered data is supplied must still keep pace with what has to be displayed. The width of the expansion region is therefore determined from the rendering rate and the view angle change speed of the head view angle: the occlusion being pre-rendered is, after all, the combined result of the head view angle change and the movement of the gaze feature object. The view angle change speed limit is measured as the number of pixels the maximum permitted angular change sweeps per unit time in the linear direction, and the rendering rate as the number of pixels that can be rendered per unit time in the linear direction. The width adjustment coefficient is set according to the actual rendering requirements, adjusting the amount of pre-rendering and thus fully ensuring the timeliness of the displayed rendering result.
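Under the reconstruction of the formula above, the width rule admits a one-line sketch: the faster the view can sweep relative to the effective rendering rate, the wider the pre-rendered band must be. This assumes the form H = α * V_view / V_rend as reconstructed, with both rates in pixels per unit time along the linear direction.

```python
def expansion_width(v_view, v_rend, alpha=1.0):
    """Sketch of H = alpha * V_view / V_rend: band width grows with the view's
    sweep speed and shrinks as the effective rendering rate improves."""
    return alpha * v_view / v_rend

# Example: sweep limit of 600 px/s against a rendering rate of 2000 px/s
h = expansion_width(600.0, 2000.0, alpha=40.0)  # alpha tuned to rendering requirements
```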
Specifically, the view angle change speed limit V_view of the head view angle is obtained by determining the maximum movement speed V_mov of the head view angle, determining the minimum depth of field L within the real-time view angle range, determining the maximum view angle change speed V_ang of the head view angle, and determining the view angle change speed limit as V_view = V_mov + L * tan(V_ang).
Since the effect of moving the head view angle also depends on its position in the scene, the view angle change speed limit should be a speed limit over the real-time view angle range. In other words, considering depth of field, the given maximum rotation speed of the head view angle is scaled by the triangle proportion of the depth of field, which yields the actual speed at which the rendered scene sweeps past under the view angle within the real-time view angle range.
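The stated formula V_view = V_mov + L * tan(V_ang) translates directly into code: the translation speed plus the linear sweep that the maximum angular speed produces at the minimum depth of field. The function name is an illustrative assumption.

```python
import math

def view_speed_limit(v_mov, depth_min, v_ang):
    """V_view = V_mov + L * tan(V_ang): movement speed plus the linear sweep
    at the minimum depth of field L (v_ang in radians per unit time)."""
    return v_mov + depth_min * math.tan(v_ang)

# Example: 1.5 units/s of movement, 10-unit minimum depth of field, 0.2 rad/s rotation
v_view = view_speed_limit(1.5, 10.0, 0.2)
```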
S3, rendering the scene within the real-time view angle range according to the view angle rendering partition data to form real-time scene rendering data.
Specifically, rendering the scene within the real-time view angle range according to the view angle rendering partition data to form real-time scene rendering data comprises: obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data; obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data; obtaining the real-time non-gaze feature region and performing depth rendering based on a rendering gradient on it to form real-time non-gaze feature region rendering data; and pre-storing the real-time gaze feature expansion region rendering data while combining the real-time gaze feature region rendering data and the real-time non-gaze feature region rendering data to form the real-time view angle scene rendering data.
Once the rendering partition data for the real-time view angle range is obtained, rendering the real-time gaze feature region means, on the one hand, depth rendering of the gaze feature object so that it meets the visual picture requirements and, on the other, de-duplicating the occluded part of the scene so as to reduce the rendering workload and improve rendering efficiency. Rendering the real-time gaze feature expansion region is a de-duplicated rendering performed for the predicted positions of the gaze feature object. Rendering the real-time non-gaze feature region uses a gradient of rendering depth so as to obtain the best visual picture while keeping the rendering workload reasonable and saving rendering resources. Because the rendering of the expansion region rests on predictions of the gaze feature object's subsequent positions, its result is not displayed directly in the current real-time state; it is stored temporarily as preparatory data for subsequent scene changes, supplying initial rendering data centred on the gaze feature object and improving the timeliness of subsequent scene rendering.
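The combination step at the end of S3 can be sketched as follows: expansion-region data is pre-stored for later occlusion changes, while the gaze and non-gaze data are merged into the displayed frame. The prestore cache and the dict-based frame are assumptions made for illustration only.

```python
def assemble_frame(gaze_data, expansion_data, non_gaze_data, prestore):
    """Sketch: pre-store the expansion-region rendering data and merge the
    other two partitions into the real-time view angle scene rendering data."""
    prestore["expansion"] = expansion_data   # kept back, not displayed yet
    return {**non_gaze_data, **gaze_data}    # gaze data wins where regions touch
```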
Specifically, obtaining the real-time gaze feature region and performing de-duplication depth rendering on it to form real-time gaze feature region rendering data comprises: determining, from the real-time view angle range, the back-end scene area occluded by the real-time gaze feature object; removing the scene information of that back-end area; and performing depth rendering of the corresponding real-time gaze feature object over the real-time gaze feature region to form the real-time gaze feature region rendering data.
De-duplication rendering of the real-time gaze feature region means that, once the scene area occluded behind the gaze feature object is determined, the back-end scene is screened out and the depth rendering of the gaze feature object is performed.
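A minimal sketch of this de-duplication, over pixel sets: for each pixel in the gaze feature region, only the nearest scene sample survives and is rendered at full depth; the occluded back-end samples are dropped beforehand. The buffer layout and names are assumptions.

```python
def dedup_depth_render(region_pixels, scene_depth_buffer, depth_quality=8):
    """Sketch of de-duplication depth rendering. `scene_depth_buffer` maps
    pixel -> list of (depth, sample) pairs, nearest meaning smallest depth."""
    rendered = {}
    for px in region_pixels:
        samples = scene_depth_buffer.get(px, [])
        if samples:
            _, front = min(samples, key=lambda s: s[0])  # only the nearest sample survives
            rendered[px] = (front, depth_quality)        # full-depth render of the survivor
    return rendered
```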
Specifically, obtaining the real-time gaze feature expansion region and performing de-duplication depth rendering on it to form real-time gaze feature expansion region rendering data comprises: performing prediction change analysis on the real-time view angle range to form different predicted view angle ranges; performing occlusion analysis of the gaze feature object within the real-time gaze feature expansion region under the predicted changes of each predicted view angle range to form different predicted-change occlusion ranges; performing de-duplication analysis of each predicted-change occlusion range over the expansion region to form, under each predicted view angle range, a corresponding expansion occlusion sub-region and expansion scene expression sub-region; for each predicted view angle range, removing the scene information occluded by the expansion occlusion sub-region within the expansion region, performing depth rendering of the expansion occlusion sub-region to form expansion feature sub-region rendering data corresponding to that predicted view angle range, and performing depth rendering of the expansion scene expression sub-region to form expansion scene expression sub-region rendering data corresponding to that predicted view angle range; associating the expansion feature sub-region rendering data and the expansion scene expression sub-region rendering data under each predicted view angle range to form predicted expansion rendering data; and combining the predicted expansion rendering data of all the predicted view angle ranges to form the real-time gaze feature expansion region rendering data.
Rendering of the real-time gaze feature expansion region is carried out under prediction of the gaze feature object's subsequent position changes. Since several subsequent positions are possible, several different occlusion situations can arise within the expansion region. The rendering therefore de-duplicates each occlusion situation and performs depth rendering of the occluding gaze feature object at each predicted position.
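Reusing the dedup_depth_render sketch above, the per-prediction split of the expansion region might look like the following; the predicted_occlusions mapping (each prediction id to the pixels the gaze object would then cover) is an assumed input, not something the invention specifies.

```python
def prerender_expansion_region(expansion_pixels, predicted_occlusions, scene_depth_buffer):
    """Sketch: for each predicted view angle range, split the expansion band
    into an occlusion sub-region and a scene-expression sub-region,
    de-duplicate and depth-render both, and keep the results keyed by prediction."""
    prerendered = {}
    for pred_id, occluded in predicted_occlusions.items():
        feature_part = expansion_pixels & occluded   # expansion occlusion sub-region
        scene_part = expansion_pixels - occluded     # expansion scene expression sub-region
        prerendered[pred_id] = {
            "feature": dedup_depth_render(feature_part, scene_depth_buffer),
            "scene": dedup_depth_render(scene_part, scene_depth_buffer),
        }
    return prerendered  # pre-stored; fetched once a prediction materialises
```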
Specifically, obtaining the real-time non-gaze feature region and performing depth rendering based on a rendering gradient on it to form real-time non-gaze feature region rendering data comprises: setting a rendering gradient; and performing depth rendering outward from the boundary of the real-time gaze feature expansion region according to the rendering gradient, each outward expansion stopping when it reaches the boundary of the real-time view angle range or meets the outward expansion proceeding from the boundary of another real-time gaze feature expansion region, the depth rendering being complete when its expansion covers the whole of the real-time non-gaze feature region.
The real-time non-gaze feature region occupies the largest share of the range, and these scene areas are not the main focus of the view angle, so a more moderate rendering depth than the gaze feature region is sufficient. Because attention within the range falls off with distance, a gradient value of rendering depth is set so that the depth decreases gradually from the boundary nearest the real-time gaze feature region toward the boundary farthest from it, thereby rendering the real-time non-gaze feature region. On the one hand this reduces the rendering workload and uses rendering resources reasonably; on the other it still fully ensures that this region's rendering reasonably complements the overall picture.
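One way to realise the gradient, again over pixel sets, is to let the depth quality decay with distance from the expansion-region boundary, floored at a minimum; the Manhattan distance and the specific quality scale are illustrative choices, not requirements of the invention.

```python
def gradient_depth_render(non_gaze_pixels, expansion_boundary, max_depth=8, step=1):
    """Sketch of rendering-gradient depth rendering: quality starts at
    max_depth on the expansion boundary and drops by `step` per pixel of
    Manhattan distance, never below 1."""
    def dist(px):
        # Distance to the nearest expansion-boundary pixel (boundary must be non-empty)
        return min(abs(px[0] - b[0]) + abs(px[1] - b[1]) for b in expansion_boundary)

    return {px: max(1, max_depth - step * dist(px)) for px in non_gaze_pixels}
```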
The invention also provides an animation scene real-time rendering system that adopts the animation scene real-time rendering method provided above and comprises a scene data storage unit, a view angle tracking analysis unit, a scene rendering partition processing unit and a real-time rendering processing unit. The scene data storage unit stores the basic animation scene data to be rendered. The view angle tracking analysis unit retrieves the basic animation scene data stored in the scene data storage unit and tracks the head view angle in real time to form view angle tracking real-time information for the animation scene. The scene rendering partition processing unit acquires the view angle tracking real-time information formed by the view angle tracking analysis unit and performs partition processing based on it to form view angle rendering partition data. The real-time rendering processing unit acquires the view angle rendering partition data formed by the scene rendering partition processing unit and performs real-time rendering of the animation scene according to it.
The system stores the basic scene data used for rendering in the scene data storage unit. The view angle tracking analysis unit tracks the view angle position in real time on the basis of the stored scene data, accurately determines the real-time position of the head view angle, and from the scene data determines the scene area bounded by the head view angle in real time. The scene rendering partition processing unit then partitions the determined real-time view angle area into different types of rendering areas, and the real-time rendering processing unit renders each of those areas in a targeted way. Each unit of the rendering system has an efficient, specific data processing function, and together the units form a closely coupled real-time scene rendering system that both ensures efficient and timely rendering of the scene within the real-time view angle range and provides an important material basis for real-time rendering of the animation scene.
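As a sketch of how the four units could be wired together per frame (the class and method names are assumptions, not the invention's interfaces):

```python
class AnimationSceneRenderer:
    """Hypothetical skeleton of the four-unit rendering system."""
    def __init__(self, scene_store, tracker, partitioner, renderer):
        self.scene_store = scene_store  # scene data storage unit
        self.tracker = tracker          # view angle tracking analysis unit
        self.partitioner = partitioner  # scene rendering partition processing unit
        self.renderer = renderer        # real-time rendering processing unit

    def frame(self):
        scene = self.scene_store.load()                     # basic animation scene data
        view_info = self.tracker.track(scene)               # view angle tracking real-time information
        partitions = self.partitioner.partition(view_info)  # view angle rendering partition data
        return self.renderer.render(scene, partitions)      # real-time scene rendering data
```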
In summary, the animation scene real-time rendering method and system provided by the embodiments of the invention have the following beneficial effects:
By tracking the real-time changes of the head view angle in the animation scene, the method determines the real-time view angle range covered by that view angle and renders the scene only within the scene area bounded by the determined range, which greatly reduces the rendering workload compared with rendering the whole scene while effectively ensuring the picture experience within the view angle range. Meanwhile, the scene area within the real-time view angle range is partitioned according to its gaze features, producing partition data for the different feature conditions and providing the partition information needed for subsequent targeted, efficient partition rendering. On the one hand, this highlights the rendering of objects with gaze features within the real-time view angle range and improves the picture quality of the animation scene; on the other hand, it avoids the unnecessary consumption of rendering resources caused by uniform-depth rendering, saving rendering resources while fully meeting the rendering requirements and ensuring both the timeliness and the quality of real-time scene display.
The system stores the basic scene data used for rendering in the scene data storage unit. The view angle tracking analysis unit tracks the view angle position in real time on the basis of the stored scene data, accurately determines the real-time position of the head view angle, and from the scene data determines the scene area bounded by the head view angle in real time. The scene rendering partition processing unit then partitions the determined real-time view angle area into different types of rendering areas, and the real-time rendering processing unit renders each of those areas in a targeted way. Each unit of the rendering system has an efficient, specific data processing function, and together the units form a closely coupled real-time scene rendering system that both ensures efficient and timely rendering of the scene within the real-time view angle range and provides an important material basis for real-time rendering of the animation scene.
In the present invention, "at least one" means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (a, b, or c) of a, b, c, a-b, a-c, b-c, or a-b-c may be represented, wherein a, b, c may be single or plural.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part of it that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this description.
The foregoing examples illustrate only a few embodiments of the invention; they are described in some detail but are not therefore to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, and all of these fall within the scope of protection of the invention. Accordingly, the scope of protection of the invention is to be determined by the appended claims.