CN113034653B - Animation rendering method and device - Google Patents
Animation rendering method and device
- Publication number
- CN113034653B (application CN201911349293.3A)
- Authority
- CN
- China
- Prior art keywords
- frame image
- decoding
- animation
- layer
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The application relates to the technical field of image processing and discloses an animation rendering method and device for shortening the total rendering time of an animation and improving playback smoothness. The method comprises the following steps: acquiring a current frame image of an animation, where the current frame image was decoded and drawn while the previous frame image was displayed on a display interface; and displaying the current frame image on the display interface while decoding and drawing the next frame image.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an animation rendering method and apparatus.
Background
An animation is produced by a series of images: continuously displaying a sequence of image frames that depict the expressions, actions and changes of people and objects creates continuous visual change, forming the animation. With the development of computer technology, animation processing techniques have also steadily improved. In the prior art, animation pictures are rendered frame by frame: the next frame is rendered only after every rendering step of the previous frame has completed, so rendering the whole animation takes a long time.
Disclosure of Invention
The embodiment of the application provides an animation rendering method and device for shortening the total rendering time of an animation and improving playback smoothness.
According to a first aspect of an embodiment of the present application, there is provided an animation rendering method, including:
acquiring a current frame image of an animation, wherein the current frame image is decoded and drawn in the process of displaying a previous frame image on a display interface;
and displaying the current frame image on a display interface, and decoding and drawing a next frame image in the display process of the current frame image.
According to a second aspect of embodiments of the present application, there is provided an animation rendering device, the device including:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a current frame image of an animation, and the current frame image is decoded and drawn in the process of displaying a previous frame image on the display interface;
the display unit is used for displaying the current frame image on a display interface;
a decoding unit, configured to decode a next frame image in a display process of the current frame image;
and the drawing unit is used for drawing the next frame image in the display process of the current frame image.
In an alternative embodiment, the acquiring unit is further configured to determine that the next frame image includes a plurality of layers;
the decoding unit is specifically configured to decode the multiple layers in parallel by using multiple decoding threads, where one decoding thread is used to perform a decoding task of one layer;
the drawing unit is specifically configured to asynchronously draw the decoded multiple layers by using multiple drawing threads, where one drawing thread is used to execute a drawing task of one layer.
In an alternative embodiment, the decoding unit is specifically configured to:
starting a plurality of decoding threads with corresponding quantity for all layers;
the plurality of decoding threads execute decoding tasks in parallel;
after any decoding thread finishes the decoding task, the decoded image layer is put in a corresponding position in a message queue;
the drawing unit is specifically configured to:
starting a plurality of drawing threads with corresponding numbers for all layers;
and the plurality of drawing threads acquire the decoded image layer from the message queue according to a fixed time sequence and draw the image layer.
In an alternative embodiment, the decoding unit and the rendering unit are further configured to:
and acquiring rendering environments from the same material resources.
In an alternative embodiment, the decoding unit is further configured to determine that the next frame image is located within a static interval; acquiring a reference frame layer corresponding to the static interval;
and the drawing unit is also used for drawing the reference frame layer as the layer of the next frame image.
In an alternative embodiment, the acquiring unit is further configured to acquire a first frame image of the animation;
the decoding unit is further used for decoding the first frame image;
the drawing unit is further used for drawing the first frame image;
the display unit is further used for displaying the first frame image on a display interface;
the decoding unit is used for decoding a second frame image in the processes of decoding, drawing and displaying the first frame image; the drawing unit is used for drawing the second frame image in the process of decoding, drawing and displaying the first frame image.
According to a third aspect of the embodiments of the present application, there is provided a computing device comprising at least one processor, and at least one memory, wherein the memory stores a computer program, which when executed by the processor, causes the processor to perform the steps of the animation rendering method provided by the embodiments of the present application.
According to a fourth aspect of embodiments of the present application, there is provided a storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the steps of the animation rendering method provided by the embodiments of the present application.
In the embodiment of the application, during animation rendering, the current frame image is decoded and drawn while the previous frame image is displayed on the display interface. When the current frame is refreshed, the current frame image of the animation is obtained and, since it has already been decoded and drawn, it is displayed directly on the display interface; while the current frame image is displayed, the next frame image is decoded and drawn. Thus, when the next frame is refreshed, the decoded and drawn next frame image can be displayed directly on the display interface. Because the next frame image is decoded and drawn in advance while the current frame image is displayed, the total rendering time of the animation is reduced and playback is smoother.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present application.
FIG. 1 is a system architecture diagram of an animation rendering system in an embodiment of the present application;
FIG. 2 is a flow chart of an animation rendering method in the prior art;
FIG. 3 is a flow chart of an animation rendering method in an embodiment of the present application;
FIG. 4 is a timing diagram of an animation rendering process according to an embodiment of the present application;
FIG. 5a is a schematic diagram of a prior art flow for synchronous decoding and rendering of an animation;
FIG. 5b is a schematic flow chart of asynchronous decoding and drawing of an animation in an embodiment of the present application;
FIG. 6 is a schematic diagram of an image including three layers according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating the operation of multiple threads in an embodiment of the present application.
FIG. 8 is a diagram of an OpenGL Context dependency framework in an embodiment of the present application;
FIG. 9 is a schematic diagram showing the comparison of the effects of the prior art animation synchronous rendering and the animation asynchronous rendering in the embodiment of the present application;
fig. 10 is a block diagram showing a configuration of an animation rendering device according to an embodiment of the present application;
fig. 11 is a block diagram illustrating a structure of a terminal according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the technical solutions of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the disclosure herein, are within the scope of the disclosure herein.
The terms "first" and "second" in the description, claims and drawings of the invention are used to distinguish different objects, not to describe a particular order. Furthermore, the term "comprise" and any variants thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements that are not listed or that are inherent to such a process, method, article or apparatus.
Some of the concepts involved in the embodiments of the present application are described below.
Frame: a single image picture, the smallest unit in an animation. One frame is a still image, and successive frames form an animation, as on television. The frame count, in short, is the number of frames of images transmitted in one second; it can also be understood as the number of times the graphics processor can refresh per second, typically expressed in FPS (Frames Per Second). Displaying frames in rapid succession creates the illusion of motion: a higher frame rate gives a smoother, more realistic animation, and the larger the FPS, the smoother the displayed motion.
Animation: a moving picture realized by a series of images; continuously displaying a series of images that depict the expressions, actions and changes of people and things produces continuous visual change, forming the animation. There are many types of animation, including but not limited to planar animation (also known as two-dimensional animation), three-dimensional animation, virtual-reality animation and digital animation. There are also many animation forms, including but not limited to shape-tween animation, position-tween animation, guide-line animation and frame-by-frame animation (Frame By Frame).
Rendering: Render in English; the last step of the CG (computer graphics) pipeline. The process of turning abstract picture information data into a displayed image through computer graphics processing is called rendering; simply put, it is the process of presenting an abstract model (an abstract but visualizable object, such as a table or a chair, or a data model, such as a tree diagram or a pie chart) in a format that the output device (a display) can recognize.
Thread: the smallest unit that the operating system can schedule for execution. A thread is contained in a process and is the actual unit of operation within it: a single sequential flow of control in the process. Multiple threads can run concurrently in one process, each executing a different task in parallel.
Asynchronous processing: in contrast to synchronous processing, asynchronous processing does not block the current thread to wait for the processing to complete; the current thread proceeds with subsequent operations and is notified by a callback once another thread finishes the processing. This lets the CPU set aside its response to the current request, handle the next request, and resume the original work after being notified by polling or some other means. With multithreading, the asynchronous operation is run on another thread and its completion is learned through polling or a callback method; with a completion port, the operating system takes over the scheduling of the asynchronous operation and a hardware interrupt triggers the callback on completion, so no extra thread is occupied. (An illustrative code sketch of this pattern follows these concept definitions.)
Layer: each layer is composed of many pixels, and layers stacked one above another form the whole image. In popular terms, layers are like sheets of film containing text or graphics: stacked together in order, sheet by sheet, they form the final effect of the image. Layers allow elements on the image to be positioned precisely. Text, pictures, tables and plug-ins can be added to a layer, and layers can be nested.
OpenGL (Open Graphics Library): a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The interface consists of nearly 350 different function calls, used to draw everything from simple primitives to complex three-dimensional scenes. OpenGL is commonly used in CAD, virtual reality, scientific visualization programs and video game development.
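As a concrete illustration of the asynchronous processing concept above — not part of the original patent text, and using the C++ standard library rather than any mechanism named in this application — the following minimal sketch launches a stand-in decoding task on a worker thread, lets the calling thread continue, and collects the result afterwards. The decodeFrame function is a hypothetical placeholder.

```cpp
// Minimal sketch of asynchronous processing, assuming a hypothetical
// decodeFrame task; std::async runs it on another thread so the caller
// is not blocked while the work is in progress.
#include <future>
#include <iostream>

int decodeFrame(int frameIndex) {   // hypothetical stand-in for a slow decode
    return frameIndex * 2;          // pretend result
}

int main() {
    // Launch on a worker thread; the current thread keeps running.
    std::future<int> pending = std::async(std::launch::async, decodeFrame, 42);

    // ... the current thread can service the next request here ...

    // Collect the result once the worker has finished (blocks only if needed).
    std::cout << "decoded: " << pending.get() << '\n';
}
```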
Referring to fig. 1, an architecture diagram of an animation rendering system according to an embodiment of the present application is shown, including a server 101 and a terminal 102.
The terminal 102 may be an electronic device with wireless communication capability, such as a mobile phone, a tablet computer or a dedicated handheld device, or a device that accesses the Internet by wire, such as a personal computer (PC), a notebook computer or a server.
The server 101 may be a network device such as a computer. The server 101 may be a stand-alone device or may be a server cluster formed by a plurality of servers. Preferably, the server 101 may employ cloud computing technology for information processing.
The network in the system may be the Internet, or a mobile communication network such as a Global System for Mobile Communications (GSM) network or a Long Term Evolution (LTE) network.
The animation rendering method in the embodiment of the present application is performed by the terminal 102. The animation may be stored in the storage space of the terminal 102; when it needs to be rendered, the terminal acquires it from its own storage space. The animation may also be stored in the server 101, having been uploaded there by other terminals; when it needs to be processed, the terminal 102 can download and render it from the server 101 in real time. Alternatively, the terminal 102 may perform the animation rendering method of the embodiment of the present application through a client or browser installed on it.
It should be noted that the above-mentioned application scenario is only shown for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Generally, a 2D rendering engine renders an animation through the steps of refresh (Flush), decode (Decode), draw (OnDraw) and display (SwapBuffer). Fig. 2 shows a prior-art animation rendering process. As shown in fig. 2, after each frame is refreshed, the decoding, drawing and display steps are executed in sequence; only after all steps of the previous frame have finished is the next frame refreshed and its rendering steps executed, until all frame images of the animation have been rendered.
In order to reduce the rendering time of the animation, the animation rendering method provided in the embodiment of the present application is described below with reference to the application scenario shown in fig. 1.
Referring to fig. 3, an embodiment of the present application provides an animation rendering method, as shown in fig. 3, including:
step S301: and acquiring a current frame image of the animation, wherein the current frame image is decoded and drawn in the process of displaying the previous frame image on a display interface.
For each frame of the animation, the frame image needs to be decoded, drawn and displayed after the frame is refreshed. In the prior art, the current frame image is decoded and drawn only after the current frame is refreshed; when drawing finishes, the image is held temporarily in a buffer and then swapped onto the device screen for display. In the implementation of the present method, the current frame image is instead pre-processed before the current frame is refreshed: while the previous frame image is being swapped in and displayed, the current frame image is decoded and drawn and the drawn image is stored in memory, so that after the current frame is refreshed, the drawn image in memory is swapped directly onto the screen for display.
Step S302: and displaying the current frame image on a display interface, and decoding and drawing the next frame image in the display process of the current frame image.
In the embodiment of the application, during animation rendering, the current frame image is decoded and drawn while the previous frame image is displayed on the display interface. When the current frame is refreshed, the current frame image of the animation is obtained and, since it has already been decoded and drawn, it is displayed directly on the display interface; while the current frame image is displayed, the next frame image is decoded and drawn. Thus, when the next frame is refreshed, the decoded and drawn next frame image can be displayed directly on the display interface. Because the next frame image is decoded and drawn in advance while the current frame image is displayed, the total rendering time of the animation is reduced and playback is smoother.
Specifically, for the first frame image in the animation, since there is no previous frame image, the rendering process for the first frame image includes:
acquiring a first frame image of the animation;
decoding and drawing the first frame image;
displaying the first frame image on a display interface;
wherein the second frame image is decoded and rendered during the decoding, rendering and display of the first frame image.
In the implementation process, after the first frame of the animation is refreshed, the first frame image is acquired, decoded and drawn, and then displayed on the display interface. Meanwhile, during the decoding, drawing and display of the first frame image, the second frame image of the animation is decoded and drawn. Therefore, after the second frame is refreshed, the drawn second frame image can be obtained directly from memory and displayed on the screen, shortening the rendering time of the animation.
FIG. 4 shows a timing diagram of the animation rendering process in one possible embodiment. As shown in fig. 4, the first flush is the refresh time of the first frame: after the first frame is refreshed, the first frame image is decoded, drawn and displayed, and during this process the second frame image is acquired, decoded and drawn, and the drawn second frame image is stored in memory. The second flush is the refresh time of the second frame: after the second frame is refreshed, the drawn second frame image is obtained directly from memory and displayed on the screen, and while the second frame image is displayed, the third frame image is acquired, decoded and drawn and stored in memory. The third flush is the refresh time of the third frame: after the third frame is refreshed, the drawn third frame image is obtained directly from memory and displayed on the screen, and while the third frame image is displayed, the fourth frame image is acquired, decoded and drawn and stored in memory. Each subsequent frame of the animation is processed in a similar manner, and details are not repeated here.
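The pipelining of fig. 4 can be sketched as follows — a minimal illustration under assumed types, not the patent's actual implementation; Image, decodeAndDraw and swapToScreen are hypothetical stubs. While the current frame is on screen, a worker thread prepares the next frame, so each refresh only has to display an already-drawn image.

```cpp
// Minimal sketch of the pipelined frame loop in FIG. 4, under assumed types:
// frame N+1 is decoded and drawn while frame N is displayed.
#include <future>

struct Image {};                                        // hypothetical frame image
Image decodeAndDraw(int frame) { return Image{}; }      // stub: decode + draw
void  swapToScreen(const Image&) {}                     // stub: display (SwapBuffer)

void playAnimation(int frameCount) {
    // The first frame has no predecessor, so it is prepared directly.
    std::future<Image> prepared =
        std::async(std::launch::async, decodeAndDraw, 0);

    for (int frame = 0; frame < frameCount; ++frame) {
        Image current = prepared.get();                 // usually already finished

        // Pre-process the next frame while the current one is on screen.
        if (frame + 1 < frameCount)
            prepared = std::async(std::launch::async, decodeAndDraw, frame + 1);

        swapToScreen(current);                          // refresh: display directly
    }
}

int main() { playAnimation(3); }
```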
In an alternative embodiment, to further shorten the rendering time of the animation, the decoding and drawing the next frame of image includes:
determining that the next frame of image comprises a plurality of image layers;
decoding a plurality of layers in parallel by using a plurality of decoding threads, wherein one decoding thread is used for executing the decoding task of one layer;
and asynchronously drawing the decoded multiple layers by utilizing multiple drawing threads, wherein one drawing thread is used for executing drawing tasks of one layer.
In the implementation process, each layer is decoded and drawn by an independent thread. The decoding threads execute in parallel with the main thread, and the decoding threads of different layers of the same image can also execute in parallel, which saves processing time on the main thread, reduces playback delay, and keeps the animation playback interface fluent. After a decoding thread finishes the decoding task for its layer, a drawing thread acquires the decoded layer and draws it asynchronously. Drawing multiple layers asynchronously further saves rendering time on the main thread and improves playback smoothness.
Fig. 5a shows a schematic flow of prior-art synchronous decoding and drawing of an animation, and fig. 5b shows the asynchronous decoding and drawing of an embodiment of the present application, both for an image comprising three layers. As shown in fig. 5a, in the synchronous case the decoding and drawing of every layer of a frame are executed in sequence after the frame is refreshed, and the image is then displayed on the screen, so decoding and drawing occupy a long time. As shown in fig. 5b, in the asynchronous case the decoding and drawing tasks are executed asynchronously on a multithreading framework, so the time that decoding and drawing occupy the main thread is shortened, which shortens the rendering time.
Because the drawing of different layers in the same image needs to be performed in a certain order, in an alternative embodiment, the message queue is used to ensure the time sequence of drawing the layers.
Specifically, the parallel decoding of the multiple layers by using multiple threads includes:
starting a plurality of decoding threads with corresponding quantity for all layers;
executing decoding tasks by a plurality of decoding threads in parallel;
after any decoding thread finishes the decoding task, the decoded image layer is put in a corresponding position in a message queue;
asynchronously rendering the decoded plurality of layers using a plurality of threads, comprising:
starting a plurality of drawing threads with corresponding numbers for all layers;
and the plurality of drawing threads acquire the decoded image layer from the message queue according to the fixed time sequence and draw the image layer.
In a specific implementation, an image generally comprises multiple layers, and a decoding thread and a drawing thread are created for each layer. For a given layer, the decoding thread first executes the decoding task and places the decoded layer at its corresponding position in the message queue; the drawing thread then fetches the layer from that position and executes the drawing task. The different layers of the same image must also be drawn in a certain order, typically from bottom to top. Fig. 6 shows a schematic diagram of an image containing three layers. As shown in fig. 6, the three layers are drawn from bottom to top: layer a first, then layer b, and finally layer c. Therefore, regardless of the order in which the decoding threads place decoded layers into the queue, the drawing threads need to obtain the decoded layers from the message queue in a fixed sequence.
The rendering process of each layer will be described in detail taking the image of fig. 6 as an example. If the current frame image is shown in fig. 6 and includes three layers, in the previous frame image display process, the current frame image is acquired, and after the current frame image is determined to include three layers, three decoding threads and three drawing threads are started, wherein the three decoding threads are respectively used for executing the decoding tasks of the three layers, and the three drawing threads are respectively used for executing the drawing tasks of the three layers.
FIG. 7 illustrates the workflow of multiple threads in an alternative embodiment. As shown in fig. 7, threads 1, 2 and 3 are decoding threads and execute their decoding tasks in parallel: thread 1 decodes layer a, thread 2 decodes layer b, and thread 3 decodes layer c. Each thread places its decoded layer at the corresponding position in the message queue: thread 1 places decoded layer a at the first position, thread 2 places decoded layer b at the second position, and thread 3 places decoded layer c at the third position. The layers are placed at their corresponding positions regardless of the order in which the decoding tasks complete.
Threads 4, 5 and 6 are drawing threads and execute their drawing tasks asynchronously. Because the layers must be drawn in order, threads 4, 5 and 6 fetch the decoded layers from the message queue in layer order and execute the drawing tasks in that sequence: thread 4 fetches decoded layer a and draws it; thread 5 fetches decoded layer b and draws it; thread 6 fetches decoded layer c and draws it. The drawing tasks execute asynchronously in sequence, so layer a is drawn first, then layer b, and finally layer c; the complete image is then stored in memory and displayed after the current frame is refreshed.
The decoding and drawing processes communicate through the message queue, which simultaneously allows the threads to execute in parallel and asynchronously, guarantees that each layer is drawn only after it is decoded, and preserves the drawing order of the different layers within the same image.
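A minimal C++ sketch of the queue discipline of fig. 7 follows; the types are assumed, and the consumer side is simplified to a single drawing loop rather than one drawing thread per layer. Each decoding thread writes its finished layer into a fixed slot, and drawing consumes the slots strictly in bottom-to-top order, no matter in which order decoding completes.

```cpp
// Minimal sketch of fixed-slot queueing: decode in parallel, draw in order.
#include <condition_variable>
#include <mutex>
#include <optional>
#include <thread>
#include <vector>

struct Layer {};
Layer decodeLayer(int layerIndex) { return Layer{}; }    // stub decode task
void  drawLayer(const Layer&, int layerIndex) {}         // stub draw task

int main() {
    const int layerCount = 3;                            // layers a, b, c
    std::vector<std::optional<Layer>> slots(layerCount); // the "message queue"
    std::mutex m;
    std::condition_variable cv;

    // One decoding thread per layer; each writes only its own fixed position.
    std::vector<std::thread> decoders;
    for (int i = 0; i < layerCount; ++i)
        decoders.emplace_back([&, i] {
            Layer decoded = decodeLayer(i);
            std::lock_guard<std::mutex> lock(m);
            slots[i] = std::move(decoded);
            cv.notify_all();
        });

    // Drawing consumes slot 0, then 1, then 2, regardless of decode order.
    for (int i = 0; i < layerCount; ++i) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return slots[i].has_value(); });
        Layer layer = std::move(*slots[i]);
        lock.unlock();
        drawLayer(layer, i);                             // bottom-to-top order
    }

    for (auto& t : decoders) t.join();
}
```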
In an alternative embodiment, an additional looping thread may be created that fetches the decoded layers from the message queue in order and hands them to the drawing threads for execution, until all layers in the message queue have been processed, or the task is suspended, or the task is canceled.
In an alternative embodiment, the message queue may also be a priority queue, i.e., the layers in the message queue are given priorities and the drawing tasks are executed in order of priority from high to low.
It should be noted that in the embodiment of the present application the drawing tasks are executed sequentially, i.e., each layer is fetched and drawn according to its order in the message queue. However, the execution order of the layers is not limited to the sequential arrangement of the above embodiment; other arrangements and execution orders may be configured as required.
Further, in order to achieve sharing of rendering environments among threads, in the embodiment of the present application, after determining that a next frame image includes a plurality of layers, the method further includes:
multiple decoding threads and multiple rendering threads acquire rendering environments from the same material resources.
In the implementation process, every thread can be configured with the animation information of the same open graphics library; that is, all threads share the same open graphics library, which ensures that the information of every layer stays consistent.
In an alternative embodiment, a context dependency framework is created based on OpenGL Context, as shown in fig. 8, where the material resource is OpenGL and the rendering environments of all threads are created from a shared OpenGL Context. Each thread thus has its own OpenGL Context while the OpenGL environment is shared among the threads, achieving information consistency.
In addition, if some threads need to share other OpenGL Contexts, rendering environments can be created for those threads based on the other OpenGL Contexts, likewise achieving environment sharing among the threads and information consistency.
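One way to set up such shared rendering environments is sketched below using GLFW; GLFW is an assumption for illustration only (the patent names only OpenGL Context sharing). Each worker thread gets its own context created with the main context as its share partner, so objects such as textures stay consistent across threads.

```cpp
// Minimal sketch, assuming GLFW, of per-thread OpenGL contexts that share
// objects with a main context (cf. FIG. 8).
#include <GLFW/glfw3.h>
#include <thread>

int main() {
    if (!glfwInit()) return 1;

    GLFWwindow* mainWin = glfwCreateWindow(640, 480, "render", nullptr, nullptr);

    // Invisible windows whose contexts share objects with the main context
    // (the last argument to glfwCreateWindow is the context to share with).
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* worker1 = glfwCreateWindow(1, 1, "", nullptr, mainWin);
    GLFWwindow* worker2 = glfwCreateWindow(1, 1, "", nullptr, mainWin);

    std::thread decodeThread([&] {
        glfwMakeContextCurrent(worker1);   // this thread's own context
        // ... decode a layer and upload it as a texture; the texture is
        //     visible from the main context because the contexts share ...
    });
    std::thread drawThread([&] {
        glfwMakeContextCurrent(worker2);
        // ... draw tasks against the same shared resources ...
    });

    glfwMakeContextCurrent(mainWin);       // main thread displays the result
    decodeThread.join();
    drawThread.join();
    glfwTerminate();
}
```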
Compared with the prior art, the animation rendering method of the present application brings obvious benefits. FIG. 9 compares the effect of prior-art synchronous rendering with the asynchronous rendering of the embodiment of the present application. The left diagram of fig. 9 shows synchronous rendering of three animation sequence frames: decoding takes 5.12 ms and overall rendering takes 29.44 ms. The right diagram shows asynchronous rendering of the same three sequence frames: decoding takes 2.56 ms and overall rendering takes 19.88 ms. The comparison shows that both the decoding time and the overall rendering time drop markedly: the asynchronous decoding and asynchronous drawing adopted in the embodiment of the application greatly shorten the rendering time and improve the smoothness of animation playback.
In an alternative embodiment, in order to further reduce rendering time, the concept of static intervals is added in the embodiment of the application. The embodiment of the present application draws the next frame of image, including:
determining that the next frame of image is positioned in the static interval;
acquiring a reference frame layer corresponding to a static interval;
and drawing the reference frame layer as the layer of the next frame image.
In the implementation process, a layer may remain unchanged across several frames. For example, the Nth frame of the animation contains three layers, the (N+1)th frame contains three layers, and the (N+2)th frame contains two layers, where layer d remains unchanged from the Nth frame to the (N+2)th frame. When the animation is created, the Nth to (N+2)th frames can be marked as a static interval whose reference frame layer is layer d of the Nth frame. When the (N+1)th and (N+2)th frame images are drawn, the decoded layer d of the Nth frame can then be reused, i.e., drawn directly as a layer of those frames, which saves the decoding time of that layer in the (N+1)th and (N+2)th frames and further shortens the rendering time. Moreover, because the subsequent frames in the static interval reuse the decoded reference frame layer directly, it does not need to be stored repeatedly, which reduces memory usage.
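A minimal sketch of this reuse follows; the types and the interval bookkeeping are assumptions, not the patent's data structures. The reference frame layer is decoded once when a frame inside the static interval first needs it, and every later frame in the interval receives the cached copy instead of decoding again.

```cpp
// Minimal sketch of static-interval reuse: decode the reference layer once,
// return the cached copy for every later frame inside the interval.
#include <map>
#include <memory>

struct Layer {};
struct StaticInterval { int first, last; };                 // e.g. frames N..N+2

Layer decode(int frame, int layerIndex) { return Layer{}; } // stub decode

std::shared_ptr<const Layer> layerForFrame(
        int frame, int layerIndex,
        const std::map<int, StaticInterval>& intervals,     // keyed by layer index
        std::map<int, std::shared_ptr<const Layer>>& cache) {
    auto it = intervals.find(layerIndex);
    if (it != intervals.end() &&
        frame >= it->second.first && frame <= it->second.last) {
        auto& ref = cache[layerIndex];
        if (!ref)                                           // decode reference once
            ref = std::make_shared<const Layer>(decode(it->second.first, layerIndex));
        return ref;                                         // reuse: no re-decode
    }
    return std::make_shared<const Layer>(decode(frame, layerIndex));
}

int main() {
    std::map<int, StaticInterval> intervals{{0, {5, 7}}};   // layer 0 static, frames 5..7
    std::map<int, std::shared_ptr<const Layer>> cache;
    auto a = layerForFrame(5, 0, intervals, cache);
    auto b = layerForFrame(6, 0, intervals, cache);
    return a == b ? 0 : 1;                                  // same cached object
}
```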
In one specific implementation scenario, animation rendering took 7346 us before the static interval and reference frame layer were configured, and 486 us afterwards, improving rendering efficiency by more than a factor of ten.
The following describes the above procedure in detail with specific embodiments, including the following steps:
and after the first frame is refreshed, acquiring a first frame image in the animation.
And determining the number of layers contained in the first frame image to be M, where M is greater than or equal to 1.
2M threads are created, including M decoding threads and M drawing threads, and the OpenGL Context environments of all 2M threads are created from a shared OpenGL Context.
And the M decoding threads respectively decode the M layers and put the decoded layers into a message queue.
And the M drawing threads sequentially acquire the decoded layers from the message queue according to a preset sequence, and perform asynchronous drawing processing to form a first frame image after drawing is completed.
And displaying the first frame of image on the screen.
And in the process of decoding, drawing and displaying the first frame image, acquiring the second frame image, determining the number of layers it contains to be L, and creating 2L threads, including L decoding threads and L drawing threads.
Each of the L decoding threads performs: judging whether the corresponding layer is provided with a static interval and a reference frame layer, if so, directly acquiring the reference frame layer and putting the reference frame layer into a message queue; if not, decoding the corresponding layer, and putting the decoded layer into a message queue.
And the L drawing threads sequentially acquire the decoded image layers from the message queue according to a preset sequence, perform asynchronous drawing processing to form a second frame image after drawing is completed, and store the second frame image.
And after the second frame is refreshed, acquiring the second frame image from the memory and displaying it on the screen.
In the second frame image display process, a third frame image is acquired. The processing procedure of the second frame image is performed on the third frame image, which is not described herein.
The processing procedure is sequentially executed for the subsequent frame images of the animation until all frame images of the animation are played.
The following is an apparatus embodiment of the present application; for details not described in the apparatus embodiment, reference may be made to the corresponding method embodiments above.
Referring to fig. 10, a block diagram of an animation rendering device according to one embodiment of the present application is shown. The device is implemented, in hardware or a combination of hardware and software, as all or part of the terminal 102 in fig. 1. The device comprises: an acquisition unit 1001, a decoding unit 1002, a drawing unit 1003, and a display unit 1004.
An obtaining unit 1001, configured to obtain a current frame image of an animation, where the current frame image is decoded and drawn in a process of displaying a previous frame image on a display interface;
a display unit 1004, configured to display a current frame image on a display interface;
a decoding unit 1002, configured to decode a next frame image during a display process of a current frame image;
a drawing unit 1003 for drawing a next frame image in the display process of the current frame image.
In an alternative embodiment, the obtaining unit 1001 is further configured to determine that the next frame image includes a plurality of layers;
the decoding unit 1002 is specifically configured to decode multiple layers in parallel by using multiple decoding threads, where one decoding thread is configured to perform a decoding task of one layer;
the drawing unit 1003 is specifically configured to asynchronously draw the decoded multiple layers by using multiple drawing threads, where one drawing thread is used to perform a drawing task of one layer.
In an alternative embodiment, the decoding unit 1002 is specifically configured to:
starting a plurality of decoding threads with corresponding quantity for all layers;
executing decoding tasks by a plurality of decoding threads in parallel;
after any decoding thread finishes the decoding task, the decoded image layer is put in a corresponding position in a message queue;
the drawing unit 1003 is specifically configured to:
starting a plurality of drawing threads with corresponding numbers for all layers;
and the plurality of drawing threads acquire the decoded image layer from the message queue according to the fixed time sequence and draw the image layer.
In an alternative embodiment, the decoding unit 1002 and the rendering unit 1003 are further configured to:
and acquiring rendering environments from the same material resources.
In an alternative embodiment, the decoding unit 1002 is further configured to determine that the next frame image is located in the static interval; acquiring a reference frame layer corresponding to a static interval;
the drawing unit 1003 is further configured to draw the reference frame layer as a layer of the next frame image.
In an alternative embodiment, the obtaining unit 1001 is further configured to obtain a first frame image of the animation;
a decoding unit 1002, configured to decode the first frame image;
a drawing unit 1003 further configured to draw a first frame image;
a display unit 1004, configured to display a first frame image on a display interface;
wherein, the decoding unit 1002 is configured to decode the second frame image during the process of decoding, drawing and displaying the first frame image; a drawing unit 1003 for drawing the second frame image in the process of decoding, drawing, and displaying the first frame image.
Based on the correspondence to the embodiment of the animation rendering method discussed in fig. 3, the embodiment of the application further provides a terminal device 1100, where the terminal device 1100 may be an electronic device such as a smart phone, a tablet computer, a laptop computer, or a PC.
Referring to fig. 11, the terminal device 1100 includes a display unit 1140, a processor 1180 and a memory 1120, wherein the display unit 1140 includes a display panel 1141 for displaying animated images and various operation interfaces of the terminal device 1100, and in the embodiment of the present application, the display unit is mainly used for displaying interfaces, shortcut windows, and the like of installed clients in the terminal device 1100. Alternatively, the display panel 1141 may be configured in the form of an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode) or the like.
The processor 1180 is configured to read a computer program and execute the method the program defines; for example, the processor 1180 reads an animation playing application so as to run the application on the terminal device 1100 and display the application's interface on the display unit 1140. The processor 1180 may include one or more general-purpose processors, and may also include one or more DSPs (Digital Signal Processors) for performing the relevant operations of the solutions provided by the embodiments of the present application.
The memory 1120 generally includes internal memory and external memory; the internal memory may be random access memory (RAM), read-only memory (ROM), cache (CACHE) and the like, and the external memory may be a hard disk, an optical disc, a USB disk, a floppy disk, a tape drive, etc. The memory 1120 stores computer programs, including the applications corresponding to the clients, as well as other data, which may include data generated after the operating system or the applications run, including system data (e.g., configuration parameters of the operating system) and game player data. The program instructions of the embodiment of the present application are stored in the memory 1120, and the processor 1180 executes them to implement the animation rendering method discussed above with respect to fig. 3.
In addition, the terminal device 1100 may further include the display unit 1140 for receiving input digital information, character information or touch/contactless gesture operations and generating signal inputs related to the animation rendering of the terminal device 1100. Specifically, in the embodiment of the present application, the display unit 1140 may include the display panel 1141. The display panel 1141, for example a touch screen, can collect the user's touch operations on or near it (such as operations performed on the display panel 1141 with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the display panel 1141 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position the user touches, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends them to the processor 1180, and can also receive commands from the processor 1180 and execute them.
The display panel 1141 may be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the display unit 1140, the terminal device 1100 may further include an input unit 1130, and the input unit 1130 may include, but is not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, etc. For example, the user may operate by touching the display panel 1141 during the animation playing process, or may operate by the input unit 1130, for example, by a shortcut key corresponding to a physical keyboard.
In addition to the above, terminal device 1100 can also include a power supply 1190 for powering other modules, audio circuitry 1160, near field communication module 1170, and RF circuitry. The terminal device 1100 may also include one or more sensors 1150, such as acceleration sensors, light sensors, pressure sensors, and the like. Audio circuitry 1160 may comprise, in particular, a speaker 1161 and a microphone 1162, etc., for example, terminal device 1100 may play animated sounds through speaker 1161.
Based on the same inventive concept, an embodiment of the present application provides a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the animation rendering method discussed above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.
Claims (10)
1. An animation rendering method, the method comprising:
acquiring a current frame image of an animation, wherein the current frame image is decoded and drawn in the process of displaying a previous frame image on a display interface;
displaying the current frame image on a display interface, determining a plurality of layers included in a next frame image in the display process of the current frame image, and starting a plurality of decoding threads for executing decoding tasks in parallel in a corresponding number for the plurality of layers, wherein one decoding thread is used for executing the decoding task of one layer; after any decoding thread finishes the decoding task, the decoded image layer is put in a corresponding position in a message queue;
and starting a plurality of drawing threads corresponding to the plurality of layers, and acquiring the decoded layers from the message queue and drawing the decoded layers by the plurality of drawing threads according to a fixed time sequence, wherein one drawing thread is used for executing the drawing task of one layer.
2. The method of claim 1, wherein after determining the plurality of layers included in the next frame of image, further comprising:
and the plurality of decoding threads and the plurality of drawing threads acquire rendering environments from the same material resource.
3. The method as recited in claim 1, further comprising:
determining that the next frame of image is positioned in a static interval;
acquiring a reference frame layer corresponding to the static interval;
and drawing the reference frame layer as the layer of the next frame image.
4. A method according to any one of claims 1 to 3, further comprising:
acquiring a first frame image of the animation;
decoding and drawing the first frame image;
displaying the first frame image on a display interface;
wherein, in the process of decoding, drawing and displaying the first frame image, the second frame image is decoded and drawn.
5. An animation rendering device, characterized in that the device comprises:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a current frame image of an animation, and the current frame image is decoded and drawn in the process of displaying a previous frame image on the display interface;
the display unit is used for displaying the current frame image on a display interface;
the decoding unit is used for determining a plurality of layers included in the next frame image in the display process of the current frame image, and starting a plurality of decoding threads for executing decoding tasks in parallel in a corresponding number for the plurality of layers, wherein one decoding thread is used for executing the decoding task of one layer; after any decoding thread finishes the decoding task, the decoded image layer is put in a corresponding position in a message queue;
the drawing unit is used for starting a plurality of drawing threads corresponding to the plurality of layers, the plurality of drawing threads acquire the decoded layers from the message queue according to a fixed time sequence and draw the decoded layers, and one drawing thread is used for executing the drawing task of one layer.
6. The apparatus of claim 5, wherein the decoding unit and the rendering unit are further configured to:
and acquiring rendering environments from the same material resources.
7. The apparatus of claim 5, wherein the decoding unit is further configured to determine that the next frame image is located within a static interval; acquiring a reference frame layer corresponding to the static interval;
and the drawing unit is also used for drawing the reference frame layer as the layer of the next frame image.
8. The apparatus according to any one of claims 5 to 7, wherein the acquiring unit is further configured to acquire a first frame image of an animation;
the decoding unit is further used for decoding the first frame image;
the drawing unit is also used for drawing the first frame image;
the display unit is further used for displaying the first frame image on the display interface;
the decoding unit is used for decoding a second frame image in the processes of decoding, drawing and displaying the first frame image; the drawing unit is used for drawing the second frame image in the process of decoding, drawing and displaying the first frame image.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor,
the processor, when executing the computer program, implements the animation rendering method as claimed in any one of claims 1 to 4.
10. A computer readable storage medium having stored thereon processor-executable instructions,
the processor executable instructions when executed by a processor are for implementing the animation rendering method of any of claims 1 to 4.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911349293.3A CN113034653B (en) | 2019-12-24 | 2019-12-24 | Animation rendering method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911349293.3A CN113034653B (en) | 2019-12-24 | 2019-12-24 | Animation rendering method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113034653A CN113034653A (en) | 2021-06-25 |
| CN113034653B true CN113034653B (en) | 2023-08-08 |
Family
ID=76451913
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911349293.3A Active CN113034653B (en) | 2019-12-24 | 2019-12-24 | Animation rendering method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113034653B (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114567789A (en) * | 2021-11-04 | 2022-05-31 | 浙江浙大中控信息技术有限公司 | Video live broadcast method based on double buffer queues and video frame congestion control |
| CN114241094B (en) * | 2021-12-16 | 2025-04-22 | 广州博冠信息科技有限公司 | Animation drawing method, device, storage medium and electronic device |
| CN119450079A (en) * | 2023-07-31 | 2025-02-14 | 华为技术有限公司 | Interface image processing method, electronic device and storage medium |
| CN117812332B (en) * | 2023-12-29 | 2024-09-24 | 书行科技(北京)有限公司 | Playing processing method and device, electronic equipment and computer storage medium |
| CN118115634A (en) * | 2024-03-18 | 2024-05-31 | 海南渔人映画文化传媒有限公司 | Digital animation rendering method |
| CN119248157A (en) * | 2024-03-29 | 2025-01-03 | 荣耀终端有限公司 | Data processing method, device, equipment, storage medium and program product |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6430591B1 (en) * | 1997-05-30 | 2002-08-06 | Microsoft Corporation | System and method for rendering electronic images |
| CN101031085A (en) * | 2007-03-30 | 2007-09-05 | 中国联合通信有限公司 | Method for processing mobile-terminal frame carboon |
| CN103221918A (en) * | 2010-11-18 | 2013-07-24 | 德克萨斯仪器股份有限公司 | Context switching method and device |
| CN106095366A (en) * | 2016-06-07 | 2016-11-09 | 北京小鸟看看科技有限公司 | A kind of shorten the method for picture delay, device and virtual reality device |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5635672B1 (en) * | 2013-12-05 | 2014-12-03 | 株式会社 ディー・エヌ・エー | Image processing apparatus and image processing program |
| US9798581B2 (en) * | 2014-09-24 | 2017-10-24 | Facebook, Inc. | Multi-threaded processing of user interfaces for an application |
- 2019-12-24: application CN201911349293.3A filed in China; granted as CN113034653B (status: Active)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6430591B1 (en) * | 1997-05-30 | 2002-08-06 | Microsoft Corporation | System and method for rendering electronic images |
| CN101031085A (en) * | 2007-03-30 | 2007-09-05 | 中国联合通信有限公司 | Method for processing mobile-terminal frame carboon |
| CN103221918A (en) * | 2010-11-18 | 2013-07-24 | 德克萨斯仪器股份有限公司 | Context switching method and device |
| CN106095366A (en) * | 2016-06-07 | 2016-11-09 | 北京小鸟看看科技有限公司 | A kind of shorten the method for picture delay, device and virtual reality device |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113034653A (en) | 2021-06-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113034653B (en) | Animation rendering method and device | |
| EP3111318B1 (en) | Cross-platform rendering engine | |
| CN110989878B (en) | Animation display method and device in applet, electronic equipment and storage medium | |
| US20220241689A1 (en) | Game Character Rendering Method And Apparatus, Electronic Device, And Computer-Readable Medium | |
| US9342322B2 (en) | System and method for layering using tile-based renderers | |
| US20130055072A1 (en) | Multi-Threaded Graphical Display System | |
| CN113244614B (en) | Image picture display method, device, equipment and storage medium | |
| WO2020186935A1 (en) | Virtual object displaying method and device, electronic apparatus, and computer-readable storage medium | |
| CN105027039A (en) | Reducing latency in ink rendering | |
| JP2018511859A (en) | Backward compatibility using spoof clock and fine grain frequency control | |
| EP4478285A1 (en) | Image display method and apparatus, electronic device, and storage medium | |
| CN113453073B (en) | Image rendering method, device, electronic equipment and storage medium | |
| CN113411664A (en) | Video processing method and device based on sub-application and computer equipment | |
| CN110750664A (en) | Picture display method and device | |
| WO2018175869A1 (en) | System and method for mass-animating characters in animated sequences | |
| CN115705668A (en) | View drawing method and device and storage medium | |
| CN115661375B (en) | Three-dimensional hair style generation method and device, electronic equipment and storage medium | |
| US20230267063A1 (en) | Real-time latency measurements in streaming systems and applications | |
| CN104142807A (en) | Android-control-based method and system for drawing image through OpenGL | |
| CN111107427A (en) | Image processing method and related product | |
| CN115601555A (en) | Image processing method and apparatus, device and medium | |
| CN115018955A (en) | An image generation method and device | |
| HK40050662A (en) | An animation rendering method and device | |
| CN116168123A (en) | Image processing method, device and electronic equipment | |
| EP4538979A1 (en) | Method and apparatus for image processing, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40050662; Country of ref document: HK |
| GR01 | Patent grant | ||