CN110298780B - Map rendering method, map rendering device and computer storage medium - Google Patents
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
Abstract
The disclosure provides a map rendering method and device. The method comprises the following steps: placing a patch to be rendered of the map, together with its identifier, into a rendering call command and sending the command to the GPU; and storing the correspondence between the patch identifier and the rendering style, so that the GPU can render the patch by looking up the correspondence with the identifier. The disclosed embodiments reduce stuttering during map rendering.
Description
Technical Field
The disclosure relates to the field of computer graphics, and in particular to a map rendering method and device.
Background
Map rendering in the prior art is performed jointly by a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU). A DrawCall command is the command by which the CPU invokes the GPU to render. A map tile to be rendered is decomposed into patches of various element types, such as points, lines, and faces. When placing element patches into a DrawCall command for GPU rendering, typically only one type of element patch can be placed per command. The CPU configures a rendering style (color, line width, etc.) for that type of element patch and sends the patches to the GPU in the DrawCall command; the GPU then renders them according to the style. If patches of too many element types were placed in one DrawCall, they would generally require different rendering styles, and the GPU could not tell which rendering style applies to which element patch.
A separate DrawCall command is invoked for each element type (point, line, face, etc.); sometimes a single element type even requires multiple DrawCall commands. Each DrawCall involves substantial preparation (checking rendering state, submitting rendering data, submitting rendering state, etc.), which keeps the CPU exceptionally busy.
The GPU itself has very powerful computational capacity and can process rendering tasks quickly. When there are too many DrawCall commands, the CPU incurs heavy extra overhead and runs slowly while the GPU sits idle, causing stuttering during rendering.
Disclosure of Invention
One object of the present disclosure is to reduce stuttering in map rendering.
According to a first aspect of the disclosed embodiments, a map rendering method on the CPU side is disclosed, comprising:
placing a patch to be rendered of the map, together with its identifier, into a rendering call command and sending the command to the GPU;
and storing the correspondence between the patch identifier and the rendering style, so that the GPU can render the patch by looking up the correspondence with the identifier.
According to a second aspect of the embodiments of the present disclosure, there is disclosed a map rendering method on the GPU side, comprising:
receiving a rendering call command, wherein the rendering call command comprises a patch to be rendered of a map and its identifier;
determining a rendering style by looking up, with the patch identifier, the stored correspondence between patch identifiers and rendering styles;
and rendering the patch with the rendering style.
According to a third aspect of an embodiment of the present disclosure, there is disclosed a map rendering apparatus comprising:
a sending unit configured to place a patch to be rendered of the map and its identifier into a rendering call command and send the command to the GPU;
and a storage unit configured to store the correspondence between the patch identifier and the rendering style, so that the GPU renders the patch by looking up the correspondence with the identifier.
According to a fourth aspect of an embodiment of the present disclosure, there is disclosed a map rendering apparatus comprising:
a receiving unit configured to receive a rendering call command, wherein the rendering call command comprises a patch to be rendered of the map and its identifier;
a determining unit configured to determine a rendering style by looking up, with the patch identifier, the stored correspondence between patch identifiers and rendering styles;
and a rendering unit configured to render the patch with the rendering style.
According to a fifth aspect of the embodiments of the present disclosure, there is disclosed a map rendering apparatus including:
a memory storing computer readable instructions;
a processor that reads computer readable instructions stored by the memory to perform a method according to an embodiment of the present disclosure.
According to a sixth aspect of embodiments of the present disclosure, a computer program medium is disclosed, having computer readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform a method according to embodiments of the present disclosure.
In the disclosed embodiments, the patch to be rendered and its identifier are placed into a rendering call command and sent to the GPU. Because each patch carries an identifier, different patches can be distinguished, so patches of more than one element type can be placed into a single rendering call command without restriction. This avoids the efficiency problem of the prior art, where only one element type could be placed per call, forcing the rendering call command to be sent many times. Meanwhile, the correspondence between the patch identifier and the rendering style is stored, so that during rendering the GPU can obtain the style for each patch by looking up the correspondence with its identifier. By transferring the patch identifiers and storing the correspondence, a large amount of work moves from the CPU side to the GPU side: the GPU's strong processing capacity is exploited, the CPU's processing load is reduced, the load between CPU and GPU is balanced, and stuttering during rendering is reduced.
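The batching scheme described above can be sketched in Python. This is an illustrative sketch only; the function names, the patch tuple layout, and the style values are assumptions, not taken from the patent:

```python
# Sketch: batch patches of mixed element types into one render call,
# tagging each patch with an identifier so the GPU can look up its style.

def build_render_call(patches):
    """patches: list of (patch_id, element_type, geometry) tuples.
    One command carries every patch plus its identifier."""
    return {"patches": [(pid, geom) for pid, _etype, geom in patches]}

def build_style_table(patches, styles):
    """Store the patch-id -> rendering-style correspondence for the GPU side."""
    return {pid: styles[etype] for pid, etype, _geom in patches}

patches = [
    ("a001", "point", "..."),
    ("a002", "line", "..."),
    ("a003", "face", "..."),
]
styles = {
    "point": {"color": "red"},
    "line": {"color": "blue", "width": 2},
    "face": {"color": "green"},
}

call = build_render_call(patches)          # one call for three element types
table = build_style_table(patches, styles)  # id -> style correspondence
```

With identifiers attached, the three element types travel in a single call rather than three separate DrawCalls.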
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Fig. 1 illustrates a framework diagram of a usage environment of a map rendering method according to an example embodiment of the present disclosure.
Fig. 2A illustrates 4 map tiles to be rendered displayed when the display scale is one level according to an example embodiment of the present disclosure.
FIG. 2B illustrates 16 map tiles to be rendered displayed when the display scale is two-level, according to an example embodiment of the present disclosure.
Fig. 3 illustrates a flowchart of a map rendering method at a CPU side according to an example embodiment of the present disclosure.
Fig. 4 shows a detailed flowchart of step 110 in fig. 3 according to an example embodiment of the present disclosure.
Fig. 5 shows a flowchart of a map rendering method at a GPU side according to an example embodiment of the present disclosure.
Fig. 6 is a schematic diagram illustrating a texture storing correspondence between a patch identifier to be rendered and a rendering style at each display scale according to an example embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating a specific application scenario of a map rendering method according to an example embodiment of the present disclosure.
Fig. 8 illustrates a block diagram of a structure of a map rendering apparatus at a CPU side according to an example embodiment of the present disclosure.
Fig. 9 illustrates a block diagram of a map rendering apparatus on a GPU side according to an example embodiment of the present disclosure.
Fig. 10 illustrates a block diagram of a map rendering apparatus according to an example embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, etc. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 illustrates a framework diagram of a usage environment of a map rendering method according to an example embodiment of the present disclosure. The usage environment includes a Central Processing Unit (CPU) 5 and a Graphics Processing Unit (GPU) 6. GPU 6 contains shader 7. CPU 5 is responsible for data preparation before rendering; GPU 6 performs the rendering; shader 7 is the unit inside GPU 6 that carries out the rendering. CPU 5 invokes GPU 6 to render via a rendering call command. Data preparation before rendering includes dividing the map into map tiles, determining the different types of elements in each map tile, dividing those elements into element patches, and configuring a rendering style (color, line width, etc.) for each element patch. This workload is very large.
Fig. 3 shows a flowchart of a map rendering method on the CPU 5 side according to an example embodiment of the present disclosure. Map rendering is the process of processing a software-generated map model to achieve a displayable effect. It includes coloring, as well as light-effect, shadow-effect, and surface-texture-effect processing for achieving special display effects.
As shown in fig. 3, a map rendering method at a CPU 5 side according to one embodiment of the present disclosure includes:
and step 120, storing the corresponding relationship between the identification of the patch to be rendered and the rendering style, so that the GPU 6 renders the patch to be rendered by referring to the corresponding relationship based on the identification of the patch to be rendered.
These steps are described separately below.
In step 110, the patch to be rendered of the map and its identifier are placed in a rendering call command and sent to GPU 6.
In one embodiment, some of the patches divided from the map, together with their corresponding identifiers, may be placed in the same rendering call command and sent to GPU 6. For this set of patches, the identifiers make different patches distinguishable, so patches of multiple element types can be sent in the same rendering call command. This avoids the prior-art limitation that, without patch identifiers in the rendering call command, only one element type could be sent per command; the CPU's processing load is thus reduced, the load between CPU and GPU is balanced, and stuttering during rendering is reduced.
In one embodiment, all patches divided from the map, together with their corresponding identifiers, may be placed in the same rendering call command and sent to GPU 6. For all the divided patches, the identifiers make different patches distinguishable, so every element patch can be sent in a single rendering call command. This greatly improves processing efficiency, reduces the CPU's processing load, and reduces stuttering during rendering.
As shown in FIG. 4, in one embodiment, step 110 includes:
step 1101, dividing the map to be rendered into map tiles;
step 1102, for each divided map tile, determining the different types of elements it contains;
step 1103, dividing each element into patches to be rendered;
step 1104, assigning an identifier to each patch to be rendered;
step 1105, placing the divided patches and their assigned identifiers into a rendering call command and sending it to GPU 6.
In step 1101, a map to be rendered is divided into map tiles.
The map to be rendered refers to a software-generated map model that is an object of rendering. Rendering is the process of processing a software-generated map model to achieve a displayable effect.
Map tiles are blocks into which a map is pre-divided in a multi-resolution hierarchical model. Map rendering is performed tile-wise, i.e. one tile is rendered separately from another tile. For example, four tiles 0001, 0002, 0005, 0006 in FIG. 2A are rendered separately, or 16 tiles 0001-0016 in FIG. 2B are rendered separately.
In one embodiment, step 1101 includes: dividing the map to be rendered into map tiles as pixel blocks of fixed pixel length and fixed pixel width.
In general, the size of a tile is a block of 64 pixels by 64 pixels. In this case, the map to be rendered is divided into map tiles by blocks of pixels of 64 pixels × 64 pixels. Of course, in some cases, other tile sizes may be specified.
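Dividing a map image into fixed-size tiles reduces to simple grid arithmetic. A minimal sketch, assuming the 64×64-pixel tile size mentioned above (the function name is illustrative):

```python
def tile_grid(map_width_px, map_height_px, tile_size=64):
    """Return the (x, y) pixel origins of the fixed-size tiles covering the map."""
    cols = map_width_px // tile_size
    rows = map_height_px // tile_size
    return [(c * tile_size, r * tile_size)
            for r in range(rows) for c in range(cols)]
```

For a 128×128-pixel map this yields the four tile origins (0, 0), (64, 0), (0, 64), (64, 64).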
While the size of the tiles is generally fixed, the number of tiles displayed in a display may vary, depending on the display scale. When the display scale is one level, it is generally specified that 2 × 2 tiles are displayed, and as shown in fig. 2A, the pixels of the entire display screen are 128 pixels × 128 pixels. When the display scale is two-level, it is generally specified that 4 × 4 tiles are displayed, and as shown in fig. 2B, the pixels of the entire display screen are 256 pixels × 256 pixels. And so on. As can be seen from fig. 2A and 2B, the higher the number of display scale steps, the larger the area displayed, and the more pixels the display screen includes.
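The pattern above — 2×2 tiles at level one, 4×4 at level two — amounts to 2^level tiles per side. A small sketch of that relationship (a reading of the figures, not a formula stated in the patent):

```python
def tiles_per_side(level):
    """Number of tiles along one screen edge at a given display scale level."""
    return 2 ** level

def screen_pixels(level, tile_size=64):
    """Total display-screen size in pixels at a given display scale level."""
    side = tiles_per_side(level) * tile_size
    return (side, side)
```

This reproduces the examples: 128×128 pixels at level one, 256×256 pixels at level two.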
In step 1102, for each map tile that is divided, the different types of elements that the map tile contains are determined.
Each map tile produced by the division in step 1101 has a location. For example, for the 16 map tiles divided out in FIG. 2B, the location of each map tile can be represented in two-dimensional coordinates. Tile 0016 is in the 4th column from left to right on the horizontal axis and the lowest row on the vertical axis, so its position coordinates can be taken as (4, 1). Tile 0006 is in the 2nd column from left to right on the horizontal axis and the 3rd row from the bottom on the vertical axis, so its position coordinates can be taken as (2, 3).
In one embodiment, step 1102 includes: map tiles for predetermined locations are obtained and the different types of elements contained in the map tiles are determined until the map tiles for all locations are traversed. For example, a map tile of location coordinates (1, 1) is first obtained and the different types of elements it includes are determined; then, obtaining map tiles of the position coordinates (1, 2) and determining the different types of elements included in the map tiles; and so on until the map tile for the location coordinates (4, 4) is obtained and the different types of elements it contains are determined.
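The traversal described above — visiting (1, 1), then (1, 2), and so on through (4, 4) — can be sketched as follows (the per-tile element determination is left as a comment, since its details are map-data specific):

```python
def traverse_tiles(side=4):
    """Visit tile coordinates in the order described: (1,1), (1,2), ..., (4,4)."""
    visited = []
    for col in range(1, side + 1):
        for row in range(1, side + 1):
            visited.append((col, row))
            # here: fetch the tile at (col, row) and determine the
            # different types of elements it contains
    return visited
```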
An element is a basic unit composing a map. A map is composed of points, lines, faces, and overlays, so these are the 4 element types of a map. A point element 1 is an element represented as a point on the map, for example "guest on wink" in FIG. 2A; note that unlike a mathematical point, any point drawn on a map occupies a certain area. A line element 2 is an element represented as a line of a certain width on the map, for example "tian feng road" in FIG. 2A; unlike a mathematical line, a line drawn on a map occupies a certain area and represents streets and alleys, since a line occupying no area would be meaningless on a map. A face element 3 is an element represented as a region on the map, for example "makkaido" in FIG. 2A, occupying a certain area. An overlay element is, for example, a character in FIG. 2A, or the semi-transparent thick line overlaid on a navigation path selected on the map when a navigation path is displayed.
For example, in the tile 0005 of FIG. 2A, 4 elements are identified:
in step 1103, each element is divided into patches to be rendered.
A patch to be rendered is a patch into which an element is divided, serving as the minimum unit of rendering. Triangular patches are typically used: in the rendering field, no matter how complex an element's shape is, it can ultimately be divided into a number of triangular patches, and rendering then proceeds with these triangles as the minimum unit. Existing methods are used to divide elements into triangular patches, so the details are omitted here.
In one embodiment, the element patches decomposed in step 110 span at least two element types, i.e., at least two of point-element patches, line-element patches, face-element patches, and overlay-element patches.
For example, in the tile 0005 of fig. 2A, victory guests are divided into 2 element patches, the tian feng road is divided into 4 element patches, the ma jia xiao is divided into 4 element patches, and the fangyu is divided into 4 element patches, for 14 element patches in total.
In step 1104, a to-be-rendered patch identification is assigned to the to-be-rendered patch.
The identification of the to-be-rendered patch refers to identification which is allocated to the to-be-rendered patch and enables the to-be-rendered patch to be distinguished from other to-be-rendered patches. It has uniqueness to the patch to be rendered. For example, the 14 patches to be rendered are respectively assigned patch IDs a001-a014 to be rendered.
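Assigning unique identifiers such as a001 through a014 can be sketched as a simple counter (the `a` prefix and zero-padded numbering follow the example IDs in the text; the function name is illustrative):

```python
def assign_patch_ids(patches, prefix="a"):
    """Give each patch a unique identifier such as a001, a002, ..."""
    return {f"{prefix}{i:03d}": patch
            for i, patch in enumerate(patches, start=1)}

# 14 patches receive IDs a001 through a014:
patch_ids = assign_patch_ids(["patch"] * 14)
```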
In step 1105, the divided patches to be rendered and the allocated patch identifications to be rendered are placed into a rendering call command and sent to the GPU 6.
In one embodiment, a part of the to-be-rendered tiles separated from the map and the corresponding allocated identifiers of the to-be-rendered tiles may be placed in the same rendering call command and sent to the GPU 6; in another embodiment, all tiles to be rendered that are separated from the map, together with the corresponding assigned tile identifiers to be rendered, may be placed in the same rendering call command and sent to GPU 6.
The render call command is a command, such as a DrawCall command, that CPU 5 calls GPU 6 to render.
Because each element patch to be rendered carries an identifier, different patches can be distinguished, so patches of more than one element type can be placed into a single rendering call command without restriction. This avoids the efficiency problem of the prior art, where only one element type could be placed per call, forcing the rendering call command to be sent frequently and repeatedly.
In step 120, a corresponding relationship between the identification of the to-be-rendered patch and the rendering style is saved, so that GPU 6 renders the to-be-rendered patch with reference to the corresponding relationship based on the identification of the to-be-rendered patch.
A rendering style is the style into which a patch is rendered. In one embodiment, it includes color, line width, and the like. Color is the color used for rendering; line width is the width of a line used in rendering. A color may be specified by a red (R) value, a green (G) value, a blue (B) value, and an opacity (α). Opacity is the degree to which the color is not transparent: 100% when completely opaque, 0% when completely transparent.
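A rendering style as just described — RGBA color plus line width — might be represented as follows (field names and the 0..1 value range are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class RenderStyle:
    r: float              # red channel, 0..1
    g: float              # green channel, 0..1
    b: float              # blue channel, 0..1
    alpha: float          # opacity: 1.0 fully opaque, 0.0 fully transparent
    line_width: float = 0.0  # only meaningful for line-element patches

# A fully opaque red line style, 0.5 wide:
opaque_red_line = RenderStyle(1.0, 0.0, 0.0, 1.0, line_width=0.5)
```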
In one embodiment, the patch identifier to be rendered and the rendering style may be pre-configured and stored in a correspondence table, for example:
in one embodiment, the correspondence between the identification of the to-be-rendered surface patch and the rendering style comprises the correspondence between the identification of the to-be-rendered surface patch and the rendering style at each display scale.
In a map, sometimes it is necessary to use different rendering styles for the same element patch at different display scales, for example, the road line is originally red and needs to be changed to blue after being enlarged. Therefore, in the embodiment of the disclosure, the rendering style of each element patch under different display scales is stored, and the display change requirement of the same element patch under different display scales is met.
The correspondence table formed in this case is, for example:
in one embodiment, step 120 includes: and storing the corresponding relation into the texture of the GPU 6.
Texture in computer graphics covers both texture in the everyday sense — an object surface exhibiting uneven grooves — and color patterns on a smooth object surface. In computer graphics the two are generated in exactly the same way, which is why both are called textures. The embodiments of the present disclosure refer to the latter. The texture is stored in GPU 6 and is composed of texture blocks.
The RGBA values of each texture block represent a red (R) value, a green (G) value, a blue (B) value, and an opacity (α). For an object of a single uniform color, one texture block suffices to express its R, G, B, and α values. If an object's colors vary, multiple texture blocks are needed to represent the R, G, B, and α values of the different colors.
In one embodiment, the texture includes an array of texture blocks when GPU 6 stores. The first arrangement direction of the texture block array represents different element patch identifications, and the second arrangement direction represents different display scales.
The texture block array refers to an array composed of texture blocks. For example, if there are 14 patches to be rendered, and there are 3 display scales corresponding to 14 rows and 3 columns, respectively, then the texture block array has 14 × 3=42 texture blocks, and the texture block in the 7 th row and the 2 nd column represents the rendering style of the 7 th element patch at the 2 nd display scale.
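The row/column addressing in the example — 14 patch rows by 3 display-scale columns — reduces to simple index arithmetic. A sketch of the flat-index computation (real GPU texture sampling would use normalized coordinates; this only illustrates the layout):

```python
def texel_index(patch_row, scale_col, num_scales=3):
    """Flat index of the texture block for a patch row and display-scale
    column, both 1-based as in the text, in row-major order."""
    return (patch_row - 1) * num_scales + (scale_col - 1)

# The block in row 7, column 2 of the 14x3 array:
idx = texel_index(7, 2)
```

The 14×3 array holds 42 blocks with flat indices 0 through 41.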
In one embodiment, the rendering style includes a color and a line width. The texture block array includes a color texture block array 12 and a line width texture block array 11. Color texture block array 12 is an array of texture block components representing colors. Line width texture block array 11 is an array of texture blocks representing line widths.
The first arrangement direction of the color texture block array 12 represents different patch identifiers to be rendered (e.g., Y-axis direction in fig. 6), and the second arrangement direction represents different display scales (e.g., X-axis direction in fig. 6), so that the specific texture block 15 in the color texture block array 12 represents a color at the specific display scale of the specific patch identifier to be rendered.
The first arrangement direction of the line width texture block array 11 represents different patch identifiers to be rendered (e.g., the Y-axis direction in fig. 6), and the other arrangement direction represents different display scales (e.g., the X-axis direction in fig. 6), so that the specific texture block 14 in the line width texture block array represents the line width of the specific display scale of the specific patch identifier to be rendered.
As shown in fig. 6, if there are 14 patch identifications a001 to a014 to be rendered, 2 display scales (primary, secondary), the color texture block array 12 has 14 × 2=28 texture blocks representing colors, and the line width texture block array 11 has 14 × 2=28 texture blocks representing line widths, and 56 texture blocks in total. A boundary line 13 is provided between the color texture block array 12 and the line-width texture block array 11.
The embodiment of the disclosure skillfully utilizes the texture to represent the corresponding relation among the mark of the surface patch to be rendered, the display scale and the rendering style, and improves the storage efficiency. Because the texture is a content which must be stored during rendering, the corresponding relation is embodied in the texture, and the burden brought by additionally storing the corresponding relation is avoided.
In one embodiment, the red (R), green (G), blue (B), and opacity (α) values of a particular texture block in the color texture block array represent the R, G, B, and α values of a particular patch identifier at a particular display scale. For example, if the texture block with ordinate a001 and abscissa "secondary" in color texture block array 12 has R, G, B, and α values of 0.6, 0.3, 0.1, and 1 respectively, then the element patch with identifier a001 has R, G, B, and α values of 0.6, 0.3, 0.1, and 1 at the secondary display scale.
The disclosed embodiments exploit the fact that a texture block already carries four quantities — an R value, a G value, a B value, and an α value — to represent the four color parameters of a particular element patch at a particular display scale. Representing four quantities with a single texture block improves storage efficiency.
In one embodiment, one of a red (R) color value, a green (G) color value, a blue (B) color value, and an opacity (α) of a particular texture block in the line-width texture block array represents a line width at a particular display scale for a particular patch identification to be rendered. That is, the texture block has four variables of red (R) color value, green (G) color value, blue (B) color value, and opacity (α), and only one variable is required for representing the line width, and the remaining three variables are idle.
For example, let the red (R) channel of a texture block represent the line width at a particular display scale for a particular patch identifier. If the red (R) value of the texture block with ordinate a013 and abscissa "level two" in line width texture block array 11 is 0.5, it represents that the element patch with identifier a013 has a line width of 0.5 mm at the level-two display scale.
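Packing a scalar line width into one channel of an RGBA texture block, leaving the other three channels idle, can be sketched as follows (the choice of the R channel follows the example above; the lack of any scaling is an assumption):

```python
def encode_line_width(width):
    """Pack a line width into the R channel of an RGBA texel; G, B, A stay idle."""
    return (width, 0.0, 0.0, 0.0)

def decode_line_width(texel):
    """Recover the line width from the R channel of the texel."""
    return texel[0]

texel = encode_line_width(0.5)  # patch a013 at the level-two display scale
```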
After the rendering call command is sent to GPU 6, each element patch and its corresponding ID are extracted inside GPU 6. Shader 7 looks up the correspondence in the texture by the patch ID to find the rendering color and line width of the corresponding element patch, then renders the patch with that color and line width in GPU 6.
In another embodiment, the correspondence between patch identifiers and rendering styles (including styles at different display scales) is stored in a matrix instead of a texture, which can likewise improve the utilization of storage resources.
In this embodiment, the correspondence is stored in the GPU 6 in the form of a matrix. The first direction of the matrix represents different surface patch identifications to be rendered, and the second direction represents different display scales, so that a specific element in the matrix represents a rendering style under the specific display scale of the specific surface patch identification to be rendered. For example, there are two element patches whose IDs are a001 and a002, respectively. Its rendering style at two display scales is as follows:
therefore, the correspondence relationship can be expressed by the following matrix:
After the rendering call command is sent to the GPU 6, each element patch and its corresponding ID are taken out inside the GPU 6. The shader 7 looks up the correspondence stored in the matrix according to the ID of the patch to be rendered and finds the rendering color and line width of the corresponding element patch. The shader 7 then renders the element patch in the GPU 6 using that color and line width.
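A minimal sketch of the matrix form of the correspondence, with hypothetical IDs and style values (the patent's own example table is not reproduced here):

```python
# Hypothetical matrix form of the correspondence: rows (first direction)
# are to-be-rendered patch IDs, columns (second direction) are display
# scales; each element is one rendering style. All values are made up.

ids = ["a001", "a002"]
scales = ["primary", "secondary"]

# Each matrix element: ((R, G, B, alpha), line_width_mm)
style_matrix = [
    [((0.2, 0.5, 0.3, 1.0), 0.3), ((0.2, 0.5, 0.3, 1.0), 0.2)],  # a001
    [((0.7, 0.1, 0.1, 1.0), 0.4), ((0.7, 0.1, 0.1, 1.0), 0.3)],  # a002
]

def lookup_style(patch_id, scale):
    # What the shader does: index the matrix by (patch ID, display scale).
    return style_matrix[ids.index(patch_id)][scales.index(scale)]

color, width = lookup_style("a002", "secondary")
# color == (0.7, 0.1, 0.1, 1.0), width == 0.3
```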
Fig. 5 shows a flowchart of a map rendering method at the GPU 6 side according to an example embodiment of the present disclosure.
A map rendering method on the GPU 6 side according to an embodiment of the present disclosure includes:
step 210, receiving a rendering call command, where the rendering call command includes a to-be-rendered patch of the map and a to-be-rendered patch identifier;

step 220, determining a rendering style based on the to-be-rendered patch identifier by referring to the stored correspondence between the to-be-rendered patch identifier and the rendering style;

and step 230, rendering the to-be-rendered patch by using the rendering style.
Steps 210-230 differ from the map rendering method shown in fig. 3 only in perspective: steps 210-230 are described from the side of the GPU 6, while the method shown in fig. 3 is described from the side of the CPU 5. Steps 210-230 are therefore not described again in detail.
In one embodiment, the rendering call command includes all patches to be rendered that are divided from the map and the corresponding to-be-rendered patch identifiers.
In one embodiment, the correspondence between the to-be-rendered patch identifier and the rendering style includes: the correspondence between the to-be-rendered patch identifier and the rendering style at each display scale. Step 220 then specifically includes: determining the rendering style based on the to-be-rendered patch identifier and the current display scale, by referring to the correspondence between the to-be-rendered patch identifier and the rendering style at each display scale.
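The GPU-side flow of steps 210-230 can be sketched as follows; the stored styles, patch IDs, and geometry placeholders are illustrative assumptions:

```python
# Illustrative sketch of the GPU-side flow of steps 210-230: receive the
# rendering call command, resolve each patch's style from the stored
# correspondence by (ID, display scale), then render. "Rendering" here
# just records what a shader would draw; all data is hypothetical.

styles = {
    ("a001", "secondary"): ((0.2, 0.5, 0.3, 1.0), 0.2),
    ("a002", "secondary"): ((0.7, 0.1, 0.1, 1.0), 0.3),
}

def handle_render_call(patches, display_scale):
    rendered = []
    for patch_id, geometry in patches:                    # step 210
        color, width = styles[(patch_id, display_scale)]  # step 220
        rendered.append((geometry, color, width))         # step 230
    return rendered

out = handle_render_call(
    [("a001", "road segment"), ("a002", "building wall")], "secondary"
)
```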
Fig. 7 shows a flowchart of a specific application example scenario of a map rendering method according to an example embodiment of the present disclosure.
In step 301, the CPU 5 receives a map to be displayed.
In step 302, the CPU 5 divides the map into map tiles of 64 × 64 pixels each.
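The tiling in step 302 can be sketched as follows, assuming a hypothetical 256 × 256-pixel map, which yields the 4 × 4 grid of tiles 0001-0016 used in this example:

```python
# Sketch of step 302: split a map into 64 x 64-pixel tiles. A hypothetical
# 256 x 256-pixel map gives a 4 x 4 grid, i.e. tiles 0001-0016 as in the
# example, with tile 0005 as the first tile of the second row.

import math

TILE_SIZE = 64

def tile_grid(map_width, map_height):
    cols = math.ceil(map_width / TILE_SIZE)
    rows = math.ceil(map_height / TILE_SIZE)
    # Tile numbers 0001, 0002, ... are assigned row by row.
    return [
        [f"{r * cols + c + 1:04d}" for c in range(cols)]
        for r in range(rows)
    ]

grid = tile_grid(256, 256)   # grid[1][0] == "0005"
```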
In step 303, the CPU 5 determines the different types of elements in each map tile. For map tile 0005, it determines that the tile contains four elements of three types:
In step 304, the CPU 5 divides the four determined elements into 14 patches to be rendered. Specifically, the Pizza Hut element is divided into 2 patches to be rendered, the Tianfeng Road element into 4, the primary school element into 4, and the Fangyuan Mansion element into 4, for a total of 14 patches to be rendered.
In step 305, the CPU 5 assigns the to-be-rendered patch IDs a001-a014 to the 14 patches to be rendered.
In step 306, CPU 5 places the 14 element patches and the corresponding assigned IDs in a DrawCall command and sends them to GPU 6.
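The batching of steps 304-306 can be sketched as follows; the element names follow the example above, while the DrawCall structure and geometry placeholders are illustrative assumptions:

```python
# Sketch of steps 304-306: all 14 patches and their IDs go into a single
# DrawCall command instead of one call per element; this batching is what
# cuts down draw-call overhead. Element names follow the example; the
# patch geometry and the DrawCall structure are illustrative assumptions.

elements = {
    "Pizza Hut": 2,
    "Tianfeng Road": 4,
    "Primary school": 4,
    "Fangyuan Mansion": 4,
}

patches = [
    (name, i)                      # placeholder for one patch's geometry
    for name, count in elements.items()
    for i in range(count)
]

ids = [f"a{n:03d}" for n in range(1, len(patches) + 1)]  # a001..a014

draw_call = {"patches": patches, "ids": ids}  # one command for all 14
```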
In step 307, the CPU 5 configures the following correspondence between the IDs of the 14 patches to be rendered and the colors and line widths at the primary and secondary display scales:
in step 308, the CPU 5 writes the correspondence to the texture of the GPU 6, as shown in fig. 6.
The texture block array in fig. 6 includes a color texture block array 12 and a line width texture block array 11. For example, the patch to be rendered with ID a001 has, at the secondary display scale, an R color value of 0.2, a G color value of 0.5, a B color value of 0.3, an opacity of 1, and a line width of 0.2 mm. Therefore, in the color texture block array 12 of fig. 6, the texture block with X coordinate "secondary" and Y coordinate a001 has its R color value set to 0.2, its G color value to 0.5, its B color value to 0.3, and its opacity to 1; in the line width texture block array 11 of fig. 6, the R color value of the texture block with X coordinate "secondary" and Y coordinate a001 is set to 0.2.
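Writing the correspondence into the two texture block arrays (step 308) can be sketched as follows; the a001 values match the example above, while the array layout is an illustrative assumption:

```python
# Sketch of step 308: write the correspondence into two texture block
# arrays, one for colors (full RGBA) and one for line widths (R channel
# only). The a001 values match the example; the layout is illustrative.

ids = [f"a{n:03d}" for n in range(1, 15)]   # Y coordinates: a001..a014
scales = ["primary", "secondary"]           # X coordinates

color_tex = {(pid, s): (0.0, 0.0, 0.0, 0.0) for pid in ids for s in scales}
width_tex = {(pid, s): (0.0, 0.0, 0.0, 0.0) for pid in ids for s in scales}

def write_style(pid, scale, rgba, width_mm):
    color_tex[(pid, scale)] = rgba
    _, g, b, a = width_tex[(pid, scale)]
    width_tex[(pid, scale)] = (width_mm, g, b, a)  # width in the R channel

# a001 at the secondary display scale, as in the example:
write_style("a001", "secondary", (0.2, 0.5, 0.3, 1.0), 0.2)
```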
In step 309, after the GPU 6 receives the DrawCall command, the shader 7 takes out the 14 patches to be rendered and their IDs from the command.
In step 310, the shader 7 reads the corresponding relationship between the IDs of the 14 patches to be rendered in the texture and the color and line width at the primary and secondary display scales.
In step 311, the shader 7 determines the color and line width at the display scale selected by the user by referring to the correspondence in the texture, according to the IDs of the 14 patches to be rendered. For example, for the patch to be rendered with ID a001, if the display scale selected by the user is secondary, the corresponding R color value is 0.2, the G color value is 0.5, the B color value is 0.3, the opacity is 1, and the line width is 0.2 mm according to the above correspondence.
In step 312, the shader 7 sequentially renders 14 patches to be rendered using the determined colors and line widths. For example, a patch to be rendered with an ID of a patch to be rendered of a001 is rendered with an R color value of 0.2, a G color value of 0.5, a B color value of 0.3, an opacity of 1, and a line width of 0.2mm.
In step 313, the GPU 6 combines the rendered 14 patches to form the rendered map tile 0005. Tiles 0001-0004 and 0006-0016 are rendered through a similar process.
In step 314, GPU 6 combines the rendered tiles to obtain a rendered map.
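Steps 309-314 can be simulated end to end as follows; actual GPU rendering is replaced by recording each patch's resolved style, and all style values other than the example's a001 entry are placeholders:

```python
# End-to-end sketch of steps 309-314: the shader resolves each patch's
# style from the stored correspondence at the user-selected scale, the
# patches of a tile are "rendered" (recorded here), and the rendered
# tiles are combined into the map. Only a001's style follows the example;
# the other entries are placeholders.

styles = {
    (f"a{n:03d}", "secondary"): ((0.2, 0.5, 0.3, 1.0), 0.2)
    for n in range(1, 15)
}

def render_tile(patch_ids, scale):
    # Steps 309-312: per patch, look up color and line width, then draw.
    return [(pid,) + styles[(pid, scale)] for pid in patch_ids]

def render_map(tiles, scale):
    # Steps 313-314: render every tile, then combine into the final map.
    return {tile: render_tile(pids, scale) for tile, pids in tiles.items()}

rendered = render_map(
    {"0005": [f"a{n:03d}" for n in range(1, 15)]}, "secondary"
)
```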
A map rendering apparatus according to an embodiment of the present disclosure is described below with reference to fig. 10. The map rendering apparatus shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 10, the map rendering apparatus is represented in the form of a general purpose computing device. The components of the map rendering apparatus may include, but are not limited to: at least one processing unit 810, at least one memory unit 820, and a bus 830 that couples various system components including the memory unit 820 and the processing unit 810.
The memory unit stores program code that can be executed by the processing unit 810 to cause the processing unit 810 to perform the steps according to various exemplary embodiments of the present invention described in the description part of the above exemplary methods of the present specification. For example, the processing unit 810 may perform various steps as shown in fig. 3-5.
The memory unit 820 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, and may further include a read only memory unit (ROM) 8203.
A graphics processing unit (GPU) 890 is a unit dedicated to graphics processing. It differs from the processing unit 810 in that the processing unit 810 processes data, for example preparing the data before map rendering, while the graphics processing unit processes graphics, for example the rendering itself.
The map rendering apparatus may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the map rendering apparatus, and/or with any devices (e.g., router, modem, etc.) that enable the map rendering apparatus to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the map rendering device may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the map rendering device via the bus 830. It should be appreciated that although not shown in the figures, the map rendering apparatus may be implemented using other hardware and/or software modules, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
As shown in fig. 8, according to an embodiment of the present disclosure, there is provided a map rendering apparatus on a CPU 5 side including:
a sending unit 610, configured to place a to-be-rendered tile of the map and the to-be-rendered tile identifier into a rendering call command, and send the rendering call command to the GPU;
a storing unit 620, configured to store a correspondence between a to-be-rendered patch identifier and a rendering style, so that the GPU renders the to-be-rendered patch with reference to the correspondence based on the to-be-rendered patch identifier.
In one embodiment, the sending unit 610 is further configured to place all patches to be rendered that are divided from the map and the corresponding to-be-rendered patch identifiers into the same rendering call command, and to send that command to the GPU.
In one embodiment, the correspondence between the to-be-rendered patch identifier and the rendering style includes: the correspondence between the to-be-rendered patch identifier and the rendering style at each display scale.
In one embodiment, the saving unit 620 is further configured to:
and storing the corresponding relation into the texture of the GPU.
In one embodiment, the texture comprises an array of texture blocks. The first arrangement direction of the texture block array represents different patch identifications to be rendered, and the second arrangement direction represents different display scales.
In one embodiment, the rendering style includes a color and a line width; the texture block array includes a color texture block array and a line width texture block array. The red (R), green (G), and blue (B) color values and the opacity (α) of a texture block in the color texture block array represent the R, G, and B color values and the opacity of the element patch of the corresponding to-be-rendered patch identifier at the corresponding display scale. One of the R, G, and B color values and the opacity (α) of a texture block in the line width texture block array represents the line width of the element patch of the corresponding to-be-rendered patch identifier at the corresponding display scale.
As shown in fig. 9, according to an embodiment of the present disclosure, there is also provided a map rendering apparatus on a GPU side, including:
a receiving unit 710, configured to receive a rendering call command, where the rendering call command includes a to-be-rendered tile of a map and a to-be-rendered tile identifier;
a determining unit 720, configured to determine a rendering style by referring to a stored correspondence between a to-be-rendered patch identifier and the rendering style based on the to-be-rendered patch identifier;
a rendering unit 730, configured to render the to-be-rendered tile by using the rendering style.
In one embodiment, the rendering call command includes all patches to be rendered that are divided from the map and the corresponding to-be-rendered patch identifiers.
In one embodiment, the correspondence between the to-be-rendered patch identifier and the rendering style includes: the correspondence between the to-be-rendered patch identifier and the rendering style at each display scale. The determining unit 720 is further configured to determine the rendering style based on the to-be-rendered patch identifier and the display scale, by referring to the correspondence between the to-be-rendered patch identifier and the rendering style at each display scale.
In one embodiment, the correspondence between the identification of the to-be-rendered surface patch and the rendering style at each display scale is stored in the texture.
In one embodiment, the texture comprises an array of texture blocks. The first arrangement direction of the texture block array represents different patch identifications to be rendered, and the second arrangement direction represents different display scales.
In one embodiment, the rendering style includes a color and a line width. The texture block array comprises a color texture block array and a line width texture block array.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer program medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present disclosure, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (15)
1. A map rendering method, comprising:
placing a to-be-rendered patch of a map and an identifier of the to-be-rendered patch into a rendering call command, and sending the rendering call command to a graphics processing unit (GPU), wherein the to-be-rendered patch is a patch that is divided from elements of the map and serves as the minimum unit of rendering, and the to-be-rendered patch identifier is an identifier allocated to the to-be-rendered patch that distinguishes it from other to-be-rendered patches;
and storing the corresponding relation between the identification of the to-be-rendered surface patch and the rendering style into the GPU, so that the GPU can render the to-be-rendered surface patch by referring to the corresponding relation based on the identification of the to-be-rendered surface patch.
2. The method of claim 1, wherein placing a tile to be rendered of the map along with a tile to be rendered identifier in a render call command comprises:
and putting all the to-be-rendered patches separated from the map and the corresponding to-be-rendered patch identifications into the same rendering calling command.
3. The method of claim 1, wherein the correspondence between the patch identifier to be rendered and the rendering style comprises: and the corresponding relation between the mark of the surface patch to be rendered and the rendering style under each display scale.
4. The method according to claim 3, wherein the storing the correspondence between the identifier of the patch to be rendered and the rendering style specifically includes:
and storing the corresponding relation into the texture of the GPU.
5. The method of claim 4, wherein the texture comprises an array of texture blocks, a first arrangement direction of the array of texture blocks represents different patch identifications to be rendered, and a second arrangement direction represents different display scales.
6. The method of claim 5, wherein the rendering style comprises a color and a line width;
the texture block array comprises a color texture block array and a line width texture block array, wherein
The red R color value, the green G color value, the blue B color value and the opacity alpha of the texture block in the color texture block array represent the red R color value, the green G color value, the blue B color value and the opacity alpha of the element patch of the corresponding patch identification to be rendered under the corresponding display scale,
and one of the red R color value, the green G color value, the blue B color value, and the opacity α of the texture block in the line width texture block array represents the line width of the element patch of the corresponding to-be-rendered patch identifier at the corresponding display scale.
7. A map rendering method, comprising:
receiving a rendering calling command, wherein the rendering calling command comprises a to-be-rendered patch of a map and a to-be-rendered patch identifier, the to-be-rendered patch is a patch which is divided by elements of the map and is used as a minimum unit for rendering, and the to-be-rendered patch identifier is an identifier which is allocated for the to-be-rendered patch and is used for distinguishing the to-be-rendered patch from other to-be-rendered patches;
determining a rendering style by referring to the corresponding relation between the mark of the to-be-rendered surface patch and the rendering style stored in the GPU based on the mark of the to-be-rendered surface patch;
and rendering the to-be-rendered patch by using the rendering style.
8. The method of claim 7, wherein the rendering call command comprises all patches to be rendered that are divided from the map and the corresponding to-be-rendered patch identifiers.
9. The method of claim 8, wherein the correspondence between the patch identifier to be rendered and the rendering style comprises: the corresponding relation between the mark of the surface patch to be rendered and the rendering style under each display scale, and
based on the identifier of the to-be-rendered patch, referring to the stored correspondence between the identifier of the to-be-rendered patch and the rendering style, determining the rendering style, specifically including: and determining the rendering style by referring to the corresponding relation between the mark of the surface patch to be rendered and the rendering style under each display scale based on the mark of the surface patch to be rendered and the display scale.
10. The method according to claim 9, wherein the correspondence between the identification of the patch to be rendered and the rendering style at each display scale is stored in a texture.
11. The method of claim 10, wherein the texture comprises an array of texture blocks, a first arrangement direction of the array of texture blocks represents different patch identifications to be rendered, and a second arrangement direction represents different display scales.
12. A map rendering apparatus, comprising:
a sending unit, configured to place a to-be-rendered patch of a map and a to-be-rendered patch identifier into a rendering call command and send the rendering call command to a graphics processing unit (GPU), where the to-be-rendered patch is a patch that is divided from elements of the map and serves as the minimum unit of rendering, and the to-be-rendered patch identifier is an identifier allocated to the to-be-rendered patch that distinguishes it from other to-be-rendered patches;
and the storage unit is configured to store the corresponding relation between the identification of the to-be-rendered patch and the rendering style into the GPU, so that the GPU refers to the corresponding relation based on the identification of the to-be-rendered patch to render the to-be-rendered patch.
13. A map rendering apparatus, characterized by comprising:
a receiving unit configured to receive a rendering call command, where the rendering call command includes a to-be-rendered tile of a map and a to-be-rendered tile identifier, the to-be-rendered tile is a tile divided by elements of the map and is a smallest unit of rendering, and the to-be-rendered tile identifier is an identifier allocated to the to-be-rendered tile and used for distinguishing the to-be-rendered tile from other to-be-rendered tiles;
a determining unit configured to determine a rendering style by referring to a correspondence between a patch identifier to be rendered and a rendering style stored in a GPU based on the patch identifier to be rendered;
and the rendering unit is configured to render the surface patch to be rendered by using the rendering style.
14. A map rendering apparatus, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored by the memory to perform the method of any of claims 1-11.
15. A computer storage medium having computer readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform the method of any of claims 1-11.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810246925.2A CN110298780B (en) | 2018-03-23 | 2018-03-23 | Map rendering method, map rendering device and computer storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110298780A CN110298780A (en) | 2019-10-01 |
| CN110298780B true CN110298780B (en) | 2022-10-28 |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111080744B (en) * | 2019-12-26 | 2023-05-09 | 南京师范大学 | Vector map line symbol half-open pointed arrow drawing method considering line width consistency |
| CN111028352B (en) * | 2019-12-26 | 2023-05-23 | 南京师范大学 | Drawing method of vector map line symbol open pointed arrow taking into account line width consistency |
| CN111028351B (en) * | 2019-12-26 | 2023-05-09 | 南京师范大学 | Vector map line symbol half-sharp angle arrow drawing method considering line width consistency |
| CN111145295B (en) * | 2019-12-26 | 2023-05-09 | 南京师范大学 | Vector map line symbol half-dovetail arrow drawing method considering line width consistency |
| CN111145303B (en) * | 2019-12-26 | 2023-04-25 | 南京师范大学 | A method of drawing vector map line symbols with pointed arrows in consideration of line width consistency |
| CN113205580A (en) * | 2021-05-10 | 2021-08-03 | 万翼科技有限公司 | Primitive rendering method, device and equipment and storage medium |
| CN116152411A (en) * | 2022-12-16 | 2023-05-23 | 深圳市博思云创科技有限公司 | Graphics rendering method, graphics rendering device, electronic equipment and storage medium |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105427236A (en) * | 2015-12-18 | 2016-03-23 | 魅族科技(中国)有限公司 | Method and device for image rendering |
| CN106504185A (en) * | 2016-10-26 | 2017-03-15 | 腾讯科技(深圳)有限公司 | One kind renders optimization method and device |
| CN107665205A (en) * | 2016-07-27 | 2018-02-06 | 高德信息技术有限公司 | A kind of map rendering intent and device |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9183651B2 (en) * | 2010-10-06 | 2015-11-10 | Microsoft Technology Licensing, Llc | Target independent rasterization |
Non-Patent Citations (1)
| Title |
|---|
| Wang Minmin. Terrain rendering based on feature-adaptive subdivision and texture mapping. Modern Computer (Professional Edition), 2017, No. 06, pp. 38-41. * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |