CN112473127B - Method, system, electronic device and storage medium for rendering large-scale identical objects - Google Patents
- Publication number
- CN112473127B (application CN202011326125.5A)
- Authority
- CN
- China
- Prior art keywords
- rendered
- model
- matrix
- visible
- vertex
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/53—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
- A63F2300/538—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Abstract
The application relates to a method, a system, an electronic device, and a storage medium for rendering large-scale identical objects. The total number of object types in a scene to be rendered is counted, and for each object type a new model is synthesized from a preset number of identical objects. The scene is divided with a quadtree, and the object parameter information contained in each quadtree node is stored as a first matrix and written to a file. Visibility detection is performed on the information to be rendered; visible quadtree nodes are obtained from the detection result, and visible object types are obtained from the visible nodes, the first matrix group corresponding to a visible object type being a second matrix group. The new model to be rendered is obtained as the model to be rendered, its second matrix group is written into a vertex shader, and one drawing instruction is generated for the model to be rendered. This solves the problem of excessive DrawCalls when an Instance technique is not adopted and improves the rendering efficiency for large-scale identical objects.
Description
Technical Field
The present application relates to the field of information processing technologies, and in particular to a method, a system, an electronic device, and a storage medium for rendering large-scale identical objects.
Background
In internet games, game scenes bring different experiences to users; in particular, varied vegetation on the game terrain makes a game more lively. However, without an instancing (Instance) technique, rendering a large amount of vegetation, for example a forest scene, requires drawing one tree at a time, so the number of DrawCalls is excessive and rendering efficiency is low.
At present, no effective solution has been proposed in the related art for the problems of excessive DrawCalls and low rendering efficiency when an Instance technique is not adopted.
Disclosure of Invention
The embodiments of the present application provide a method, a system, an electronic device, and a storage medium for rendering large-scale identical objects, to at least solve the problems in the related art of excessive DrawCalls and low rendering efficiency when an Instance technique is not adopted.
In a first aspect, an embodiment of the present application provides a method for large-scale identical object rendering, where the method includes:
obtaining a scene to be rendered, counting the total number of object types in the scene to be rendered, and synthesizing, for each object type, a new model from a preset number of objects of that type;
dividing the scene to be rendered with a quadtree to obtain information to be rendered, storing the object parameter information contained in each quadtree node as a first matrix, and storing the first matrix group into a file, wherein the new model comprises the preset number of first matrices, which form a first matrix group;
Performing visibility detection on the information to be rendered, obtaining visible quadtree nodes according to the visibility detection result, and obtaining visible object types according to the visible quadtree nodes, wherein the first matrix group corresponding to the visible object types is a second matrix group;
obtaining the new model to be rendered as the model to be rendered, writing the second matrix group corresponding to the model to be rendered into a vertex shader, and generating a drawing instruction for the model to be rendered.
In some embodiments, after the new model to be rendered is obtained as the model to be rendered, the method further includes: judging whether the number of objects to be rendered in the model to be rendered is smaller than the preset number; if so, setting the spare second matrices of the second matrix group to zero, the zeroed second matrix group being a third matrix group; writing the third matrix group into the vertex shader; and generating a drawing instruction for the model to be rendered.
In some embodiments, after generating a drawing instruction for the model to be rendered, the method includes obtaining model vertex stream elements of the model to be rendered, indexing the vertex shader with the model vertex stream elements to obtain the second matrix set, multiplying vertices of the model to be rendered and the corresponding second matrix to obtain an output vertex matrix, and rendering the model to be rendered according to the output vertex matrix.
In some embodiments, after generating a drawing instruction for the model to be rendered, the method includes determining whether the number of times of generating the drawing instruction is smaller than the total number of visible object types, and if so, jumping to a step of acquiring the new model to be rendered as the model to be rendered.
In some embodiments, before writing the second matrix set corresponding to the model to be rendered into the vertex shader, the method includes traversing a deletion buffer, judging whether the second matrix is in the deletion buffer, and if so, setting the second matrix as a zero matrix.
In some embodiments, after generating a drawing instruction for the model to be rendered, the method includes traversing an add buffer, writing the second matrix in the add buffer into the vertex shader, and generating a drawing instruction for the new model corresponding to the second matrix.
In some embodiments, the object types include vegetation types, and the object parameter information includes the type, quantity, position, and orientation of the object.
In a second aspect, an embodiment of the present application provides a system for large-scale identical object rendering, where the system includes a synthesis module, a division module, a detection module, and a drawing instruction module,
The synthesizing module is used for acquiring a scene to be rendered, counting the total number of object types in the scene to be rendered, and synthesizing a new model for the objects by using the same type of objects with the preset number;
The dividing module is configured to divide the scene to be rendered with a quadtree to obtain information to be rendered, store the object parameter information contained in each quadtree node as a first matrix, and store the first matrix group into a file, where the new model comprises the preset number of first matrices, which form a first matrix group;
The detection module is used for carrying out visibility detection on the information to be rendered, obtaining visible quadtree nodes according to the visibility detection result, obtaining visible object types according to the visible quadtree nodes, wherein the first matrix group corresponding to the visible object types is a second matrix group;
The drawing instruction module is used for obtaining the new model to be rendered as a model to be rendered, writing the second matrix group corresponding to the model to be rendered into a vertex shader, and generating a drawing instruction for the model to be rendered.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements a method for large-scale identical object rendering as described in the first aspect when the processor executes the computer program.
In a fourth aspect, embodiments of the present application provide a storage medium having stored thereon a computer program which when executed by a processor implements a method of large scale identical object rendering as described in the first aspect above.
Compared with the related art, the method for rendering large-scale identical objects provided by the embodiments of the present application obtains a scene to be rendered, counts the total number of object types in the scene, and synthesizes, for each object type, a new model from a preset number of identical objects. The scene to be rendered is divided with a quadtree to obtain information to be rendered; the object parameter information contained in each quadtree node is stored as a first matrix, and the first matrix group is stored in a file, the new model comprising the preset number of first matrices. Visibility detection is performed on the information to be rendered, visible quadtree nodes are obtained from the detection result, and visible object types are obtained from the visible nodes, the first matrix group corresponding to a visible object type being a second matrix group. The new model to be rendered is obtained as the model to be rendered, its second matrix group is written into a vertex shader, and one drawing instruction is generated for the model to be rendered. This solves the problem of excessive DrawCalls when an Instance technique is not adopted and improves the rendering efficiency for large-scale identical objects.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a method of large-scale identical object rendering according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a method of large-scale identical object rendering according to an embodiment of the present application;
FIG. 3 is a schematic diagram of another method of large-scale identical object rendering according to an embodiment of the present application;
FIG. 4 is a block diagram of a system for large-scale identical object rendering according to an embodiment of the present application;
Fig. 5 is a schematic view of an internal structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples in order to make the objects, technical solutions, and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person of ordinary skill in the art based on the embodiments provided by the present application without inventive effort, are intended to fall within the scope of the present application. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprises," "comprising," "includes," "including," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "and/or" describes the association relationship of the association object, and indicates that three relationships may exist, for example, "a and/or B" may indicate that a exists alone, a and B exist simultaneously, and B exists alone. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The present embodiment provides a method for large-scale identical object rendering, and fig. 1 is a flowchart of a method for large-scale identical object rendering according to an embodiment of the present application, as shown in fig. 1, where the flowchart includes the following steps:
Step S101, obtaining a scene to be rendered, counting the total number of object types in the scene to be rendered, and synthesizing, for each object type, a new model from a preset number of objects of that type. In this embodiment, the preset number is the number of constant-buffer slots of the vertex shader. For example, if the number of constant-buffer slots is 30, each object type uses 30 objects of the same type to synthesize a new model; the number of new models thus equals the total number of object types, and each new model comprises 30 objects of the same type;
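As a rough sketch of how such a "new model" could be assembled (illustrative only; `synthesize_new_model`, `PRESET_COUNT`, and the tuple vertex representation are assumptions, not the patent's implementation):

```python
# Sketch: build a "new model" by concatenating PRESET_COUNT copies of one
# object's vertex list. Each copy keeps its own slot index so a vertex shader
# could later pick the right per-object matrix for it.
PRESET_COUNT = 30  # e.g. the number of constant-buffer slots in the vertex shader

def synthesize_new_model(object_vertices, preset_count=PRESET_COUNT):
    """Return (vertices, slot_indices) for the merged model."""
    merged_vertices = []
    slot_indices = []
    for slot in range(preset_count):
        for v in object_vertices:
            merged_vertices.append(v)   # duplicate the geometry
            slot_indices.append(slot)   # per-vertex index into the matrix group
    return merged_vertices, slot_indices

# 3 copies of a 2-vertex object: 6 vertices, slot pattern 0,0,1,1,2,2
verts, slots = synthesize_new_model([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], preset_count=3)
```

One merged model per object type lets a single draw command cover the whole batch.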
Step S102, dividing the scene to be rendered with a quadtree to obtain information to be rendered. In this embodiment, the quadtree (also called a quad tree) is a tree-like data structure in which each node has four sub-blocks. Quadtrees are commonly applied to the analysis and classification of two-dimensional spatial data: the data range, which can be square, rectangular, or any other shape, is divided into four quadrants. The scene to be rendered is processed with the quadtree, the object parameter information contained in each quadtree node is calculated, and each piece of object parameter information is stored as a first matrix, so that the new model includes the preset number of first matrices, the number of first matrices equals the total number of objects, and the first matrix groups are stored in the file.
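A minimal quadtree of the kind described above might be sketched as follows (hypothetical names; a real engine would store 4x4 transform matrices rather than placeholder values):

```python
# Sketch: a quadtree over a square scene; each leaf stores the parameter
# matrices (type, position, orientation, ...) of the objects inside it.
class QuadNode:
    def __init__(self, x, y, size, depth, max_depth=3):
        self.x, self.y, self.size = x, y, size
        self.children = []
        self.matrices = []  # "first matrices" of objects in this node
        if depth < max_depth:
            half = size / 2
            self.children = [
                QuadNode(x,        y,        half, depth + 1, max_depth),
                QuadNode(x + half, y,        half, depth + 1, max_depth),
                QuadNode(x,        y + half, half, depth + 1, max_depth),
                QuadNode(x + half, y + half, half, depth + 1, max_depth),
            ]

    def insert(self, px, py, matrix):
        # Push the matrix down to the child quadrant that contains the point.
        for child in self.children:
            if child.x <= px < child.x + child.size and child.y <= py < child.y + child.size:
                child.insert(px, py, matrix)
                return
        self.matrices.append(matrix)  # a leaf (or a boundary point): store here
```

The per-node matrix lists are what would then be serialized to the file.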
Step S103, in this embodiment, the visibility detection is view frustum culling, a step before rendering that eliminates the parts that do not need to be drawn. After frustum culling, the visible quadtree nodes are obtained, and the object types contained in the quadtree nodes visible in the field of view are called visible object types. The first matrix group corresponding to a visible object type is called a second matrix group; that is, there may be many object types, but only the object types visible in the field of view need to be rendered, and the first matrix group of each visible object type is a second matrix group;
Step S104, in this embodiment, the new model to be rendered among the multiple new models is called the model to be rendered. The second matrix group corresponding to the model to be rendered is written into the constant buffer of the vertex shader, and a draw command is then sent to the GPU, that is, one drawing instruction is generated for the model to be rendered. One drawing instruction draws the preset number of objects of the same kind: if the preset number is 30, generating one drawing instruction for the model to be rendered draws 30 objects of the same kind at once, reducing the number of drawing commands (DrawCalls).
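The per-frame path of step S104 can be mimicked on the CPU roughly as follows (a sketch under assumed names; `gpu_draw` stands in for the actual GPU draw command):

```python
# Sketch of the per-frame draw path: each visible object type gets its matrix
# group written to the constant buffer, then exactly one simulated draw call.
def render_visible(visible_types, matrix_groups, constant_buffer, gpu_draw):
    draw_calls = 0
    for obj_type in visible_types:
        constant_buffer[:] = matrix_groups[obj_type]  # write the second matrix group
        gpu_draw(obj_type)                            # one DrawCall covers the whole batch
        draw_calls += 1
    return draw_calls
```

The DrawCall count equals the number of visible object types, not the number of objects.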
Through steps S101 to S104, in contrast to the related art where DrawCalls are excessive and rendering efficiency is low when the Instance technique is not adopted, the system synthesizes a new model from a preset number of objects of the same kind, divides the scene to be rendered with a quadtree, obtains the object parameter information of each quadtree node, and stores the matrices in a file. After frustum culling, the kinds of new models to be rendered are obtained as the models to be rendered; the matrices corresponding to the preset number of same-kind objects contained in a model to be rendered are written into the constant buffer of the vertex shader, and one drawing instruction is generated for the model, rendering the preset number of objects with a single drawing instruction. This reduces the DrawCall count and improves rendering efficiency, and also avoids the large memory footprint of engine-side mesh combining otherwise used to reduce DrawCalls.
In some embodiments, after the new model to be rendered is obtained as the model to be rendered, it is determined whether the number of objects to be rendered in the model is smaller than the preset number. If so, the spare second matrices of the second matrix group are set to zero; the zeroed second matrix group is a third matrix group, which is written into the vertex shader, and a drawing instruction is generated for the model to be rendered. In this embodiment, the model to be rendered comprises the preset number of objects of the same kind, which can all be rendered by one drawing instruction. If the number of objects to be rendered is smaller than the preset number, the spare second matrices must be zeroed. For example, if the preset number is 30 and 28 objects are to be rendered, 2 second matrices are set to zero matrices, so the third matrix group contains 28 second matrices and 2 zero matrices; it is then written into the vertex shader, and generating one drawing instruction for the model draws the 28 objects at once. This reduces the DrawCall count while dynamically controlling how many objects one DrawCall draws.
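The zero-padding of spare matrices can be sketched like this (illustrative; `IDENTITY` and `ZERO` are string stand-ins for real 4x4 matrices):

```python
IDENTITY = "I"   # stand-in for a real per-object transform matrix
ZERO = "0"       # zero matrix: transformed vertices collapse, so the object is invisible

def make_third_group(second_group, preset_count):
    """Pad the visible matrices with zero matrices up to the preset count."""
    spare = preset_count - len(second_group)
    return list(second_group) + [ZERO] * spare

# 28 real matrices + 2 zero matrices: still one DrawCall for the whole batch
group = make_third_group([IDENTITY] * 28, 30)
```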
In some embodiments, after a drawing instruction is generated for the model to be rendered, a model vertex stream element of the model to be rendered is obtained, the vertex shader is indexed by the model vertex stream element to obtain the second matrix group, the vertex of the model to be rendered and the corresponding second matrix are multiplied to obtain an output vertex matrix, and the model to be rendered is rendered according to the output vertex matrix.
For example, if an object consists of 1000 vertices, the model vertex stream element (bin index) of the model to be rendered increments by one for each object's worth of vertices. If the preset number is 3, the model to be rendered is synthesized from 3 objects of the same kind: the bin index of vertices 0 to 999 is 0, of vertices 1000 to 1999 is 1, and of vertices 2000 to 2999 is 2. Indexing the constant buffer of the vertex shader with the bin index yields the second matrix group written into that buffer; multiplying each vertex of the model to be rendered by its corresponding second matrix produces the output vertex matrix, from which the model is rendered. If the second matrix is zero, the output vertex matrix is zero, which indicates that the object does not exist, and the object is not displayed on screen.
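The index arithmetic in this example can be simulated outside the shader (a toy sketch; `matrix_slot` and `transform` are hypothetical helpers, and `transform` uses a scalar stand-in for a full matrix multiply):

```python
# Sketch: each object contributes VERTS_PER_OBJECT vertices, so a vertex's
# matrix slot is its global index divided by that count. A zero "matrix"
# yields a zero output vertex, i.e. the object is not shown on screen.
VERTS_PER_OBJECT = 1000

def matrix_slot(vertex_index, verts_per_object=VERTS_PER_OBJECT):
    return vertex_index // verts_per_object

def transform(vertex, matrix_scale):
    # toy stand-in for vertex * matrix: scale every component
    return tuple(c * matrix_scale for c in vertex)
```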
In some embodiments, after generating a drawing instruction for the model to be rendered, the method includes determining whether the number of times of generating the drawing instruction is smaller than the total number of visible object types, and if so, jumping to a step of obtaining a new model to be rendered as the model to be rendered. FIG. 2 is a schematic diagram of a method for large-scale identical object rendering according to an embodiment of the present application, as shown in FIG. 2, the process includes the following steps:
step S104, a model to be rendered is obtained, a second matrix group of the model to be rendered is written into a vertex shader, and a drawing instruction is generated for the model to be rendered;
Step S201, judging whether the number of times the drawing instruction has been generated is smaller than the total number of visible object types; if so, some objects visible in the field of view have not yet been rendered, so the process jumps to the step of obtaining a new model to be rendered as the model to be rendered and continues with the next model to be rendered;
In step S202, when the number of times the drawing instruction has been generated equals the total number of visible object types, all objects visible in the field of view have been rendered, and the loop ends. For example, if there are 5 visible object types and the drawing instruction has been generated once, the process jumps back to obtain a second model to be rendered, writes the second matrix group corresponding to it into the vertex shader, and renders the second model to be rendered.
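The loop of steps S201 and S202 reduces to a simple counter check (illustrative sketch, hypothetical names):

```python
# Sketch: keep issuing draws while the draw count is below the number of
# visible object types (S201); stop when they are equal (S202).
def render_loop(models_to_render, render_one):
    total = len(models_to_render)
    draws = 0
    while draws < total:              # S201: more visible types remain
        render_one(models_to_render[draws])
        draws += 1
    return draws                      # S202: draws == total, loop ends
```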
In some embodiments, after writing the second matrix set corresponding to the model to be rendered into the vertex shader, the method includes traversing a deletion buffer, determining whether the second matrix is in the deletion buffer, if so, setting the second matrix as a zero matrix, and generating a drawing instruction for the model to be rendered. In this embodiment, fig. 3 is a schematic diagram of another method for large-scale rendering of the same object according to an embodiment of the application, as shown in fig. 3, the process includes the following steps:
Step S301, after writing a second matrix group corresponding to a model to be rendered into a vertex shader, if an object is to be deleted, putting matrix data of the object into a deletion buffer;
Step S302, traversing the deletion buffer area, judging whether the second matrix is in the deletion buffer area, if so, setting the second matrix as a zero matrix, namely judging whether the matrix data of the deletion buffer area is the same as the second matrix in the vertex shader, if so, setting the second matrix as the zero matrix;
Step S303, after the second matrix corresponding to the object to be deleted is set to a zero matrix, one drawing instruction is generated for the model to be rendered, dynamically deleting the object. This solves the problem that objects cannot be dynamically deleted when artists manually merge meshes to reduce DrawCalls.
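The deletion-buffer pass of steps S301 to S303 can be sketched as follows (hypothetical names; matrices are string stand-ins):

```python
ZERO = "0"  # zero matrix: the object it replaces is no longer drawn

def apply_deletions(second_group, delete_buffer):
    """Zero out any matrix that appears in the deletion buffer (S302)."""
    return [ZERO if m in delete_buffer else m for m in second_group]
```

Issuing one more draw with the zeroed group then hides the deleted objects without rebuilding the batch.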
In some embodiments, after a drawing instruction is generated for the model to be rendered, an add buffer is traversed, the second matrices in the add buffer are written into the vertex shader, and a drawing instruction is generated for the objects corresponding to those matrices. In this embodiment, when one or more objects are to be added, their matrix data is placed in the add buffer. The add buffer is traversed and its matrices are written into the constant buffer of the vertex shader; if the preset number is 30 and the add buffer holds fewer than 30 matrices, the remainder is zero-filled, and a draw command is sent to the GPU, dynamically adding the objects. If the add buffer holds more than 30 matrices, the objects corresponding to the first 30 matrices are rendered first; after that rendering completes, the remaining matrices are written into the constant buffer and another draw command is sent to the GPU. This solves the problem that objects cannot be dynamically added when artists manually merge meshes to reduce DrawCalls.
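The batching behavior described here, splitting the add buffer into preset-sized zero-padded groups, might look like this (an assumption-laden sketch, not the patent's code):

```python
ZERO = "0"    # zero matrix used as padding
PRESET = 30   # constant-buffer capacity per draw

def batches_from_add_buffer(add_buffer, preset=PRESET):
    """Split newly added matrices into preset-sized, zero-padded batches."""
    batches = []
    for start in range(0, len(add_buffer), preset):
        batch = add_buffer[start:start + preset]
        batch += [ZERO] * (preset - len(batch))  # pad the final partial batch
        batches.append(batch)
    return batches
```

Each batch then gets one constant-buffer write and one draw command.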
In some embodiments, the object type includes vegetation type and the object parameter information includes type, number, location and orientation information of the object. In this embodiment, the object types include vegetation types, such as forests and grasslands, and the object parameter information includes the types, the numbers, the positions and the directions of the objects, and the object parameter information is stored as a matrix, and the position and the directions of the objects can be known by knowing the matrix of the objects.
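A parameter matrix built from position and orientation, as described above, could be sketched as a translation-plus-yaw transform (illustrative; the row-major convention and yaw-only rotation are assumptions):

```python
import math

def object_matrix(x, y, z, yaw_degrees):
    """Toy 4x4 row-major matrix: rotate about the vertical axis, then translate."""
    c = math.cos(math.radians(yaw_degrees))
    s = math.sin(math.radians(yaw_degrees))
    return [
        [  c, 0.0,   s, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [ -s, 0.0,   c, 0.0],
        [  x,   y,   z, 1.0],  # translation row holds the object's position
    ]
```

Knowing an object's matrix thus recovers both its position and its facing direction.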
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment also provides a system for rendering large-scale identical objects, which is used for implementing the above embodiment and the preferred implementation, and is not described in detail. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of a system for large-scale identical object rendering according to an embodiment of the present application, which includes a synthesizing module 41, a dividing module 42, a detecting module 43 and a drawing instruction module 44 as shown in fig. 4,
The synthesizing module 41 is configured to acquire a scene to be rendered, count the total number of object types in the scene, and synthesize, for each object type, a new model from a preset number of objects of that type. The dividing module 42 is configured to divide the scene to be rendered with a quadtree to obtain information to be rendered, store the object parameter information contained in each quadtree node as a first matrix, and store the first matrices in a file. The detection module 43 is configured to perform visibility detection on the information to be rendered, obtain visible quadtree nodes from the detection result, and obtain visible object types from the visible nodes; the first matrices corresponding to a visible object type form a second matrix group. The drawing instruction module 44 is configured to obtain the new model to be rendered, write the second matrix group corresponding to the model to be rendered into a vertex shader, and generate one drawing instruction for the model. In this embodiment, the synthesizing module 41 synthesizes a new model from the preset number of identical objects; the dividing module 42 divides the scene with a quadtree and stores the first matrices contained in the quadtree nodes in a file; the detection module 43 performs frustum culling to obtain the quadtree nodes visible in the field of view; and the drawing instruction module 44 determines the new models to be rendered, writes the preset number of corresponding matrices into the vertex shader, and generates one drawing instruction per new model, so that the preset number of objects can be drawn at once. This solves the problems of excessive DrawCalls and low rendering efficiency, and improves rendering efficiency.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the modules may be located in the same processor, or may be located in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where both the transmission device and the input/output device are connected to the processor.
Optionally, in the present embodiment, the processor may be configured to execute the following steps by means of a computer program:
S1, obtaining a scene to be rendered, counting the total number of object types in the scene to be rendered, and synthesizing a new model for each object type from a preset number of objects of the same type.
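The batching idea in step S1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: a mesh is reduced to a vertex list, and each duplicated vertex carries an instance index (playing the role of the "model vertex stream element" that later selects a matrix in the vertex shader). All names here are illustrative.

```python
PRESET_COUNT = 4  # the "preset number" of identical objects merged per batch

def build_batched_model(base_vertices, preset_count=PRESET_COUNT):
    """Duplicate the base mesh preset_count times, tagging each copy
    of a vertex with the index of the instance it belongs to."""
    batched = []
    for instance in range(preset_count):
        for pos in base_vertices:
            batched.append({"pos": pos, "instance": instance})
    return batched

# A unit quad stands in for one object's mesh (positions only, for brevity).
quad = [(0, 0), (1, 0), (1, 1), (0, 1)]
model = build_batched_model(quad)
print(len(model))            # 16 vertices: 4 copies of the 4-vertex quad
print(model[5]["instance"])  # vertex 5 belongs to instance 1
```

Because the merged model is a single mesh, all preset_count instances can later be submitted with one draw call, with per-instance placement deferred to the vertex shader.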
S2, dividing the scene to be rendered with a quadtree to obtain information to be rendered, storing the object parameter information contained in each quadtree node into a first matrix, and storing the first matrices into a file, wherein each new model corresponds to the preset number of first matrices, which form a first matrix group.
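The quadtree partition of step S2 can be sketched as below. This is a simplified, assumed structure: each object's parameters (represented here by a plain string standing in for the "first matrix") are stored in the leaf node whose square region contains the object's position.

```python
class QuadNode:
    """A fixed-depth quadtree over a square 2D scene region."""

    def __init__(self, x, y, size, depth, max_depth=2):
        self.x, self.y, self.size = x, y, size
        self.matrices = []          # "first matrices" of contained objects
        self.children = []
        if depth < max_depth:
            half = size / 2
            self.children = [
                QuadNode(x,        y,        half, depth + 1, max_depth),
                QuadNode(x + half, y,        half, depth + 1, max_depth),
                QuadNode(x,        y + half, half, depth + 1, max_depth),
                QuadNode(x + half, y + half, half, depth + 1, max_depth),
            ]

    def insert(self, matrix, px, py):
        """Push the object's parameter matrix down to the containing leaf."""
        for child in self.children:
            if (child.x <= px < child.x + child.size
                    and child.y <= py < child.y + child.size):
                child.insert(matrix, px, py)
                return
        self.matrices.append(matrix)  # reached a leaf node

root = QuadNode(0, 0, 100, depth=0)
root.insert("tree@(10,10)", 10, 10)
root.insert("tree@(80,80)", 80, 80)
# The two objects land in different leaf nodes of the 100x100 scene,
# so later visibility detection can accept or reject them per node.
```

Grouping objects by spatial node is what makes the later per-node visibility test cheap: one bounds check covers every matrix stored in the node.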
S3, performing visibility detection on the information to be rendered, obtaining visible quadtree nodes from the visibility detection result, and obtaining visible object types from the visible quadtree nodes, wherein the first matrix group corresponding to a visible object type is a second matrix group.
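Step S3 can be sketched as below. A simple 2D axis-aligned overlap test stands in for real view frustum culling; the matrix groups of the nodes that pass the test are collected into the "second matrix group". All names are illustrative.

```python
def node_visible(node_min, node_max, view_min, view_max):
    """Per-axis overlap test between a node's bounds and the view region
    (2D here for brevity; a real frustum test uses the camera planes)."""
    return all(node_min[i] <= view_max[i] and node_max[i] >= view_min[i]
               for i in range(2))

nodes = [
    {"bounds": ((0, 0), (50, 50)),     "matrices": ["m0", "m1"]},
    {"bounds": ((60, 60), (100, 100)), "matrices": ["m2"]},
]
view = ((0, 0), (40, 40))  # the camera's visible region

second_matrix_group = []
for node in nodes:
    lo, hi = node["bounds"]
    if node_visible(lo, hi, view[0], view[1]):
        second_matrix_group.extend(node["matrices"])

print(second_matrix_group)  # ['m0', 'm1'] — only the first node overlaps
```

Only the surviving matrices are uploaded in the next step, so off-screen objects cost neither shader constants nor draw work.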
S4, acquiring the new model to be rendered as a model to be rendered, writing the second matrix group corresponding to the model to be rendered into a vertex shader, and generating one drawing instruction for the model to be rendered.
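Step S4 and the subsequent vertex stage can be sketched as a plain Python simulation: the second matrix group is treated as the shader's constant array, and a single "draw instruction" transforms every batched vertex by the matrix selected by its instance index. Zero matrices pad unused slots (as in claim 2) so surplus instances collapse to a degenerate point. This is an illustrative model of the mechanism, not GPU code.

```python
def mat_vec(m, v):
    """Multiply a 2x2 matrix by a 2D vertex (small sizes for brevity)."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

IDENTITY = ((1, 0), (0, 1))
ZERO = ((0, 0), (0, 0))  # a zeroed slot collapses its instance to the origin

def draw_once(batched_vertices, matrix_group):
    """Simulate one draw call: each vertex indexes the constant matrix
    group with its instance index and is multiplied by that matrix."""
    return [mat_vec(matrix_group[v["instance"]], v["pos"])
            for v in batched_vertices]

vertices = [{"pos": (1, 2), "instance": 0},
            {"pos": (1, 2), "instance": 1}]
out = draw_once(vertices, [IDENTITY, ZERO])
print(out)  # [(1, 2), (0, 0)] — the zero matrix suppresses the spare instance
```

The key point is that one call to draw_once covers every instance in the batch, which is what reduces the DrawCall count from "one per object" to "one per batch".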
It should be noted that specific examples in this embodiment may refer to the examples described in the foregoing embodiments and alternative implementations, and the details are not repeated here.
In addition, in combination with the method for rendering large-scale identical objects in the above embodiments, an embodiment of the application provides a storage medium. The storage medium stores a computer program which, when executed by a processor, implements the method for rendering large-scale identical objects of any of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the method for rendering large-scale identical objects. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
In one embodiment, an electronic device is provided, which may be a server; fig. 5 is a schematic diagram of its internal structure according to an embodiment of the present application. As shown in fig. 5, the electronic device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the electronic device is used for storing data. The network interface of the electronic device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements the method for rendering large-scale identical objects.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the electronic device to which the present inventive arrangements are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
It should be understood by those skilled in the art that the technical features of the above-described embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above-described embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
Claims (9)
1. A method of large-scale identical object rendering, the method comprising:
Obtaining a scene to be rendered, counting the total number of object types in the scene to be rendered, and synthesizing a new model for each object type from a preset number of objects of the same type;
Dividing the scene to be rendered with a quadtree to obtain information to be rendered, storing the object parameter information contained in each quadtree node into a first matrix, and storing the first matrices into a file, wherein the new model corresponds to the preset number of first matrices, which form a first matrix group;
Performing visibility detection on the information to be rendered, obtaining visible quadtree nodes according to the visibility detection result, and obtaining visible object types according to the visible quadtree nodes, wherein the first matrix group corresponding to the visible object types is a second matrix group;
Acquiring the new model to be rendered as a model to be rendered, writing the second matrix group corresponding to the model to be rendered into a vertex shader, and generating one drawing instruction for the model to be rendered;
After the drawing instruction is generated for the model to be rendered, the method comprises: obtaining the model vertex stream elements of the model to be rendered, indexing the vertex shader with the model vertex stream elements to obtain the second matrix group, multiplying the vertices of the model to be rendered by the corresponding second matrices to obtain an output vertex matrix, and rendering the model to be rendered according to the output vertex matrix.
2. The method of claim 1, wherein after the new model to be rendered is acquired as the model to be rendered, the method further comprises: determining whether the number of objects to be rendered in the model to be rendered is less than the preset number; if so, setting the second matrices of the vacant part of the second matrix group to zero, the zeroed second matrix group being a third matrix group, writing the third matrix group into the vertex shader, and generating a drawing instruction for the model to be rendered.
3. The method of claim 1, wherein after a drawing instruction is generated for the model to be rendered, the method comprises: determining whether the number of times a drawing instruction has been generated is less than the total number of visible object types, and if so, jumping back to the step of acquiring the new model to be rendered as the model to be rendered.
4. The method of claim 1, wherein after writing the second matrix set corresponding to the model to be rendered into a vertex shader, the method includes traversing a delete buffer, determining whether the second matrix is in the delete buffer, if so, setting the second matrix as a zero matrix, and generating a drawing instruction for the model to be rendered.
5. The method of claim 1, wherein after generating a drawing instruction for the model to be rendered, the method comprises traversing an add buffer, writing the second matrix in the add buffer into the vertex shader, and generating a drawing instruction for the new model to which the second matrix corresponds.
6. The method of claim 1, wherein the object types comprise a vegetation type, and the object parameter information comprises object type, quantity, position, and orientation information.
7. A system for large-scale identical object rendering is characterized in that the system comprises a synthesis module, a division module, a detection module and a drawing instruction module,
The synthesizing module is used for acquiring a scene to be rendered, counting the total number of object types in the scene to be rendered, and synthesizing a new model for the objects by using the same type of objects with the preset number;
The dividing module is configured to divide the scene to be rendered by using a quadtree to obtain information to be rendered, store object parameter information contained in a quadtree node into a first matrix, and store the first matrix into a file, where the new model includes the first matrices of the preset number, and is a first matrix group;
The detection module is used for carrying out visibility detection on the information to be rendered, obtaining visible quadtree nodes according to the visibility detection result, obtaining visible object types according to the visible quadtree nodes, wherein the first matrix group corresponding to the visible object types is a second matrix group;
The drawing instruction module is used for obtaining the new model to be rendered as a model to be rendered, writing the second matrix group corresponding to the model to be rendered into a vertex shader, generating a drawing instruction for the model to be rendered, obtaining model vertex stream elements of the model to be rendered after generating the drawing instruction for the model to be rendered, indexing the vertex shader by using the model vertex stream elements to obtain the second matrix group, multiplying the vertex of the model to be rendered by the corresponding second matrix to obtain an output vertex matrix, and rendering the model to be rendered according to the output vertex matrix.
8. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of large scale identical object rendering of any one of claims 1 to 6.
9. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of large scale identical object rendering of any one of claims 1 to 6 at run-time.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011326125.5A CN112473127B (en) | 2020-11-24 | 2020-11-24 | Method, system, electronic device and storage medium for rendering large-scale identical objects |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112473127A CN112473127A (en) | 2021-03-12 |
| CN112473127B true CN112473127B (en) | 2024-12-06 |
Family
ID=74933300
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011326125.5A Active CN112473127B (en) | 2020-11-24 | 2020-11-24 | Method, system, electronic device and storage medium for rendering large-scale identical objects |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112473127B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114529648A (en) * | 2022-02-18 | 2022-05-24 | 北京市商汤科技开发有限公司 | Model display method, device, apparatus, electronic device and storage medium |
| CN115758028A (en) * | 2022-11-24 | 2023-03-07 | 昆船智能技术股份有限公司 | Js engine-based webpage three-dimensional large scene building system and performance optimization method |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111063032A (en) * | 2019-12-26 | 2020-04-24 | 北京像素软件科技股份有限公司 | Model rendering method and system and electronic device |
| CN111340926A (en) * | 2020-03-25 | 2020-06-26 | 北京畅游创想软件技术有限公司 | Rendering method and device |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7889205B1 (en) * | 2006-10-24 | 2011-02-15 | Adobe Systems Incorporated | Frame buffer based transparency group computation on a GPU without context switching |
| US8976168B2 (en) * | 2011-11-02 | 2015-03-10 | Microsoft Technology Licensing, Llc | Mesh generation from depth images |
| CN106780686B (en) * | 2015-11-20 | 2020-07-10 | 网易(杭州)网络有限公司 | 3D model merging and rendering system and method, and terminal |
| US10553013B2 (en) * | 2018-04-16 | 2020-02-04 | Facebook Technologies, Llc | Systems and methods for reducing rendering latency |
| CN109523621B (en) * | 2018-11-15 | 2020-11-10 | 腾讯科技(深圳)有限公司 | Object loading method and device, storage medium and electronic device |
| CN109978981B (en) * | 2019-03-15 | 2023-04-25 | 广联达科技股份有限公司 | Batch rendering method for improving display efficiency of building model |
| CN110738721B (en) * | 2019-10-12 | 2023-09-01 | 四川航天神坤科技有限公司 | Three-dimensional scene rendering acceleration method and system based on video geometric analysis |
| CN110992459B (en) * | 2019-11-04 | 2022-07-08 | 江苏艾佳家居用品有限公司 | Indoor scene rendering and optimizing method based on partitions |
| CN111784812B (en) * | 2020-06-09 | 2024-05-07 | 北京五一视界数字孪生科技股份有限公司 | Rendering method and device, storage medium and electronic equipment |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111063032A (en) * | 2019-12-26 | 2020-04-24 | 北京像素软件科技股份有限公司 | Model rendering method and system and electronic device |
| CN111340926A (en) * | 2020-03-25 | 2020-06-26 | 北京畅游创想软件技术有限公司 | Rendering method and device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11270497B2 (en) | Object loading method and apparatus, storage medium, and electronic device | |
| CN112473127B (en) | Method, system, electronic device and storage medium for rendering large-scale identical objects | |
| CN112489172B (en) | Method, system, electronic device and storage medium for producing skeletal animation | |
| KR101953133B1 (en) | Apparatus and method for rendering | |
| CN114596423A (en) | Model rendering method, device and computer equipment based on virtual scene meshing | |
| US20210295138A1 (en) | Neural network processing | |
| CN114693856B (en) | Object generation method and device, computer equipment and storage medium | |
| US20250252657A1 (en) | Multi-view image generation apparatus and method, and graphics processing unit | |
| CN116051345A (en) | Image data processing method, device, computer equipment and readable storage medium | |
| KR20160068204A (en) | Data processing method for mesh geometry and computer readable storage medium of recording the same | |
| WO2023197762A1 (en) | Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product | |
| CN116012507A (en) | Rendering data processing method and device, electronic equipment and storage medium | |
| WO2022100059A1 (en) | Data storage management method, object rendering method, and device | |
| CN116797444B (en) | Image data processing method, device, computer equipment and storage medium | |
| CN116385253B (en) | Primitive drawing method, device, computer equipment and storage medium | |
| CN115410694A (en) | Image visualization layout method and device, image analysis system and storage medium | |
| CN114307158B (en) | Three-dimensional virtual scene data generation method and device, storage medium and terminal | |
| CN110853143B (en) | Scene realization method, device, computer equipment and storage medium | |
| CN110384926B (en) | Position determining method and device | |
| CN117744187B (en) | CAD drawing method, device, computer equipment and storage medium | |
| CN118657868B (en) | Ray tracing calculation method, device, equipment and storage medium | |
| CN118505886A (en) | Object rendering method and device, storage medium and computer equipment | |
| CN117873464B (en) | Component generating method, device, computer apparatus, storage medium, and program product | |
| US12211080B2 (en) | Techniques for performing matrix computations using hierarchical representations of sparse matrices | |
| US20250265773A1 (en) | Shadow rendering through grid-based light resource clustering |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||