CN115018968B - Image rendering method and device, storage medium and electronic equipment
- Publication number
- CN115018968B (application CN202210652062.5A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- noise
- target
- image
- sampling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Generation (AREA)
Abstract
The disclosure relates to an image rendering method and device, a storage medium, and an electronic device. The method includes: determining a target noise image; determining a target sampling mode for the target noise image according to a target rendering material to be rendered, the target sampling mode including sampling times and sampling parameters of each sampling; sampling the target noise image multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples; and performing image rendering according to the noise samples to obtain a rendered target image. Using the target noise image as the rendering texture increases the randomness of the texture and thus the realism of the rendering, and effectively avoids the magnification distortion caused by the limited resolution of a fixed texture map. At the same time, deformation matching the target rendering material can be synchronized into the rendering model during rendering, so the rendering model itself deforms, which improves the realism and stereoscopic impression of the rendering result.
Description
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to an image rendering method, an image rendering device, a storage medium and electronic equipment.
Background
In scenes such as movie special effects, games, AR (Augmented Reality), and VR (Virtual Reality), a rendering engine is usually required to generate realistic images or animations, such as dynamic effects simulating water, smoke, or cloud. However, strict physical computation (e.g., fluid simulation) has high computational complexity and cannot run in real time. In the related art, the common solution is to use a static texture map combined with UV (texture map coordinate) animation that translates, rotates, and superimposes the map, so as to approximately simulate the corresponding dynamic effect. This approach has several drawbacks: the texture produced by UV animation repeats in an obvious cycle, which does not match the randomness of fluid motion such as water, smoke, and cloud; the static texture map has a fixed resolution, so the image distorts when magnified; and because the texture map is simply attached to the surface of the rendering mesh, it causes no spatial geometric change of the mesh. The generated images and animations therefore look unrealistic and the simulation effect is poor.
Disclosure of Invention
The purpose of the present disclosure is to provide an image rendering method, an image rendering device, a storage medium and an electronic device, which are beneficial to improving stereoscopic impression and realism of image rendering.
To achieve the above object, according to a first aspect of the present disclosure, there is provided an image rendering method including:
Determining a target noise image;
Determining a target sampling mode for the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises sampling times and sampling parameters of each sampling;
Sampling the target noise image multiple times according to the sampling times and sampling parameters of each sampling to obtain noise samples;
and performing image rendering according to the noise samples to obtain a rendered target image.
Optionally, the determining the target noise image includes:
generating an initial noise image in response to identifying the image rendering request;
and carrying out continuous processing on the initial noise image to obtain the target noise image.
Optionally, the generating the initial noise image includes:
generating a noise grid, and generating a random number at each node in the noise grid to obtain the initial noise image;
the step of continuously processing the initial noise image to obtain the target noise image includes:
And carrying out interpolation processing on the non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
Optionally, the determining, according to the target rendering material to be rendered, a target sampling manner for the target noise image includes:
and determining a sampling mode corresponding to the target rendering material as the target sampling mode according to the corresponding relation between the pre-defined rendering material and the sampling mode, wherein the sampling parameters at least comprise sampling frequency.
Optionally, the performing image rendering according to the noise sampling to obtain a rendered target image includes:
Determining a rendering model for image rendering, the rendering model being made up of a plurality of model meshes;
according to a predefined corresponding relation between the rendering material and the noise superposition mode, determining a noise superposition mode corresponding to the target rendering material as a target superposition mode, wherein the target superposition mode is used for indicating superposition sequence and/or superposition direction for noise sampling;
sequentially overlapping the noise samples to grid vertexes of the rendering model according to the target overlapping mode to obtain rendering parameters of the rendering model;
And performing image rendering according to the rendering parameters to obtain the target image.
Optionally, the target superposition methods include a first noise superposition method for model mesh vertices in the rendering model and a second noise superposition method for pixel points in the rendering model;
Sequentially stacking the noise samples to grid vertices of the rendering model according to the target stacking manner to obtain rendering parameters of the rendering model, including:
according to the first noise superposition mode, superposing the noise samples indicated by the first noise superposition mode to model grid vertexes in the rendering model to obtain a first superposition result, wherein the first superposition result is at least used for indicating a first position of each model grid vertex in the rendering model;
According to the second noise superposition mode, superposing the noise samples indicated by the second noise superposition mode to the pixel points in the first superposition result to obtain a second superposition result, wherein the second superposition result is at least used for indicating a second position of the pixel points in the rendering model;
performing illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameters.
Optionally, the target rendering material is water, and the first noise superposition mode is used for simulating the big wave of the water, and the second noise superposition mode is used for simulating the big wave and the small wave of the water.
According to a second aspect of the present disclosure, there is provided an image rendering apparatus, the apparatus comprising:
the first determining module is used for determining a target noise image;
The second determining module is used for determining a target sampling mode aiming at the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises sampling times and sampling parameters of each sampling;
the sampling module is used for sampling the target noise image for multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples;
and the rendering module is used for performing image rendering according to the noise samples to obtain a rendered target image.
Optionally, the first determining module includes:
A generation sub-module for generating an initial noise image in response to identifying the image rendering request;
and the processing sub-module is used for carrying out continuous processing on the initial noise image to obtain the target noise image.
Optionally, the generating submodule is used for generating a noise grid and generating a random number at each node in the noise grid to obtain the initial noise image;
the processing sub-module is used for carrying out interpolation processing on the non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
Optionally, the second determining module is configured to determine, according to a predefined correspondence between a rendering material and a sampling manner, a sampling manner corresponding to the target rendering material as the target sampling manner, where the sampling parameter includes at least a sampling frequency.
Optionally, the rendering module includes:
A first determination sub-module for determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
The second determining submodule is used for determining a noise superposition mode corresponding to the target rendering material according to the corresponding relation between the pre-defined rendering material and the noise superposition mode, and the noise superposition mode is used as a target superposition mode and used for indicating the superposition sequence and/or the superposition direction aiming at noise sampling;
the first superposition sub-module is used for sequentially superposing the noise samples to grid vertexes of the rendering model according to the target superposition mode so as to obtain rendering parameters of the rendering model;
And the rendering sub-module is used for performing image rendering according to the rendering parameters to obtain the target image.
Optionally, the target superposition methods include a first noise superposition method for model mesh vertices in the rendering model and a second noise superposition method for pixel points in the rendering model;
the first superposition sub-module comprises:
The second superposition sub-module is used for superposing the noise samples indicated by the first noise superposition mode to the model grid vertexes in the rendering model according to the first noise superposition mode to obtain a first superposition result, and the first superposition result is at least used for indicating the first positions of the model grid vertexes in the rendering model;
The third superposition sub-module is used for superposing the noise samples indicated by the second noise superposition mode to the pixel points in the first superposition result according to the second noise superposition mode to obtain a second superposition result, and the second superposition result is at least used for indicating a second position of the pixel points in the rendering model;
The calculation sub-module is used for carrying out illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
And the third determining submodule is used for taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameter.
Optionally, the target rendering material is water, and the first noise superposition mode is used for simulating the big wave of the water, and the second noise superposition mode is used for simulating the big wave and the small wave of the water.
According to a third aspect of the present disclosure there is provided a non-transitory computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
A processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
Through the above technical solution, a target noise image is determined; a target sampling mode for the target noise image is determined according to the target rendering material to be rendered, the target sampling mode including sampling times and sampling parameters of each sampling; the target noise image is sampled multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples; and image rendering is performed according to the noise samples to obtain the rendered target image. Because the generated target noise image serves as the rendering texture, the randomness of the texture is increased: the rendered image is not a monotonous repetition or translation of a texture map, so the realism of the rendering improves. Sampling from the target noise image also avoids the magnification distortion caused by the limited resolution of a fixed texture map. Meanwhile, the noise sampled for the target rendering material can synchronize deformation matching that material into the rendering model during image rendering, so the rendering model itself deforms, which improves the realism and stereoscopic impression of the rendering result.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
FIG. 1 is a flow chart of an image rendering method provided in accordance with one embodiment of the present disclosure;
FIG. 2 is a block diagram of an image rendering apparatus provided according to one embodiment of the present disclosure;
Fig. 3 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
Fig. 1 is a flowchart of an image rendering method provided according to one embodiment of the present disclosure. As shown in fig. 1, the method may include steps 11 to 14.
In step 11, a target noise image is determined.
The target noise image may be obtained using existing noise generation techniques. Since materials in nature (for example, water, smoke, and cloud) have randomness but are not completely disordered, the target noise image should be continuous in order to enhance the realism of image rendering. For example, the target noise image may be generated using value noise, gradient noise, Perlin noise, or similar techniques.
In one possible embodiment, step 11 may comprise the steps of:
generating an initial noise image in response to identifying the image rendering request;
And carrying out continuous processing on the initial noise image to obtain a target noise image.
In order to improve the realism of image rendering, the target noise image used in the current rendering can be generated in real time when an image rendering request is identified; that is, every time an image rendering request is identified, a noise image is generated by a chosen algorithm and used as the target noise image for the current rendering.
Illustratively, generating the initial noise image may be accomplished by:
a noise grid is generated and a random number is generated at each node in the noise grid to obtain an initial noise image.
Typically, an initial noise grid (i.e., lattice) may be generated first. The structure of the noise grid may be set according to actual requirements: for two-dimensional image rendering a two-dimensional grid may be generated as the noise grid, and for three-dimensional image rendering a three-dimensional grid may be generated.
After the noise grid is generated, a pseudo-random number can be generated at each grid node to obtain the initial noise image. The initial noise image resembles the static "snow" of an old-fashioned television: it has high spatial frequency and strong randomness but no continuity, so it cannot be used directly as the target noise image and continuous processing is required.
After the initial noise image is obtained, it can be processed with a continuous processing mode to obtain a continuous target noise image. Different continuous processing modes yield target noise images with different characteristics, so the mode can be set according to actual requirements; this is not limited by the disclosure.
Illustratively, the continuous processing is performed on the initial noise image to obtain the target noise image, which can be implemented in the following manner:
and carrying out interpolation processing on non-node parts in the noise grid based on random numbers corresponding to nodes in the initial noise image to obtain a target noise image.
After the initial noise image is obtained, continuous processing can be applied to it by interpolation to obtain the target noise image. Different interpolation methods yield target noise images with different characteristics, and the interpolation method can be chosen freely according to actual requirements; this is not limited by the disclosure. For example, cubic spline interpolation may be used to obtain a smoother target noise image.
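As a non-limiting illustration only, the following Python sketch shows how such a target noise image could be produced in a value-noise style: pseudo-random numbers at the nodes of a noise grid, with the non-node parts filled in by smooth interpolation. The grid size, image size, and smoothstep weighting are assumptions of the sketch rather than requirements of the method.

```python
import numpy as np

def generate_target_noise(grid_size=16, image_size=256, seed=None):
    """Value-noise sketch: random numbers at noise-grid nodes, smooth
    interpolation for the non-node parts (all names are illustrative)."""
    rng = np.random.default_rng(seed)
    # Initial noise image: one pseudo-random value per noise-grid node.
    lattice = rng.random((grid_size + 1, grid_size + 1))

    # Position of every output pixel expressed in grid coordinates.
    coords = np.linspace(0, grid_size, image_size, endpoint=False)
    gx, gy = np.meshgrid(coords, coords)
    x0, y0 = gx.astype(int), gy.astype(int)
    tx, ty = gx - x0, gy - y0

    # Smoothstep weights make the interpolation continuous rather than blocky.
    sx, sy = tx * tx * (3 - 2 * tx), ty * ty * (3 - 2 * ty)

    c00 = lattice[y0, x0]
    c10 = lattice[y0, x0 + 1]
    c01 = lattice[y0 + 1, x0]
    c11 = lattice[y0 + 1, x0 + 1]

    top = c00 + sx * (c10 - c00)
    bottom = c01 + sx * (c11 - c01)
    return top + sy * (bottom - top)   # continuous target noise image
```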
In step 12, a target sampling manner for the target noise image is determined according to the target rendering material to be rendered.
The target noise image is a single fixed noise map. In order to obtain richer noise morphology for rendering, multiple samplings and superpositions may be performed based on the target noise image.
Therefore, for the target rendering material to be rendered, a target sampling mode for the target noise image can be determined for the later sampling and superposition. The target sampling mode may include sampling times and sampling parameters of each sampling.
Illustratively, the sampling parameters may include, but are not limited to, the sampling frequency, the sampling position, and so on. For example, the target noise image may be sampled multiple times with the sampling positions shifted according to a certain rule. In a possible embodiment, spectral synthesis may be used; that is, each sampling doubles the frequency of the previous sampling and halves the intensity of its sampling result.
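A minimal sketch of this spectral-synthesis sampling, assuming a single-channel noise image addressed by normalized UV coordinates, is shown below; the wrap-around lookup and the per-sample position offset are illustrative choices, not part of the disclosure.

```python
def sample_noise(noise_img, u, v):
    """Wrap-around point lookup of the target noise image at normalized (u, v)."""
    h, w = noise_img.shape
    return noise_img[int(v * h) % h, int(u * w) % w]

def spectral_sampling(noise_img, u, v, num_samples=5, base_freq=1.0, offset=0.37):
    """Spectral synthesis: each sampling doubles the frequency of the previous
    one and halves its intensity; the per-sample position offset is illustrative."""
    samples = []
    freq, amp = base_freq, 1.0
    for i in range(num_samples):
        value = amp * sample_noise(noise_img, u * freq + i * offset, v * freq + i * offset)
        samples.append((freq, value))   # keep the frequency for later ordering
        freq *= 2.0
        amp *= 0.5
    return samples
```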
Different materials have different structures, so different rendering materials use noise in different ways during the rendering process; the way noise is used therefore needs to be set specifically for each rendering material so that the added noise data fits that material.
For example, for a water material, the water surface can be assumed to be composed of waves of different frequencies: high-frequency waves appear as small ripples with small amplitude and frequently changing direction, while low-frequency waves appear as large swells with large amplitude and essentially constant direction. Based on these characteristics of the water material, the corresponding sampling mode can be set so that the sampled data restores both the large waves and the small waves of the water. For example, the sampling mode may specify N samplings in total, of which M are taken at frequencies below a preset frequency (to restore the large waves) and N-M at frequencies above the preset frequency (to restore the small waves), where M and N are positive integers and M < N.
In one possible embodiment, step 12 may comprise the following steps:
determining a sampling mode corresponding to the target rendering material as the target sampling mode according to a predefined correspondence between rendering materials and sampling modes.
Wherein, as described above, the sampling parameters may include at least a sampling frequency.
As described above, the sampling mode of a rendering material may be set according to the characteristics of the material itself. On this basis, a correspondence between rendering materials and sampling modes can be built in advance; for example, if the rendering materials are water, smoke, and cloud, each of the three corresponds to one sampling mode. The sampling mode corresponding to the target rendering material can then be obtained from this correspondence and used as the target sampling mode.
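Such a correspondence could be stored as a simple lookup table. The following sketch is only an assumption about one possible form of that table; the material names, sample counts, and frequencies are purely illustrative.

```python
# Illustrative correspondence between rendering materials and sampling modes;
# the material names, sample counts, and frequencies are assumptions.
SAMPLING_MODES = {
    "water": {"num_samples": 6, "frequencies": [0.5, 1.0, 4.0, 8.0, 16.0, 32.0]},
    "smoke": {"num_samples": 4, "frequencies": [1.0, 2.0, 4.0, 8.0]},
    "cloud": {"num_samples": 5, "frequencies": [0.5, 1.0, 2.0, 4.0, 8.0]},
}

def target_sampling_mode(target_material):
    """Return the sampling mode corresponding to the target rendering material."""
    if target_material not in SAMPLING_MODES:
        raise ValueError(f"no sampling mode defined for material: {target_material}")
    return SAMPLING_MODES[target_material]
```

A call such as target_sampling_mode("water") would then return the sampling mode used in the later sampling step.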
In step 13, the target noise image is sampled multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples.
After the target sampling mode is determined, the target noise image can be sampled multiple times according to the sampling times indicated by the target sampling mode and the sampling parameters of each sampling, so as to obtain the noise sample produced by each sampling.
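Continuing the earlier sketches, step 13 could be realized as below, where mode is the sampling-mode entry looked up in step 12 and sample_fn is any point lookup of the target noise image (for example, the sample_noise helper sketched above); the frequency-based attenuation is an assumption of the sketch.

```python
def collect_noise_samples(noise_img, mode, u, v, sample_fn):
    """Sample the target noise image once per frequency listed in the target
    sampling mode; higher frequencies are attenuated (weighting is an assumption)."""
    samples = []
    for freq in mode["frequencies"]:
        value = sample_fn(noise_img, u * freq, v * freq) / max(freq, 1.0)
        samples.append((freq, value))
    return samples
```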
In step 14, image rendering is performed according to the noise samples, resulting in a rendered target image.
In one possible embodiment, step 14 may comprise the steps of:
Determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
Determining, according to a predefined correspondence between rendering materials and noise superposition modes, the noise superposition mode corresponding to the target rendering material as a target superposition mode;
sequentially superposing the noise samples to grid vertexes of the rendering model according to a target superposition mode to obtain rendering parameters of the rendering model;
and performing image rendering according to the rendering parameters to obtain a target image.
The target superposition mode may be used to indicate the superposition order and/or the superposition direction of the noise samples. For example, the superposition order may be determined by the sampling frequency of each noise sample (for example, from low frequency to high frequency). As another example, the superposition direction may be a fixed direction, or may vary with frequency (for example, rotating as the frequency increases).
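One possible, purely illustrative realization of such a target superposition mode is sketched below: the noise samples are ordered from low to high frequency and each is given a superposition direction that rotates as the frequency increases. The rotation step per octave is an assumption.

```python
import math

def order_and_orient(samples, rotate_per_octave_deg=30.0):
    """Sort (frequency, value) noise samples from low to high frequency and give
    each a superposition direction that rotates with increasing frequency."""
    ordered = sorted(samples, key=lambda fs: fs[0])
    oriented = []
    for i, (freq, value) in enumerate(ordered):
        angle = math.radians(i * rotate_per_octave_deg)
        direction = (math.cos(angle), math.sin(angle))   # unit direction in the UV plane
        oriented.append((freq, value, direction))
    return oriented
```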
Image rendering is generally performed on the basis of a rendering model composed of a plurality of model meshes, which may be triangular patches, quadrilateral patches, and so on. In the present disclosure, triangular model meshes are typically used to improve rendering accuracy. The rendering model depends on the rendering scene: a two-dimensional model is used for two-dimensional image rendering, and a three-dimensional model is used for three-dimensional image rendering.
In one possible implementation, the target superposition approach may include a first noise superposition approach for model mesh vertices in the rendering model and a second noise superposition approach for pixel points in the rendering model. Accordingly, sequentially superimposing noise samples to mesh vertices of the rendering model according to a target superimposing manner to obtain rendering parameters of the rendering model, may include the following steps:
according to a first noise superposition mode, superposing noise samples indicated by the first noise superposition mode to model grid vertexes in a rendering model to obtain a first superposition result;
According to a second noise superposition mode, superposing noise samples indicated by the second noise superposition mode to pixel points in the first superposition result to obtain a second superposition result;
carrying out illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and taking the second position and the pixel value of the pixel point in the rendering model as rendering parameters.
The first noise superposition mode operates only on the model mesh vertices of the rendering model and ignores the non-vertex parts, so the first superposition result is relatively coarse. After the processing according to the first noise superposition mode, the result is further processed according to the second noise superposition mode, which operates on every pixel point in the rendering model, covering both vertex and non-vertex parts; the second superposition result is therefore finer, which benefits the accuracy and realism of the rendering.
The first superposition result may be obtained in the vertex shader, and the second superposition result and the result of the illumination calculation on it may be obtained in the fragment shader (also called the pixel shader).
The vertex shader obtains the position coordinates of the model mesh vertices from the rendering model and superimposes the corresponding noise samples onto the model mesh vertices according to the first noise superposition mode to obtain the first superposition result. The first superposition result is at least used to indicate the first position of each model mesh vertex in the rendering model.
The fragment shader obtains the first position of each model mesh vertex and the position coordinates of the pixel points near the vertex from the first superposition result, and superimposes the corresponding noise samples onto the model mesh vertices and their nearby pixel points according to the second noise superposition mode to obtain the second superposition result. The second superposition result is at least used to indicate the second position of the pixel points (the model mesh vertices and the nearby pixel points) in the rendering model.
For example, if the target rendering material is water, which as described above has two forms, large waves and small waves, the first noise superposition mode may be used to simulate the large waves of the water and the second noise superposition mode may be used to simulate both the large and the small waves. That is, the large waves are superimposed on the model mesh vertices of the rendering model, and both large and small waves are further superimposed on the model mesh vertices and their adjacent pixel points, which improves the realism and naturalness of the rendering result.
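For intuition only, the following Python mock imitates what the two superposition stages could do for a water material; in practice this logic runs on the GPU in the vertex and fragment shaders. The (frequency, value, direction) triples are assumed to come from the ordering sketch above, the directions are ignored for simplicity, and the vertical-only displacement and the frequency weighting are assumptions of the sketch.

```python
def vertex_stage(vertex_positions, low_freq_samples, amplitude=1.0):
    """First superposition (vertex-shader analogue): displace model mesh
    vertices with low-frequency samples only, forming the large waves."""
    displaced = []
    for (x, y, z) in vertex_positions:
        height = sum(value for _freq, value, _direction in low_freq_samples)
        displaced.append((x, y + amplitude * height, z))   # "first positions"
    return displaced

def fragment_stage(pixel_positions, all_samples, amplitude=0.2):
    """Second superposition (fragment-shader analogue): refine every pixel with
    both low- and high-frequency samples, i.e. large and small waves together."""
    refined = []
    for (x, y, z) in pixel_positions:
        height = sum(value / freq for freq, value, _direction in all_samples)
        refined.append((x, y + amplitude * height, z))     # "second positions"
    return refined
```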
Thereafter, the fragment shader may perform an illumination calculation based on the second superposition result to obtain the pixel values of the pixel points in the rendering model. For example, the illumination calculation may use PBR (Physically Based Rendering) or the Blinn-Phong illumination model, among others.
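As an example of the second option, a minimal Blinn-Phong evaluation for a single pixel could look as follows; the constants (ambient term, shininess) are illustrative, and the PBR alternative is not shown.

```python
import numpy as np

def blinn_phong(normal, light_dir, view_dir, base_color,
                light_color=(1.0, 1.0, 1.0), ambient=0.1, shininess=32.0):
    """Blinn-Phong shading for one pixel; inputs are unit vectors and RGB
    triples, and the constants are illustrative."""
    n, l, v = (np.asarray(x, dtype=float) for x in (normal, light_dir, view_dir))
    base, light = np.asarray(base_color, float), np.asarray(light_color, float)

    diffuse = max(float(np.dot(n, l)), 0.0)
    h = l + v
    h = h / (np.linalg.norm(h) + 1e-8)                    # half vector
    specular = max(float(np.dot(n, h)), 0.0) ** shininess

    color = base * (ambient + diffuse) * light + specular * light
    return np.clip(color, 0.0, 1.0)                       # pixel value of this point
```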
Based on the above steps, the second positions of the pixel points in the rendering model and the corresponding pixel values are obtained; these data can serve as the rendering parameters required for image rendering, so that the rendering engine can perform image rendering based on them and obtain the rendered target image.
Through the above technical solution, a target noise image is determined; a target sampling mode for the target noise image is determined according to the target rendering material to be rendered, the target sampling mode including sampling times and sampling parameters of each sampling; the target noise image is sampled multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples; and image rendering is performed according to the noise samples to obtain the rendered target image. Because the generated target noise image serves as the rendering texture, the randomness of the texture is increased: the rendered image is not a monotonous repetition or translation of a texture map, so the realism of the rendering improves. Sampling from the target noise image also avoids the magnification distortion caused by the limited resolution of a fixed texture map. Meanwhile, the noise sampled for the target rendering material can synchronize deformation matching that material into the rendering model during image rendering, so the rendering model itself deforms, which improves the realism and stereoscopic impression of the rendering result.
Fig. 2 is a block diagram of an image rendering apparatus provided according to one embodiment of the present disclosure. As shown in fig. 2, the apparatus 20 includes:
A first determining module 21 for determining a target noise image;
A second determining module 22, configured to determine a target sampling manner for the target noise image according to a target rendering material to be rendered, where the target sampling manner includes a sampling number and sampling parameters of each sampling;
the sampling module 23 is configured to sample the target noise image for multiple times according to the sampling times and sampling parameters of each sampling, so as to obtain a noise sample;
And the rendering module 24 is used for performing image rendering according to the noise samples to obtain a rendered target image.
Optionally, the first determining module 21 includes:
A generation sub-module for generating an initial noise image in response to identifying the image rendering request;
and the processing sub-module is used for carrying out continuous processing on the initial noise image to obtain the target noise image.
Optionally, the generating submodule is used for generating a noise grid and generating a random number at each node in the noise grid to obtain the initial noise image;
the processing sub-module is used for carrying out interpolation processing on the non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
Optionally, the second determining module 22 is configured to determine, as the target sampling mode, a sampling mode corresponding to the target rendering material according to a predefined correspondence between the rendering material and the sampling mode, where the sampling parameter includes at least a sampling frequency.
Optionally, the rendering module 24 includes:
A first determination sub-module for determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
The second determining submodule is used for determining a noise superposition mode corresponding to the target rendering material according to the corresponding relation between the pre-defined rendering material and the noise superposition mode, and the noise superposition mode is used as a target superposition mode and used for indicating the superposition sequence and/or the superposition direction aiming at noise sampling;
the first superposition sub-module is used for sequentially superposing the noise samples to grid vertexes of the rendering model according to the target superposition mode so as to obtain rendering parameters of the rendering model;
And the rendering sub-module is used for performing image rendering according to the rendering parameters to obtain the target image.
Optionally, the target superposition methods include a first noise superposition method for model mesh vertices in the rendering model and a second noise superposition method for pixel points in the rendering model;
the first superposition sub-module comprises:
The second superposition sub-module is used for superposing the noise samples indicated by the first noise superposition mode to the model grid vertexes in the rendering model according to the first noise superposition mode to obtain a first superposition result, and the first superposition result is at least used for indicating the first positions of the model grid vertexes in the rendering model;
The third superposition sub-module is used for superposing the noise samples indicated by the second noise superposition mode to the pixel points in the first superposition result according to the second noise superposition mode to obtain a second superposition result, and the second superposition result is at least used for indicating a second position of the pixel points in the rendering model;
The calculation sub-module is used for carrying out illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
And the third determining submodule is used for taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameter.
Optionally, the target rendering material is water, and the first noise superposition mode is used for simulating the big wave of the water, and the second noise superposition mode is used for simulating the big wave and the small wave of the water.
The specific manner in which the modules of the apparatus in the above embodiments perform their operations has been described in detail in the method embodiments and will not be repeated here.
Fig. 3 is a block diagram of an electronic device 700, according to an example embodiment. As shown in fig. 3, the electronic device 700 may include: a processor 701, a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 to perform all or part of the steps of the image rendering method described above. The memory 702 is used to store various types of data to support operation on the electronic device 700; such data may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data such as contact data, sent and received messages, pictures, audio, and video. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 702 or transmitted through the communication component 705. The audio component further includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons; the buttons may be virtual or physical. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the image rendering method described above.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the image rendering method described above is also provided. For example, the computer readable storage medium may be the memory 702 including program instructions described above, which are executable by the processor 701 of the electronic device 700 to perform the image rendering method described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of the embodiments described above, and various simple modifications may be made to the technical solutions of the present disclosure within the scope of the technical concept of the present disclosure, and all the simple modifications belong to the protection scope of the present disclosure.
In addition, the specific features described in the above embodiments may be combined in any suitable manner without contradiction. The various possible combinations are not described further in this disclosure in order to avoid unnecessary repetition.
Moreover, any combination between the various embodiments of the present disclosure is possible as long as it does not depart from the spirit of the present disclosure, which should also be construed as the disclosure of the present disclosure.
Claims (9)
1. An image rendering method, the method comprising:
Determining a target noise image;
Determining a target sampling mode aiming at the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises sampling times and sampling parameters of each sampling;
Sampling the target noise image for multiple times according to the sampling times and sampling parameters of each sampling to obtain noise samples;
performing image rendering according to the noise samples to obtain a rendered target image;
And performing image rendering according to the noise sampling to obtain a rendered target image, wherein the method comprises the following steps:
Determining a rendering model for image rendering, the rendering model being made up of a plurality of model meshes;
according to a predefined corresponding relation between the rendering material and the noise superposition mode, determining a noise superposition mode corresponding to the target rendering material as a target superposition mode, wherein the target superposition mode is used for indicating superposition sequence and/or superposition direction for noise sampling;
sequentially overlapping the noise samples to grid vertexes of the rendering model according to the target overlapping mode to obtain rendering parameters of the rendering model;
And performing image rendering according to the rendering parameters to obtain the target image.
2. The method of claim 1, wherein the determining the target noise image comprises:
generating an initial noise image in response to identifying the image rendering request;
and carrying out continuous processing on the initial noise image to obtain the target noise image.
3. The method of claim 2, wherein the generating an initial noise image comprises:
generating a noise grid, and generating a random number at each node in the noise grid to obtain the initial noise image;
the step of continuously processing the initial noise image to obtain the target noise image includes:
And carrying out interpolation processing on the non-node part in the noise grid based on the random number corresponding to the node in the initial noise image to obtain the target noise image.
4. The method according to claim 1, wherein determining a target sampling manner for the target noise image according to a target rendering material to be rendered comprises:
and determining a sampling mode corresponding to the target rendering material as the target sampling mode according to the corresponding relation between the pre-defined rendering material and the sampling mode, wherein the sampling parameters at least comprise sampling frequency.
5. The method of claim 1, wherein the target superposition approach includes a first noise superposition approach for model mesh vertices in the rendering model and a second noise superposition approach for pixel points in the rendering model;
Sequentially stacking the noise samples to grid vertices of the rendering model according to the target stacking manner to obtain rendering parameters of the rendering model, including:
according to the first noise superposition mode, superposing the noise samples indicated by the first noise superposition mode to model grid vertexes in the rendering model to obtain a first superposition result, wherein the first superposition result is at least used for indicating a first position of each model grid vertex in the rendering model;
According to the second noise superposition mode, superposing the noise samples indicated by the second noise superposition mode to the pixel points in the first superposition result to obtain a second superposition result, wherein the second superposition result is at least used for indicating a second position of the pixel points in the rendering model;
performing illumination calculation on the pixel points in the second superposition result to obtain pixel values of the pixel points in the rendering model;
and taking the second position and the pixel value of the pixel point in the rendering model as the rendering parameters.
6. The method of claim 5, wherein the target rendering material is water, and wherein the first noise superposition approach is used to simulate a large wave of water and the second noise superposition approach is used to simulate large and small waves of water.
7. An image rendering apparatus, the apparatus comprising:
the first determining module is used for determining a target noise image;
The second determining module is used for determining a target sampling mode aiming at the target noise image according to a target rendering material to be rendered, wherein the target sampling mode comprises sampling times and sampling parameters of each sampling;
the sampling module is used for sampling the target noise image for multiple times according to the sampling times and the sampling parameters of each sampling to obtain noise samples;
The rendering module is used for performing image rendering according to the noise samples to obtain a rendered target image;
The rendering module comprises:
A first determination sub-module for determining a rendering model for image rendering, the rendering model being composed of a plurality of model meshes;
The second determining submodule is used for determining a noise superposition mode corresponding to the target rendering material according to the corresponding relation between the pre-defined rendering material and the noise superposition mode, and the noise superposition mode is used as a target superposition mode and used for indicating the superposition sequence and/or the superposition direction aiming at noise sampling;
the first superposition sub-module is used for sequentially superposing the noise samples to grid vertexes of the rendering model according to the target superposition mode so as to obtain rendering parameters of the rendering model;
And the rendering sub-module is used for performing image rendering according to the rendering parameters to obtain the target image.
8. A non-transitory computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor realizes the steps of the method according to any of claims 1-6.
9. An electronic device, comprising:
a memory having a computer program stored thereon;
A processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210652062.5A CN115018968B (en) | 2022-06-09 | 2022-06-09 | Image rendering method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210652062.5A CN115018968B (en) | 2022-06-09 | 2022-06-09 | Image rendering method and device, storage medium and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115018968A (en) | 2022-09-06 |
| CN115018968B (en) | 2024-09-24 |
Family
ID=83072528
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210652062.5A Active CN115018968B (en) | 2022-06-09 | 2022-06-09 | Image rendering method and device, storage medium and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115018968B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116824018B (en) * | 2023-06-14 | 2024-10-11 | 粒界(上海)信息科技有限公司 | Rendering abnormality detection method and device, storage medium and electronic equipment |
| CN116958358A (en) * | 2023-07-27 | 2023-10-27 | 网易(杭州)网络有限公司 | Virtual object rendering method and device, storage medium and electronic equipment |
| CN117745915B (en) * | 2024-02-07 | 2024-05-17 | 西交利物浦大学 | Model rendering method, device, equipment and storage medium |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114155311A (en) * | 2021-12-06 | 2022-03-08 | 北京达佳互联信息技术有限公司 | Image rendering method and device, electronic equipment and storage medium |
| CN114359081A (en) * | 2021-12-24 | 2022-04-15 | 网易(杭州)网络有限公司 | Liquid material dissolving method, device, electronic device and storage medium |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9437040B2 (en) * | 2013-11-15 | 2016-09-06 | Nvidia Corporation | System, method, and computer program product for implementing anti-aliasing operations using a programmable sample pattern table |
| CN110827391B (en) * | 2019-11-12 | 2021-02-12 | 腾讯科技(深圳)有限公司 | Image rendering method, device and equipment and storage medium |
| CN112215934B (en) * | 2020-10-23 | 2023-08-29 | 网易(杭州)网络有限公司 | Game model rendering method and device, storage medium and electronic device |
| CN112686984B (en) * | 2020-12-18 | 2022-04-15 | 完美世界(北京)软件科技发展有限公司 | Rendering method, device, equipment and medium for sub-surface scattering effect |
| CN112652044B (en) * | 2021-01-05 | 2024-06-21 | 网易(杭州)网络有限公司 | Particle special effect rendering method, device, equipment and storage medium |
| CN113181642B (en) * | 2021-04-29 | 2024-01-26 | 网易(杭州)网络有限公司 | Method and device for generating wall model with mixed materials |
| CN114399573A (en) * | 2021-12-28 | 2022-04-26 | 网易(杭州)网络有限公司 | Texture rendering method, device, electronic device and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115018968A (en) | 2022-09-06 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |