
CN112587915B - Lighting effect presentation method and device, storage medium and computer equipment - Google Patents


Info

Publication number
CN112587915B
CN112587915B (application CN202110007953.0A)
Authority
CN
China
Prior art keywords
illumination
target
information
probe
illumination information
Prior art date
Legal status
Active
Application number
CN202110007953.0A
Other languages
Chinese (zh)
Other versions
CN112587915A (en)
Inventor
谢乃闻
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110007953.0A
Publication of CN112587915A
Application granted
Publication of CN112587915B


Classifications

    • A63F 13/28: Video games; output arrangements responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • A63F 13/52: Video games; controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/54: Video games; controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • G06T 15/50: 3D image rendering; lighting effects (G06T 15/506: illumination models)
    • Y02B 20/40: Energy efficient lighting technologies; control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The application relates to a lighting effect presentation method and device, a storage medium and computer equipment. The lighting effect presentation method comprises the following steps: determining the current time point and the current position of a mobile game object in a target virtual scene; determining a target illumination probe from a plurality of preset illumination probes according to the current position; determining target illumination information according to the current time point and the target illumination probe; and presenting a lighting effect on the mobile game object according to the target illumination information. In this way, the light-receiving effect of various special effect lights on an object can be vividly simulated in a concert scene, the illusion that stage characters are unlit is avoided, and the stage display effect is improved.

Description

Lighting effect presentation method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for presenting lighting effects, a storage medium, and a computer device.
Background
With the development of computer technology, art scene production is increasingly widely applied, for example in game development, animation production and video processing, and art scenes of all kinds are becoming ever finer and more attractive.
To further improve art scenes, concert scenes in games, for example, are designed with various special effect lights such as light beams and neon lights to enrich the stage performance. In current concert scenes, however, the display of dynamic objects (such as game characters) is not affected by the illumination of these special effect lights; that is, the lighting effect of the special effect lights cannot be presented on dynamic objects. This creates the illusion that the objects receive no light, makes the stage display less vivid, and fails to meet users' expectations for the concert scene's visuals.
Disclosure of Invention
The application aims to provide a lighting effect presentation method and device, a storage medium and computer equipment, which can vividly present the light-receiving effect of special effect light on an object and improve the stage display effect.
The embodiment of the application provides a lighting effect presentation method, which comprises the following steps:
Determining the current time point and the current position of the mobile game object in the target virtual scene;
determining a target illumination probe from a plurality of preset illumination probes according to the current position;
determining target illumination information according to the current time point and the target illumination probe;
And carrying out illumination effect presentation on the mobile game object according to the target illumination information.
Wherein, according to the current position, determining a target illumination probe from a plurality of preset illumination probes includes:
acquiring the arrangement position of each preset illumination probe in a plurality of preset illumination probes;
Determining a distance value between each of the arrangement positions and the current position;
and taking the preset illumination probe corresponding to the distance value with the smallest value as a target illumination probe.
The target virtual scene includes a concert scene, and the determining the current time point of the mobile game object in the target virtual scene includes:
determining a current playing node of current playing audio in the concert scene;
And taking the current playing node as the current time point of the mobile game object in the concert scene.
Before determining the current playing node of the current playing audio in the concert scene, the method further comprises the following steps:
Displaying a playing operation interface corresponding to the current playing audio;
acquiring a progress adjustment operation for the current playing audio through the playing operation interface;
and adjusting the playing node of the current playing audio according to the progress adjusting operation.
Wherein the determining target illumination information according to the current time point and the target illumination probe includes:
acquiring an illumination information sequence corresponding to the target illumination probe to obtain a target illumination information sequence, wherein the target illumination information sequence comprises a plurality of frames of illumination information arranged according to a time sequence;
And acquiring the illumination information corresponding to the current time point from the target illumination information sequence, and taking the acquired illumination information as target illumination information.
Before the illumination information sequence corresponding to the target illumination probe is acquired, the method further comprises the following steps:
acquiring the position of at least one virtual light source in the target virtual scene and special effect light information sent by each virtual light source at a plurality of sampling time points;
Determining a plurality of preset illumination probes and arrangement positions of each preset illumination probe according to the position of the virtual light source and the target virtual scene;
generating an illumination information sequence of each preset illumination probe according to the special effect light information and the arrangement positions;
the obtaining the illumination information sequence corresponding to the target illumination probe comprises the following steps: and acquiring the illumination information sequences corresponding to the target illumination probes from all the generated illumination information sequences.
The generating, according to the special effect light information and the arrangement position, an illumination information sequence of each preset illumination probe includes:
Generating a cube box map of each preset illumination probe at each sampling time point according to the special effect light information and the arrangement positions, wherein each preset illumination probe corresponds to different cube box maps at different sampling time points;
Performing spherical harmonic sampling processing on each cube box map to obtain a corresponding spherical harmonic illumination information set;
generating illumination information corresponding to the corresponding cube box map according to the spherical harmonic illumination information set;
generating an illumination information sequence according to the illumination information corresponding to the same preset illumination probe at different sampling time points.
The embodiment of the application also provides a lighting effect presenting device, which comprises:
The first determining module is used for determining the current time point and the current position of the mobile game object in the target virtual scene;
the second determining module is used for determining a target illumination probe from a plurality of preset illumination probes according to the current position;
the third determining module is used for determining target illumination information according to the current time point and the target illumination probe;
and the presentation module is used for presenting the illumination effect to the mobile game object according to the target illumination information.
The target virtual scene comprises a concert scene, and the first determining module is specifically configured to:
determining a current playing node of current playing audio in the concert scene;
And taking the current playing node as the current time point of the mobile game object in the concert scene.
The apparatus further includes an adjustment module, configured to:
Before the first determining module determines a current playing node of current playing audio in the concert scene, displaying a playing operation interface corresponding to the current playing audio;
acquiring a progress adjustment operation for the current playing audio through the playing operation interface;
and adjusting the playing node of the current playing audio according to the progress adjusting operation.
The second determining module is specifically configured to:
acquiring the arrangement position of each preset illumination probe in a plurality of preset illumination probes;
Determining a distance value between each of the arrangement positions and the current position;
and taking the preset illumination probe corresponding to the distance value with the smallest value as a target illumination probe.
The third determining module is specifically configured to:
acquiring an illumination information sequence corresponding to the target illumination probe to obtain a target illumination information sequence, wherein the target illumination information sequence comprises a plurality of frames of illumination information arranged according to a time sequence;
And acquiring the illumination information corresponding to the current time point from the target illumination information sequence, and taking the acquired illumination information as target illumination information.
The device further comprises a generation module for:
before acquiring the illumination information sequence corresponding to the target illumination probe, acquiring the position of at least one virtual light source in the target virtual scene and the special effect light information sent by each virtual light source at a plurality of sampling time points;
Determining a plurality of preset illumination probes and arrangement positions of each preset illumination probe according to the position of the virtual light source and the target virtual scene;
generating an illumination information sequence of each preset illumination probe according to the special effect light information and the arrangement positions;
The third determining module is specifically configured to: and acquiring the illumination information sequences corresponding to the target illumination probes from all the generated illumination information sequences.
The generating module is specifically configured to:
Generating a cube box map of each preset illumination probe at each sampling time point according to the special effect light information and the arrangement positions, wherein each preset illumination probe corresponds to different cube box maps at different sampling time points;
Performing spherical harmonic sampling processing on each cube box map to obtain a corresponding spherical harmonic illumination information set;
generating single-frame illumination information corresponding to the corresponding cube box map according to the spherical harmonic illumination information set;
generating an illumination information sequence according to the illumination information corresponding to the same preset illumination probe at different sampling time points.
The embodiment of the application also provides a computer readable storage medium, wherein a plurality of instructions are stored in the storage medium, and the instructions are suitable for being loaded by a processor to execute any one of the lighting effect presentation methods described above.
The embodiment of the application also provides computer equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps in any one of the lighting effect presentation methods when executing the computer program.
According to the lighting effect presentation method and device, storage medium and computer equipment described above, the current time point and current position of the mobile game object in the target virtual scene are determined, the target illumination probe is determined from a plurality of preset illumination probes according to the current position, the target illumination information is then determined according to the current time point and the target illumination probe, and the lighting effect is presented on the mobile game object according to the target illumination information. In this way, the light-receiving effect of various special effect lights on objects can be vividly simulated in the concert scene, the illusion that stage characters are unlit is avoided, the stage performance is enriched, and the stage display effect is improved.
Drawings
The technical solution and other advantageous effects of the present application will be made apparent by the following detailed description of the specific embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic view of a scenario of a lighting effect presenting method provided by an embodiment of the present application;
fig. 2 is a flowchart of a lighting effect presenting method according to an embodiment of the present application;
fig. 3 is an interface display schematic diagram of a concert scene provided by an embodiment of the present application;
FIG. 4 is a schematic illustration of a cubemap provided by an embodiment of the present application;
FIG. 5 is a flowchart of another lighting effect presenting method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a process for generating an illumination information sequence according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an illumination effect presenting apparatus according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another structure of a lighting effect presenting apparatus according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a lighting effect presentation method, a lighting effect presentation device, a storage medium and computer equipment.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of the lighting effect presentation method provided by an embodiment of the present application. The lighting effect presentation system may be applied to a computer device, which may specifically be a server or a terminal: the server may include a game server related to the presentation of a concert scene, and the terminal may include devices such as a smart phone, a tablet computer and a game console.
The computer device may determine a current point in time and a current location at which the mobile game object is located in the target virtual scene; determining a target illumination probe from a plurality of preset illumination probes according to the current position; determining target illumination information according to the current time point and the target illumination probe; and carrying out illumination effect presentation on the mobile game object according to the target illumination information.
The target virtual scene mainly includes virtual stage scenes involving special effect lighting, such as a concert scene in a game. Mobile game objects may include dynamic objects and temporarily static objects, which may be characters, animals and the like. The illumination probes (preset illumination probes and target illumination probes) are implemented based on a light probe technique, which provides a way to capture, store and use information about light passing through empty space. The arrangement positions and the number of the preset illumination probes are usually set in advance. The target illumination information refers to the light information of all special effect lights at the current time point stored in the target illumination probe.
For example, referring to fig. 1, the computer device may be a mobile terminal. When a user enters the concert interface in a terminal game and enters the concert scene, suppose a player A appears in the concert scene; player A may be taken as the mobile game object. As player A moves, special effect light (such as a light beam) can be presented on player A in real time: according to player A's position in the concert scene at each moment, the illumination probe closest to the current position is determined from the plurality of preset illumination probes as the target illumination probe, the special effect light information (i.e., the target illumination information) stored for the target illumination probe at the current time point is acquired, and the on-screen effect of the special effect light on player A is then simulated, for example part of player A's body surface (such as the upper half) changing from natural skin color to red.
As shown in fig. 2, fig. 2 is a schematic flow chart of a lighting effect presenting method according to an embodiment of the present application, where the lighting effect presenting method is applied to a computer device, and a specific flow may be as follows:
S101, determining the current time point and the current position of the mobile game object in the target virtual scene.
The target virtual scene mainly includes virtual stage scenes involving special effect lighting, such as a concert scene in a game. The mobile game object may include a dynamic object or a temporarily static object, which may be a character, an animal or the like, and may be a preset object in the game or a game object controlled by a user player.
Specifically, when the target virtual scene is a concert scene, the step of determining the current time point of the mobile game object in the target virtual scene includes:
determining a current playing node of current playing audio in the concert scene;
and taking the current playing node as the current time point of the mobile game object in the concert scene.
The current playing audio may be a song or piece of music from a local library or made by the user, or an online audio resource. The playing node mainly refers to the playing progress of the audio, which may be the ratio of the played duration to the total playing duration. The playing progress can generally be adjusted by the user as required; that is, before determining the current playing node of the current playing audio in the concert scene, the lighting effect presentation method may further include:
Displaying a playing operation interface corresponding to the current playing audio;
Acquiring a progress adjustment operation for the current playing audio through the playing operation interface;
and adjusting the playing node of the current playing audio according to the progress adjusting operation.
The playing operation interface may display content related to the current playing audio, such as basic information (name, singer, composer and the like), a progress control bar, and a play or pause button. The progress adjustment operation may be the user dragging the slider on the progress control bar, by touch, voice, gesture or the like, to move forward or backward or to pause, so as to change the playing progress of the audio.
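As a concrete illustration of the playing node described above, here is a minimal sketch that computes the node as the ratio of played duration to total duration; the function name and the clamping behaviour are assumptions for illustration, not taken from the patent:

```python
def playing_node(elapsed_seconds: float, total_seconds: float) -> float:
    """Playing progress as the ratio of played duration to total duration."""
    if total_seconds <= 0:
        raise ValueError("audio must have a positive total duration")
    return min(max(elapsed_seconds / total_seconds, 0.0), 1.0)

# e.g. 60 s into a 180 s song gives node 1/3, usable as the current time point
```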
S102, determining a target illumination probe from a plurality of preset illumination probes according to the current position.
The illumination probes (preset illumination probes and target illumination probes) are implemented based on a light probe technique and are mainly used to provide high-quality lighting for moving objects in a scene; the technique provides a way to capture, store and use information about light passing through empty space. Since the closer an illumination probe is to the mobile game object, the more pronounced the light-receiving effect on the object, the target illumination probe is usually the illumination probe closest to the mobile game object. Of course, to represent the object's light reception as faithfully as possible, there may also be multiple target illumination probes, for example all illumination probes whose distance to the mobile game object falls within a threshold range.
In an embodiment, the step S102 may specifically include:
Acquiring the arrangement position of each preset illumination probe in a plurality of preset illumination probes;
Determining a distance value between each of the arrangement positions and the current position;
and taking the preset illumination probe corresponding to the distance value with the smallest value as a target illumination probe.
The arrangement positions and the number of the preset illumination probes are usually set in advance; the user can set them according to the positions of the virtual light sources (such as virtual stage lamps) in the scene and the main appearance area of the moving object.
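A minimal sketch of this nearest-probe selection, assuming probe arrangement positions and the current position are 3D coordinates; the names are illustrative, not from the patent:

```python
import math

def select_target_probe(probe_positions, current_position):
    """Return the index of the preset illumination probe whose arrangement
    position is closest to the mobile game object's current position."""
    return min(range(len(probe_positions)),
               key=lambda i: math.dist(probe_positions[i], current_position))
```

A threshold-based variant, as mentioned above, would instead return every probe whose distance to the current position falls within the threshold range.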
S103, determining target illumination information according to the current time point and the target illumination probe.
The target illumination information refers to the light information of all special effect lights at the current time point stored in the target illumination probe; the light information may be stored in the form of frames.
In an embodiment, the step S103 may specifically include:
acquiring an illumination information sequence corresponding to the target illumination probe to obtain a target illumination information sequence, wherein the target illumination information sequence comprises a plurality of frames of illumination information arranged according to a time sequence;
And acquiring the illumination information corresponding to the current time point from the illumination information sequence, and taking the acquired illumination information as target illumination information.
The illumination information sequence mainly comprises multiple frames of illumination information that change with the sampling time points; each sampling time point corresponds to one frame of illumination information, and the sequence corresponds to the special effect light time-sequence asset of the target illumination probe. The current time point and the sampling time points may be the time recorded by a system timer after the game starts, and for a concert scene may be a song playing node, such as one third or three tenths of the way through the song. When a sampling time point identical to the current time point exists, the illumination information of that sampling time point may be taken as the target illumination information; when it does not exist, the illumination information of the last sampling time point before the current time point may be taken as the target illumination information.
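A minimal sketch of this time-indexed lookup, assuming each sequence is stored as (sampling time point, illumination frame) pairs sorted by time; the function name and data layout are illustrative assumptions, not taken from the patent:

```python
import bisect

def lookup_illumination(sequence, current_time):
    """Return the frame of the latest sampling time point at or before
    current_time; sequence is a time-sorted list of (time, frame) pairs."""
    times = [t for t, _ in sequence]                 # ascending sampling time points
    i = bisect.bisect_right(times, current_time) - 1
    if i < 0:
        return None                                  # current time precedes all samples
    return sequence[i][1]                            # exact match or last frame before
```

In practical application, the illumination information sequence of each preset illumination probe needs to be prepared in advance; that is, before the step of acquiring the illumination information sequence corresponding to the target illumination probe, the lighting effect presentation method may further include: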
1-1, acquiring the position of at least one virtual light source in the target virtual scene and special effect light information sent by each virtual light source at a plurality of sampling time points.
The sampling time points and the special effect light information are generally set in advance; the sampling time points may include a plurality of time points at fixed intervals and/or the nodes at which the special effect light changes. The special effect light information may include parameters such as the color value, luminous intensity, beam size and illumination angle (mainly for rotatable virtual light sources) of the special effect light.
It should be noted that, for a concert scene, different audio may be given different sampling time points and special effect light information. In general, the interval between sampling time points is closely related to the sampling frequency: the higher the sampling frequency, the shorter the interval between adjacent sampling time points and the faster the special effect light changes; and the same audio may correspond to different special effect light information at different sampling time points. The sampling frequency (i.e., the change frequency of the special effect light) and the special effect colors can be set to match the rhythm and style of the music. For example, for heavy styles such as rock, the special effect light may change faster overall and the colors may be brighter; for light styles such as ballads, the special effect light may change more slowly and the colors may be softer. Within a single piece of music, the change frequency can be further adjusted to the rhythm, for example faster in fast-paced passages and slower in slow passages.
And 1-2, determining a plurality of preset illumination probes and arrangement positions of each preset illumination probe according to the position of the virtual light source and the target virtual scene.
There may be multiple virtual light sources, mainly virtual lamps simulating special effect lighting in a real stage scene, such as virtual stage lamps simulating light beams, neon lights and other effects in a concert scene. The number and arrangement positions of the preset illumination probes may be designed manually: in general, more preset illumination probes may be placed where the virtual light sources are dense or in the main activity area of the moving object, and fewer where the virtual light sources are sparse or the moving object rarely goes. For example, referring to fig. 3, in a concert simulation game, more preset illumination probes may be placed in the densely lit stage region or main activity area M, while few or even no preset illumination probes may be placed in the region without stage lamps or in the remaining activity area N.
The number and arrangement positions of the preset illumination probes can also be designed automatically by a deep learning model. For example, for each scene type involving special effect light, a corresponding illumination probe learning model can be trained in advance on the virtual light sources and activity areas of a large number of sample scenes; the actual number and arrangement positions of the illumination probes are then determined by the learning model from the layout of the specific scene.
And 1-3, generating an illumination information sequence of each preset illumination probe according to the special effect light information and the arrangement positions.
For a concert scene the special effect light usually changes continuously: the virtual stage lamps active at different times may differ, and the same virtual stage lamp may work differently at different times (for example in the special effect light it emits or in its rotation angle). Therefore, according to the designed arrangement positions of the virtual stage lamps and the way their special effect light changes, the special effect light time-sequence asset (illumination information sequence) of each illumination probe can be produced, that is, a record of how the special effect light received by each illumination probe changes over time.
At this time, the step of acquiring the illumination information sequence corresponding to the target illumination probe specifically includes: acquiring the illumination information sequence corresponding to the target illumination probe from all the generated illumination information sequences.
Of course, since different audio corresponds to different special effect light information, after the special effect light time-sequence asset of each illumination probe has been produced (i.e., the association between illumination probe and asset has been established), an association between each asset and its audio must also be established, so that the corresponding asset can be obtained from the target audio and the target illumination probe. That is, the step of obtaining the illumination information sequence corresponding to the target illumination probe from all the generated illumination information sequences further includes: obtaining the illumination information sequence corresponding to both the target illumination probe and the target audio from all the generated illumination information sequences. In actual operation, the user can select the audio to be sung, from a local library or online, as the target audio. In one embodiment, the steps 1-3 may include:
1-3-1. Determining the cube box map of each preset illumination probe at each sampling time point according to the special effect light information and the arrangement positions, wherein each preset illumination probe corresponds to different cube box maps at different sampling time points.
A cube box (cubemap) map is essentially a 3D texture composed of 6 individual two-dimensional textures, each of which is one face of the cube box. The center point of the cube box is the arrangement position of the preset illumination probe, so the cubemap describes the lighting around a cube centered on that probe. For example, referring to fig. 4, fig. 4 shows the cubemap of one preset illumination probe: it comprises 6 faces A1 to A6 in total, each face has its corresponding texture pixels, and the 6 faces can be assembled into a cube. A cubemap is sampled by taking a 3D vector (e.g., (s, t, r)) as the texture coordinate; the vector is used purely as a direction vector, and OpenGL (Open Graphics Library) returns, as the sampling result, the texture pixel that the direction vector hits on the surface of the cube box.
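The face selection behind that lookup can be sketched as follows: the face is chosen by the direction vector's largest-magnitude component, and the other two components become coordinates on that face. The sign table follows the standard OpenGL cubemap convention; this is an illustrative reimplementation, not code from the patent:

```python
def cubemap_sample_coords(s, t, r):
    """Map a direction vector (s, t, r) to a face index (+X, -X, +Y, -Y,
    +Z, -Z as 0..5) and (u, v) in [0, 1]^2, per the OpenGL convention."""
    ax, ay, az = abs(s), abs(t), abs(r)
    if ax >= ay and ax >= az:                  # X face dominates
        face, ma = (0, ax) if s > 0 else (1, ax)
        sc, tc = (-r, -t) if s > 0 else (r, -t)
    elif ay >= az:                             # Y face dominates
        face, ma = (2, ay) if t > 0 else (3, ay)
        sc, tc = (s, r) if t > 0 else (s, -r)
    else:                                      # Z face dominates
        face, ma = (4, az) if r > 0 else (5, az)
        sc, tc = (s, -t) if r > 0 else (-s, -t)
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```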
And 1-3-2, performing spherical harmonic sampling processing on each cube box map to obtain a corresponding spherical harmonic illumination information set.
And 1-3-3, generating illumination information corresponding to the corresponding cube box map according to the spherical harmonic illumination information set.
Spherical harmonic sampling is based on spherical harmonic lighting. In general, spherical harmonic lighting replaces the ordinary lighting equation with a new one in which the relevant quantities are projected into frequency space using the spherical harmonic basis functions and represented by coefficients. That is, spherical harmonic sampling amounts to transforming, with the new lighting equation, each texture pixel of the cubemap obtained by the conventional lighting equation to obtain spherical harmonic illumination information, and all the spherical harmonic illumination information belonging to the same cubemap is integrated, based on the positions of the texture pixels, into one frame of illumination information.
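A minimal sketch of one common realization of this step: each cubemap texel is treated as a directional radiance sample and projected onto the first nine (order-2) real spherical harmonic basis functions. The basis constants are the standard ones; the sampling interface is an assumption, not taken from the patent:

```python
def sh_basis_9(x, y, z):
    """First 9 real SH basis functions evaluated at unit direction (x, y, z)."""
    return [
        0.282095,                        # Y(0, 0)
        0.488603 * y,                    # Y(1,-1)
        0.488603 * z,                    # Y(1, 0)
        0.488603 * x,                    # Y(1, 1)
        1.092548 * x * y,                # Y(2,-2)
        1.092548 * y * z,                # Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),  # Y(2, 0)
        1.092548 * x * z,                # Y(2, 1)
        0.546274 * (x * x - y * y),      # Y(2, 2)
    ]

def project_cubemap_to_sh(samples):
    """samples: iterable of ((x, y, z), radiance, solid_angle) over the
    cubemap's texels; returns 9 SH coefficients approximating the light."""
    coeffs = [0.0] * 9
    for (x, y, z), radiance, d_omega in samples:
        for i, basis in enumerate(sh_basis_9(x, y, z)):
            coeffs[i] += radiance * basis * d_omega
    return coeffs
```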
1-3-4, Generating an illumination information sequence according to the illumination information corresponding to the same preset illumination probe at different sampling time points.
The illumination information sequences are obtained by assembling all the illumination information of the same preset illumination probe in the order of the sampling time points; each preset illumination probe corresponds to one illumination information sequence.
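Assembling the sequences can then be sketched as below, assuming each generated frame carries its probe id and sampling time point; the container layout is an illustrative assumption:

```python
from collections import defaultdict

def build_sequences(frames):
    """frames: iterable of (probe_id, sampling_time, illumination_frame).
    Returns one time-ordered illumination information sequence per probe."""
    sequences = defaultdict(list)
    for probe_id, sampling_time, frame in frames:
        sequences[probe_id].append((sampling_time, frame))
    for seq in sequences.values():
        seq.sort(key=lambda item: item[0])     # order by sampling time point
    return dict(sequences)
```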
S104, carrying out illumination effect presentation on the mobile game object according to the target illumination information.
In an embodiment, the step S104 may specifically include:
determining a virtual object model corresponding to the mobile game object;
Determining light receiving feedback information of the virtual object model after receiving special effect light according to the target light information;
and displaying the mobile game object in the target virtual scene according to the light receiving feedback information and the virtual object model.
The virtual object model is usually a three-dimensional model, and the light-receiving feedback information mainly includes the color, brightness and coverage of the special effect light presented on the object's surface. The final display image of the mobile game object in the game screen can be determined from the light-receiving feedback information together with the display information of the virtual object model.
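One way to derive the light-receiving feedback from a frame of spherical harmonic illumination is to reconstruct the incoming light along each surface normal of the virtual object model from the nine coefficients. A minimal sketch, reusing sh_basis_9 from the sketch above; the shading model itself is an assumption, not specified by the application:

```python
def shade_vertex(coeffs, normal):
    """Reconstruct the SH-encoded light along a unit surface normal; with one
    coefficient set per RGB channel this yields the tint applied to the model."""
    x, y, z = normal
    return sum(c * b for c, b in zip(coeffs, sh_basis_9(x, y, z)))
```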
As can be seen from the above, the lighting effect presentation method provided by this embodiment determines the current time point and current position of the mobile game object in the target virtual scene, determines the target illumination probe from a plurality of preset illumination probes according to the current position, determines the target illumination information according to the current time point and the target illumination probe, and presents the lighting effect on the mobile game object according to the target illumination information. In this way, the light-receiving effect of various special effect lights on an object can be vividly simulated in the concert scene, the illusion that stage characters are unlit is avoided, the stage performance is enriched, and the stage display effect is improved.
On the basis of the above embodiment, the present embodiment further describes an illumination effect presenting method by taking a target virtual scene as a concert scene as an example. Referring to fig. 5, a lighting effect presenting method includes the following steps:
S201, acquiring the position of at least one virtual light source in a concert scene and special effect light information sent by each virtual light source at a plurality of sampling time points.
S202, determining a plurality of preset illumination probes and arrangement positions of each preset illumination probe according to the position of the virtual light source and the concert scene.
For example, in fig. 6, a total of 7 stage lamps need to be arranged in the virtual concert scene; the stage lamps may include top-row lamps, footlights, floodlights and the like, and the main activity area is the region M around the microphone. For such a scene, n preset illumination probes Q1 to Qn may be arranged, most of them concentrated in region M.
S203, determining the cube box map of each preset illumination probe at each sampling time point according to the special effect light information and the arrangement positions, wherein each preset illumination probe corresponds to different cube box maps at different sampling time points.
S204, carrying out spherical harmonic sampling processing on each cube box map to obtain a corresponding spherical harmonic illumination information set, generating illumination information corresponding to the corresponding cube box map according to the spherical harmonic illumination information set, and then generating an illumination information sequence according to the illumination information corresponding to the same preset illumination probe at different sampling time points.
For example, for the preset illumination probe Q5, a cubemap may be generated at each sampling time point (for example, t1 to tm); that is, m cubemaps need to be generated for each preset illumination probe, and n x m cubemaps in total, from which n illumination information sequences are generated.
S205, determining a current playing node of the current playing audio in the concert scene, taking the current playing node as a current time point, and determining the current position of the mobile game object in the concert scene.
S206, determining a distance value between each arrangement position and the current position, and taking the preset illumination probe corresponding to the distance value with the smallest value as a target illumination probe.
For example, suppose a mobile game object A appears in the concert scene. The user can control mobile game object A to move around the concert scene with the function keys on the display interface. As object A keeps moving, special effect light (such as a light beam) can be presented on it in real time, and the corresponding target illumination probe changes accordingly: at time t3, object A has moved to position a, and the nearest probe, Q9, is obtained as the target illumination probe; at time t4, it has moved to position b, and the target illumination probe Q3 is obtained.
S207, acquiring the illumination information sequence corresponding to the target illumination probe to obtain a target illumination information sequence, and acquiring the illumination information corresponding to the current time point from the illumination information sequence to serve as target illumination information.
S208, carrying out illumination effect presentation on the mobile game object according to the target illumination information.
For example, if the target illumination probe at time t3 is Q9 and the target illumination information frame obtained is h3, the display image of mobile game object A in the concert scene can be determined by combining the information of h3 with the display information of object A, for example part of object A's body surface changing from natural skin color to red under the light captured by Q9.
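Tying steps S205 to S208 together, a per-frame runtime update might look like the following sketch; every name here is an illustrative assumption built on the earlier sketches, not an interface defined by the patent:

```python
def update_object_lighting(audio, object_pos, probe_positions, sequences):
    """One frame of the runtime loop: take the current time point from the
    playing audio, find the nearest probe, and look up its illumination frame."""
    node = playing_node(audio.elapsed, audio.total)        # current time point
    probe = select_target_probe(probe_positions, object_pos)
    frame = lookup_illumination(sequences[probe], node)    # target illumination info
    return frame                                           # fed to the shading step
```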
On the basis of the foregoing embodiments, this embodiment will be further described from the perspective of a lighting effect presenting apparatus, and referring to fig. 7, fig. 7 specifically describes a lighting effect presenting apparatus provided by an embodiment of the present application, which may include: a first determination module 10, a second determination module 20, a third determination module 30, and a presentation module 40, wherein:
(1) First determination module 10
A first determining module 10 is configured to determine a current point in time and a current position of the mobile game object in the target virtual scene.
The target virtual scene mainly includes virtual stage scenes involving special effect lighting, such as a concert scene in a game. The mobile game object may include a dynamic object or a temporarily static object, which may be a character, an animal or the like, and may be a preset object in the game or a game object controlled by a user player.
In some embodiments, when the target virtual scene is a concert scene, the first determining module 10 is specifically configured to:
determining a current playing node of current playing audio in the concert scene;
and taking the current playing node as the current time point of the mobile game object in the concert scene.
The current playing audio may be a song or piece of music from a local library or made by the user, or an online audio resource. The playing node mainly refers to the playing progress of the audio, which may be the ratio of the played duration to the total playing duration. The playing progress may be adjusted by the user as required; that is, referring to fig. 8, the lighting effect presenting apparatus may further include an adjusting module 50, configured to:
before the first determining module 10 determines a current playing node of the current playing audio in the concert scene, displaying a playing operation interface corresponding to the current playing audio;
Acquiring a progress adjustment operation for the current playing audio through the playing operation interface;
and adjusting the playing node of the current playing audio according to the progress adjusting operation.
The playing operation interface may display content related to the current playing audio, such as basic information (name, singer, composer and the like), a progress control bar, and a play or pause button. The progress adjustment operation may be the user dragging the slider on the progress control bar, by touch, voice, gesture or the like, to move forward or backward or to pause, so as to change the playing progress of the audio.
(2) The second determination module 20
A second determining module 20 for determining a target illumination probe from a plurality of preset illumination probes according to the current position.
The illumination probes (preset illumination probes and target illumination probes) are implemented based on a light probe technique and are mainly used to provide high-quality lighting for moving objects in a scene; the technique provides a way to capture, store and use information about light passing through empty space. Since the closer an illumination probe is to the mobile game object, the more pronounced the light-receiving effect on the object, the target illumination probe is usually the illumination probe closest to the mobile game object. Of course, to represent the object's light reception as faithfully as possible, there may also be multiple target illumination probes, for example all illumination probes whose distance to the mobile game object falls within a threshold range.
In one embodiment, the second determining module 20 is specifically configured to:
Acquiring the arrangement position of each preset illumination probe in a plurality of preset illumination probes;
Determining a distance value between each of the arrangement positions and the current position;
and taking the preset illumination probe corresponding to the distance value with the smallest value as a target illumination probe.
The arrangement positions and the number of the preset illumination probes are usually set in advance; the user can set them according to the positions of the virtual light sources (such as virtual stage lamps) in the scene and the main appearance area of the moving object.
(3) Third determination module 30
A third determination module 30 for determining target illumination information based on the current point in time and the target illumination probe.
The target illumination information refers to the light information of all special effect lights at the current time point stored in the target illumination probe; the light information may be stored in the form of frames.
In one embodiment, the third determining module 30 is specifically configured to:
Acquiring an illumination information sequence corresponding to the target illumination probe to obtain a target illumination information sequence, wherein the target illumination information sequence comprises a plurality of frames of illumination information arranged according to a time sequence;
And acquiring the illumination information corresponding to the current time point from the target illumination information sequence, and taking the acquired illumination information as target illumination information.
The illumination information sequence mainly comprises multiple frames of illumination information that change with the sampling time points; each sampling time point corresponds to one frame of illumination information, and the sequence corresponds to the special effect light time-sequence asset of the target illumination probe. The current time point and the sampling time points may be the time recorded by a system timer after the game starts, and for a concert scene may be a song playing node, such as one third or three tenths of the way through the song. When a sampling time point identical to the current time point exists, the illumination information of that sampling time point may be taken as the target illumination information; when it does not exist, the illumination information of the last sampling time point before the current time point may be taken as the target illumination information.
In practical application, the illumination information sequence of each preset illumination probe needs to be prepared in advance; that is, referring again to fig. 8, the lighting effect presenting apparatus further includes a generating module 60, configured to:
1-1. Before the third determining module 30 obtains the illumination information sequence corresponding to the target illumination probe, the position of at least one virtual light source in the target virtual scene and the special effect light information sent by each virtual light source at a plurality of sampling time points are acquired.
The sampling time points and the special effect light information are generally set in advance; the sampling time points may include a plurality of time points at fixed intervals and/or the nodes at which the special effect light changes. The special effect light information may include parameters such as the color value, luminous intensity, beam size and illumination angle (mainly for rotatable virtual light sources) of the special effect light.
It should be noted that, for a concert scene, different audio may be given different sampling time points and special effect light information. In general, the interval between sampling time points is closely related to the sampling frequency: the higher the sampling frequency, the shorter the interval between adjacent sampling time points and the faster the special effect light changes; and the same audio may correspond to different special effect light information at different sampling time points. The sampling frequency (i.e., the change frequency of the special effect light) and the special effect colors can be set to match the rhythm and style of the music. For example, for heavy styles such as rock, the special effect light may change faster overall and the colors may be brighter; for light styles such as ballads, the special effect light may change more slowly and the colors may be softer. Within a single piece of music, the change frequency can be further adjusted to the rhythm, for example faster in fast-paced passages and slower in slow passages.
And 1-2, determining a plurality of preset illumination probes and arrangement positions of each preset illumination probe according to the position of the virtual light source and the target virtual scene.
There may be multiple virtual light sources, mainly virtual lamps simulating special effect lighting in a real stage scene, such as virtual stage lamps simulating light beams, neon lights and other effects in a concert scene. The number and arrangement positions of the preset illumination probes may be designed manually: in general, more preset illumination probes may be placed where the virtual light sources are dense or in the main activity area of the moving object, and fewer where the virtual light sources are sparse or the moving object rarely goes. For example, referring to fig. 3, in a concert simulation game, more preset illumination probes may be placed in the densely lit stage region or main activity area M, while few or even no preset illumination probes may be placed in the region without stage lamps or in the remaining activity area N.
The number and arrangement positions of the preset illumination probes can also be automatically designed through a deep learning model, for example, for different scene types related to special effect light, a user can train a corresponding illumination probe learning model in advance based on virtual light sources and active area conditions in a large number of sample scenes, and then the number and arrangement positions of the actual illumination probes are determined through the learning model and scene arrangement conditions corresponding to the specific scenes.
And 1-3, generating an illumination information sequence of each preset illumination probe according to the special effect light information and the arrangement positions.
For a concert scene the special effect light usually changes continuously: the virtual stage lamps active at different times may differ, and the same virtual stage lamp may work differently at different times (for example in the special effect light it emits or in its rotation angle). Therefore, according to the designed arrangement positions of the virtual stage lamps and the way their special effect light changes, the special effect light time-sequence asset (illumination information sequence) of each illumination probe can be produced, that is, a record of how the special effect light received by each illumination probe changes over time.
At this time, the third determining module 30 is specifically configured to: acquire the illumination information sequence corresponding to the target illumination probe from all the generated illumination information sequences.
Of course, since different audio corresponds to different special effect light information, after the special effect light time-sequence asset of each illumination probe has been produced (i.e., the association between illumination probe and asset has been established), an association between each asset and its audio must also be established, so that the corresponding asset can be obtained from the target audio and the target illumination probe. That is, the step of obtaining the illumination information sequence corresponding to the target illumination probe from all the generated illumination information sequences further includes: obtaining the illumination information sequence corresponding to both the target illumination probe and the target audio from all the generated illumination information sequences. In actual operation, the user can select the audio to be sung, from a local library or online, as the target audio.
In one embodiment, in performing the above steps 1-3, the generating module 60 may specifically be configured to:
1-3-1. Determining the cube box map of each preset illumination probe at each sampling time point according to the special effect light information and the arrangement positions, wherein each preset illumination probe corresponds to different cube box maps at different sampling time points.
The cube box (Cubemap) map is essentially a 3D texture map composed of six individual two-dimensional textures, each of which is one face of the cube box. The center point of the cube box is the arrangement position of the preset illumination probe, so the cube box map describes the illumination of a cube centered on that probe. For example, referring to fig. 4, which shows the cube box map of one preset illumination probe, the map includes six faces A1-A6 in total; each face has corresponding texture pixels, and the six faces can be assembled into a cube. The cube box map is sampled by using a 3D vector (e.g., (s, t, r)) as the texture coordinate; the 3D vector is used only as a direction vector, and OpenGL (Open Graphics Library) takes the texture pixel that the direction vector hits on the surface of the cube box as the sampling result.
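The face-selection rule can be sketched as follows. This is a simplified illustration of the standard cubemap lookup convention (the largest-magnitude component selects the face, the remaining two components become face coordinates), not code from the embodiment; `faces` is assumed to map six face keys to 2D texel arrays:

```python
def sample_cubemap(faces, s, t, r):
    """Illustrative cubemap lookup from a direction vector (s, t, r)."""
    ax, ay, az = abs(s), abs(t), abs(r)
    if ax >= ay and ax >= az:            # X-major direction
        face, ma = ('+x', ax) if s > 0 else ('-x', ax)
        u, v = (-r, -t) if s > 0 else (r, -t)
    elif ay >= az:                       # Y-major direction
        face, ma = ('+y', ay) if t > 0 else ('-y', ay)
        u, v = (s, r) if t > 0 else (s, -r)
    else:                                # Z-major direction
        face, ma = ('+z', az) if r > 0 else ('-z', az)
        u, v = (s, -t) if r > 0 else (-s, -t)
    uu = (u / ma + 1.0) / 2.0            # map [-1, 1] to [0, 1]
    vv = (v / ma + 1.0) / 2.0
    rows, cols = len(faces[face]), len(faces[face][0])
    return faces[face][min(int(vv * rows), rows - 1)][min(int(uu * cols), cols - 1)]
```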
1-3-2. Performing spherical harmonic sampling processing on each cube box map to obtain a corresponding spherical harmonic illumination information set.
1-3-3. Generating illumination information corresponding to the corresponding cube box map according to the spherical harmonic illumination information set.
Spherical harmonic sampling is implemented based on the spherical harmonic lighting technique. Generally, spherical harmonic lighting replaces the ordinary lighting equation with a new one, in which the relevant quantities are projected into frequency space using spherical harmonic basis functions and represented by coefficients. In other words, spherical harmonic sampling is equivalent to converting, with the new lighting equation, each texture pixel on the cube box map obtained by the conventional lighting equation into spherical harmonic illumination information; all the spherical harmonic illumination information corresponding to the same cube box map is then integrated into one frame of illumination information based on the position of each texture pixel.
1-3-4. Generating an illumination information sequence according to the illumination information corresponding to the same preset illumination probe at different sampling time points.
Each illumination information sequence is obtained by integrating all the illumination information of the same preset illumination probe in the order of the sampling time points, and each preset illumination probe corresponds to one illumination information sequence.
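Steps 1-3-2 to 1-3-4 can be illustrated with a low-order sketch. The following projects each texel of a probe's cube box map onto spherical harmonic bands 0 and 1 (4 coefficients per color channel) and stacks one frame per sampling time point; the texel format and the restriction to two bands are assumptions of this sketch:

```python
def sh_basis(d):
    # Band-0/1 real spherical harmonic basis; d is assumed to be a unit vector.
    x, y, z = d
    return [0.282095,            # Y(0,0)
            0.488603 * y,        # Y(1,-1)
            0.488603 * z,        # Y(1,0)
            0.488603 * x]        # Y(1,1)

def project_cubemap_to_sh(texels):
    """`texels` is a list of (direction, rgb, solid_angle) samples covering
    the six faces; returns 4 SH coefficients per color channel."""
    coeffs = [[0.0, 0.0, 0.0] for _ in range(4)]
    for direction, rgb, d_omega in texels:
        basis = sh_basis(direction)
        for i in range(4):
            for c in range(3):
                coeffs[i][c] += rgb[c] * basis[i] * d_omega
    return coeffs

def build_sequence(cubemaps_over_time):
    # Step 1-3-4: one frame of illumination information per sampling time point.
    return [project_cubemap_to_sh(cubemap) for cubemap in cubemaps_over_time]
```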
(4) Presentation module 40
The presentation module 40 is configured to present the lighting effect for the mobile game object according to the target illumination information.
In one embodiment, the presentation module 40 is specifically configured to:
determining a virtual object model corresponding to the mobile game object;
determining light receiving feedback information of the virtual object model after it receives the special effect light, according to the target illumination information;
and displaying the mobile game object in the target virtual scene according to the light receiving feedback information and the virtual object model.
The virtual object model is usually a three-dimensional model, and the light receiving feedback information mainly includes the color, brightness, coverage range, and the like of the special effect light presented on the surface of the object. The display image of the mobile game object finally presented in the game screen can be determined according to the light receiving feedback information and the display information of the virtual object model.
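As an illustration of how target illumination information in spherical harmonic form could yield light receiving feedback, the following evaluates one frame of SH coefficients along a surface normal and modulates the model's base color. The diffuse convolution weights are omitted for brevity, so this is a sketch rather than a full shading model:

```python
def sh_basis(d):
    # Same band-0/1 basis as in the projection sketch; d is a unit vector.
    x, y, z = d
    return [0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x]

def shade_vertex(sh_coeffs, normal, albedo):
    """Evaluate the target frame's SH illumination along `normal` and
    modulate the virtual object model's base color `albedo` (RGB)."""
    basis = sh_basis(normal)
    color = []
    for c in range(3):                   # R, G, B channels
        irradiance = sum(sh_coeffs[i][c] * basis[i] for i in range(4))
        color.append(max(irradiance, 0.0) * albedo[c])
    return color
```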
In implementation, each of the above units may be implemented as an independent entity, or may be combined arbitrarily and implemented as one or several entities. For the specific implementation of each unit, reference may be made to the foregoing method embodiments, which are not repeated herein.
As can be seen from the foregoing, in the lighting effect presentation apparatus provided in this embodiment, the first determining module 10 determines the current time point and the current position of the mobile game object in the target virtual scene, the second determining module 20 determines the target illumination probe from the plurality of preset illumination probes according to the current position, the third determining module 30 then determines the target illumination information according to the current time point and the target illumination probe, and the presentation module 40 presents the lighting effect for the mobile game object according to the target illumination information. In this way, the light-receiving effect of various special effect lights on objects can be simulated realistically in a concert scene, the unnatural impression that stage characters receive no light is avoided, the stage performance content is enriched, and the stage display effect is improved.
Correspondingly, an embodiment of the present application further provides a computer device, which may be a terminal or a server, where the terminal may be a smart phone, a tablet computer, a notebook computer, a touch screen device, a game console, a personal computer, a personal digital assistant (PDA), or other equipment. Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. It will be appreciated by those skilled in the art that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The processor 401 is the control center of the computer device 400, and connects the various parts of the entire computer device 400 through various interfaces and lines. By running or loading software programs and/or modules stored in the memory 402 and invoking data stored in the memory 402, it performs the various functions of the computer device 400 and processes data, thereby monitoring the computer device 400 as a whole.
In the embodiment of the present application, the processor 401 in the computer device 400 loads instructions corresponding to the processes of one or more application programs into the memory 402, and the processor 401 runs the application programs stored in the memory 402 to implement the following functions:
determining the current time point and the current position of the mobile game object in the target virtual scene;
determining a target illumination probe from a plurality of preset illumination probes according to the current position;
determining target illumination information according to the current time point and the target illumination probe;
and presenting the lighting effect for the mobile game object according to the target illumination information.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated herein.
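A hedged end-to-end sketch of these four functions, executed once per frame, might look as follows; the sampling interval, the `sequences` layout (as in the earlier sketch), and the `game_object` interface are all assumptions of the sketch, not part of the embodiment:

```python
import math

SAMPLE_INTERVAL = 0.1    # assumed spacing of sampling time points, in seconds

def present_lighting(game_object, sequences, probes, audio_id, now):
    """`probes` is a list of (probe_id, position); `now` is the current
    playing node in seconds; `game_object.position` and
    `game_object.apply_illumination` are hypothetical."""
    # Step 1: current time point and current position of the mobile game object.
    position = game_object.position
    # Step 2: the nearest preset illumination probe becomes the target probe.
    target_id = min(probes, key=lambda p: math.dist(p[1], position))[0]
    # Step 3: take the frame of the target sequence closest to the current time point.
    frames = sequences[(audio_id, target_id)]
    frame = frames[min(int(now / SAMPLE_INTERVAL), len(frames) - 1)]
    # Step 4: present the lighting effect using that frame.
    game_object.apply_illumination(frame)
```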
Optionally, as shown in fig. 9, the computer device 400 further includes: a touch display 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 9 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The touch display 403 may be used to display a graphical user interface and to receive operation instructions generated by the user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect the user's touch operations on or near it (such as operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that trigger the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 401, and it can also receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 401 to determine the type of the touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize the input and output functions; in some embodiments, however, they may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display 403 may also implement an input function as part of the input unit 406.
In the embodiment of the present application, the processor 401 executes the game application program to generate a picture of the virtual three-dimensional scene on the touch display screen 403, where the picture includes a graphical user interface, and the graphical user interface includes a second spatial orientation indicator on which a spatial orientation identifier corresponding to the target object is displayed, the spatial orientation identifier being used to indicate the orientation of the target object.
The touch display 403 may be used to present the picture of the virtual three-dimensional scene and the graphical user interface, and to receive operation instructions generated by the user acting on the graphical user interface.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device.
The audio circuit 405 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 405 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 405 receives and converts into audio data. The audio data is then processed by the processor 401 and sent, for example, via the radio frequency circuit 404 to another computer device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the computer device 400. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 407 may also include one or more of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 9, the computer device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform the steps of any of the lighting effect presentation methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
determining the current time point and the current position of the mobile game object in the target virtual scene;
determining a target illumination probe from a plurality of preset illumination probes according to the current position;
determining target illumination information according to the current time point and the target illumination probe;
and presenting the lighting effect for the mobile game object according to the target illumination information.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated herein.
Wherein the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Since the computer program stored in the storage medium can execute the steps of any lighting effect presentation method provided by the embodiments of the present application, it can achieve the beneficial effects of any of these methods; details are given in the previous embodiments and are not repeated herein.
The foregoing describes in detail a lighting effect presentation method, apparatus, storage medium, and computer device provided by the embodiments of the present application. Specific examples have been used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is intended only to help understand the method and core idea of the present application. Meanwhile, since those skilled in the art may make changes to the specific implementations and application scope according to the ideas of the present application, the contents of this description should not be construed as limiting the present application.

Claims (9)

1. A lighting effect presentation method, comprising:
determining the current time point and the current position of a mobile game object in a target virtual scene, wherein the current time point is the time recorded by a system timer after the game starts;
determining a target illumination probe from a plurality of preset illumination probes according to the current position;
acquiring an illumination information sequence corresponding to the target illumination probe to obtain a target illumination information sequence, wherein the target illumination information sequence comprises a plurality of frames of illumination information arranged according to a time sequence;
acquiring the illumination information corresponding to the current time point from the target illumination information sequence, and taking the acquired illumination information as target illumination information;
and presenting the lighting effect for the mobile game object according to the target illumination information.
2. The lighting effect presentation method according to claim 1, wherein the determining a target lighting probe from a plurality of preset lighting probes according to the current position comprises:
acquiring the arrangement position of each preset illumination probe in the plurality of preset illumination probes;
determining a distance value between each of the arrangement positions and the current position;
and taking the preset illumination probe corresponding to the smallest distance value as the target illumination probe.
3. The lighting effect presentation method according to claim 1, wherein the target virtual scene comprises a concert scene, and the determining the current time point of the mobile game object in the target virtual scene comprises:
determining a current playing node of current playing audio in the concert scene;
and taking the current playing node as the current time point of the mobile game object in the concert scene.
4. The lighting effect presentation method according to claim 3, wherein before the determining a current playing node of a current playing audio in the concert scene, the method further comprises:
displaying a playing operation interface corresponding to the current playing audio;
acquiring a progress adjustment operation for the current playing audio through the playing operation interface;
and adjusting the playing node of the current playing audio according to the progress adjusting operation.
5. The lighting effect presentation method according to claim 1, further comprising, before acquiring the lighting information sequence corresponding to the target lighting probe:
acquiring the position of at least one virtual light source in the target virtual scene and special effect light information sent by each virtual light source at a plurality of sampling time points;
determining a plurality of preset illumination probes and arrangement positions of each preset illumination probe according to the position of the virtual light source and the target virtual scene;
generating an illumination information sequence of each preset illumination probe according to the special effect light information and the arrangement positions;
the acquiring the illumination information sequence corresponding to the target illumination probe comprises: acquiring the illumination information sequence corresponding to the target illumination probe from all the generated illumination information sequences.
6. The lighting effect presentation method according to claim 5, wherein the generating an illumination information sequence of each preset illumination probe according to the special effect light information and the arrangement positions comprises:
generating a cube box map of each preset illumination probe at each sampling time point according to the special effect light information and the arrangement positions, wherein each preset illumination probe corresponds to different cube box maps at different sampling time points;
performing spherical harmonic sampling processing on each cube box map to obtain a corresponding spherical harmonic illumination information set;
generating single-frame illumination information corresponding to the corresponding cube box map according to the spherical harmonic illumination information set;
generating an illumination information sequence according to the illumination information corresponding to the same preset illumination probe at different sampling time points.
7. A lighting effect presentation device, comprising:
the first determining module is used for determining the current time point and the current position of the mobile game object in the target virtual scene, wherein the current time point is the time recorded by the system timer after the game starts;
the second determining module is used for determining a target illumination probe from a plurality of preset illumination probes according to the current position;
the third determining module is used for acquiring an illumination information sequence corresponding to the target illumination probe to obtain a target illumination information sequence, wherein the target illumination information sequence comprises a plurality of frames of illumination information arranged according to a time sequence, acquiring the illumination information corresponding to the current time point from the target illumination information sequence, and taking the acquired illumination information as the target illumination information;
and the presentation module is used for presenting the lighting effect for the mobile game object according to the target illumination information.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which is adapted to be loaded by a processor for performing the steps in the lighting effect presentation method as claimed in any one of claims 1-6.
9. A computer device, characterized in that the computer device comprises a memory in which a computer program is stored and a processor which performs the steps in the lighting effect presentation method according to any one of claims 1-6 by calling the computer program stored in the memory.
CN202110007953.0A 2021-01-05 2021-01-05 Lighting effect presentation method and device, storage medium and computer equipment Active CN112587915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110007953.0A CN112587915B (en) 2021-01-05 2021-01-05 Lighting effect presentation method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN112587915A CN112587915A (en) 2021-04-02
CN112587915B (en) 2024-07-09

Family

ID=75207345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110007953.0A Active CN112587915B (en) 2021-01-05 2021-01-05 Lighting effect presentation method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112587915B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299220B (en) * 2021-11-19 2024-11-15 腾讯科技(成都)有限公司 Lightmap data generation method, device, equipment, medium and program product
CN114862995B (en) * 2022-03-31 2025-07-15 北京智明星通科技股份有限公司 Data processing method, device, electronic device and storage medium
CN115063521B (en) * 2022-05-06 2025-09-12 网易(杭州)网络有限公司 Point light source effect processing method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101458823A (en) * 2008-12-19 2009-06-17 北京航空航天大学 Real-time lighting drawing method under virtual stage environment
CN108236783A (en) * 2018-01-09 2018-07-03 网易(杭州)网络有限公司 The method, apparatus of illumination simulation, terminal device and storage medium in scene of game

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6685326B2 (en) * 2001-06-08 2004-02-03 University Of Southern California Realistic scene lighting simulation
CN109712226A (en) * 2018-12-10 2019-05-03 网易(杭州)网络有限公司 The see-through model rendering method and device of virtual reality
CN110992466B (en) * 2019-12-05 2021-05-18 腾讯科技(深圳)有限公司 Illumination probe generation method and device, storage medium and computer equipment
CN111744183B (en) * 2020-07-02 2024-02-09 网易(杭州)网络有限公司 Illumination sampling method and device in game and computer equipment
CN111760277B (en) * 2020-07-06 2024-05-28 网易(杭州)网络有限公司 Illumination rendering method and device

Also Published As

Publication number Publication date
CN112587915A (en) 2021-04-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant