
WO2024255597A1 - Component generation method and apparatus in game scene, storage medium, and electronic device - Google Patents


Info

Publication number
WO2024255597A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
component
scene component
image
image area
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
PCT/CN2024/096124
Other languages
French (fr)
Chinese (zh)
Inventor
初小宇
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Publication of WO2024255597A1 publication Critical patent/WO2024255597A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655 Generating or modifying game content automatically by game devices or servers by importing photos, e.g. of the player
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/34 Graphical or visual programming
    • G06F8/36 Software reuse
    • G06F8/38 Creation or generation of source code for implementing user interfaces
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6009 Methods for processing data for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • A63F2300/69 Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F2300/695 Imported photos, e.g. of the player
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure relates to the field of computer graphics technology, and in particular to a component generation method in a game scene, a component generation device in a game scene, a computer-readable storage medium, and an electronic device.
  • a game scene is usually composed of a certain number of components (or models), each of which may be an object or a person in the game scene. Modeling and editing objects or people in the game scene is one of the main tasks in building a game scene.
  • modeling and editing in game scenes mainly rely on manual operations by relevant personnel.
  • the generation process of a single component generally includes manually editing the shape and adjusting the size, position, rendering color, texture, etc., to finally generate the required component. This method incurs high manpower and time costs and is inefficient.
  • the present disclosure provides a component generation method in a game scene, a component generation device in a game scene, a computer-readable storage medium, and an electronic device.
  • according to a first aspect, a method for generating components in a game scene is provided, comprising: displaying a graphical user interface provided by running a game program, and displaying a game editing scene to be edited and a plurality of scene component selection controls in the graphical user interface, the scene component selection controls being configured to generate corresponding scene components in the game editing scene in response to operation instructions; providing an image import entry in the graphical user interface, and accepting an initial image imported based on the image import entry; dividing the initial image into a plurality of image areas, and determining the scene components corresponding to the image areas and information of the scene components; and generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components corresponding to the image areas, to form a scene component combination corresponding to the initial image.
  • according to another aspect, a device for generating components in a game scene is provided, comprising: a graphical user interface processing module, configured to display a graphical user interface provided by running a game program, and to display a game editing scene to be edited and a plurality of scene component selection controls in the graphical user interface, the scene component selection controls being configured to generate corresponding scene components in the game editing scene in response to operation instructions; an information acquisition module, configured to provide an image import entry in the graphical user interface and accept an initial image imported based on the image import entry; a scene component determination module, configured to divide the initial image into a plurality of image areas and determine the scene components corresponding to the image areas and information of the scene components; and a component generation module, configured to generate the scene components corresponding to the image areas in the game editing scene according to the information of the scene components corresponding to the image areas, to form a scene component combination corresponding to the initial image.
  • a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the component generation method in the game scene of the first aspect and its possible implementations is implemented.
  • an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute, by executing the executable instructions, the component generation method in the game scene of the above-mentioned first aspect and its possible implementations.
  • FIG. 1 is a schematic diagram showing a game editing scene and scene component selection controls according to one of the exemplary embodiments.
  • FIG. 2 is a schematic diagram showing perspective settings of a game editing scene according to one of the exemplary embodiments.
  • FIG. 3A is a schematic diagram showing an observation perspective according to one of the exemplary embodiments.
  • FIG. 3B is a schematic diagram showing a game perspective according to one of the exemplary embodiments.
  • FIG. 4 is a flowchart showing a method for generating components in a game scene according to one of the exemplary embodiments.
  • FIG. 5 is a schematic diagram showing a step of importing an initial image according to one of the exemplary embodiments.
  • FIG. 6 shows a flowchart of determining a scene component and information of the scene component according to one of the exemplary embodiments.
  • FIG. 7 is a schematic diagram showing a generation queue according to one of the exemplary embodiments.
  • FIG. 8 is a schematic diagram showing partially generated scene components according to one of the exemplary embodiments.
  • FIG. 9 is a schematic diagram showing all generated scene components according to one of the exemplary embodiments.
  • FIG. 10 is a schematic diagram showing movement of a scene component combination according to one of the exemplary embodiments.
  • FIG. 11 is a schematic diagram showing scaling of a scene component combination according to one of the exemplary embodiments.
  • FIG. 12 is a schematic diagram showing rotation of a scene component combination according to one of the exemplary embodiments.
  • FIG. 13 shows a system architecture diagram of an operating environment according to one of the exemplary embodiments.
  • FIG. 14 is a schematic diagram showing the structure of a component generation device in a game scene according to one of the exemplary embodiments.
  • FIG. 15 is a schematic structural diagram of an electronic device according to one of the exemplary embodiments.
  • the picture wall is a component that can present image content.
  • in some approaches, the original picture is directly placed on the wall surface of a wall component in the game scene. When the player's in-game perspective is aimed at the wall, the original picture can be seen, thereby realizing the wall component as a picture wall.
  • however, if the picture is inconsistent with the style of the game scene, for example when the picture is a photo taken in the real world and the game scene is in an animation style, such a picture wall causes visual abruptness for the player; that is, the picture wall does not look like part of the game scene.
  • an exemplary embodiment of the present disclosure provides a component generation method in a game scene, which can improve the generation efficiency of components such as a picture wall.
  • a graphical user interface can be displayed by a terminal device, and the terminal device can be a mobile phone, a personal computer, a tablet computer, an intelligent wearable device, a game console, etc., which has a display function and can display a graphical user interface.
  • the graphical user interface may include a screen of the terminal device running an operating system, such as a desktop, a system setting interface, an application program interface, etc.
  • a game editing scene provided by the running game program can be displayed in the graphical user interface.
  • the game program may be a main game program, and a game scene editing function is provided in the main game program (such as a game editor built into the game program).
  • the game editing scene can be entered.
  • the game program may also be a game scene editing program associated with the main game program, such as a game editor that can run independently without relying on the main game program.
  • the user can choose to create a new game scene and edit it, or can choose to edit an existing game scene.
  • the game editing scene to be edited and multiple scene component selection controls can be displayed in the graphical user interface.
  • the game editing scene may include the background of the scene and the generated components.
  • the scene component selection controls may include controls such as "block component", "cylinder component", "semi-cylinder component", etc.
  • This exemplary embodiment supports players in customizing editing scenes. Therefore, the user herein may refer to game production staff (such as artists) of the game developer, or to a player.
  • a virtual camera may be provided in the game editing scene.
  • the virtual camera is a tool in the game program that simulates a real camera to shoot the game scene.
  • the virtual camera may be provided at any position in the game editing scene and shoot the game scene from any perspective. That is, the virtual camera may have any position in the game scene, and the position may be fixed or dynamically changed.
  • any number of virtual cameras may be provided in the game editing scene, and different virtual cameras may shoot different game scene images.
  • the game editing scene can present two different perspectives, namely the observation perspective and the game perspective.
  • the observation perspective refers to observing the game editing scene from a third-person perspective.
  • the user can directly control the virtual camera to move the perspective instead of controlling the game character in the game editing scene.
  • the game perspective refers to observing the game editing scene from a first-person perspective.
  • the user can control a certain game character in the game editing scene, and the game character can be bound to a virtual camera, that is, the positional relationship between the game character and the virtual camera is fixed, for example, the game character can be located at the focus of the virtual camera, and when the user controls the game character to move, the virtual camera moves synchronously, thereby moving the perspective.
  • a virtual joystick, upward or downward controls, etc. can be set in the game editing scene, and the user can move the virtual camera or move the game character by operating these controls.
  • the game program provides a plurality of different scene components, which may be presented in the form of the scene component selection controls shown in FIG. 1.
  • a scene component is a virtual model that constitutes a game scene, which can be an object, a person, or a partial object or person.
  • the scene components provided by the game program can include a basic scene component and a basic scene component combination.
  • a basic scene component refers to an indivisible scene component, which can be regarded as the smallest unit that constitutes a game scene.
  • a basic scene component can include a block component, a cuboid component, a cylindrical component, a spherical component, and the like.
  • a basic scene component combination is a scene component composed of multiple basic scene components.
  • a block component or a cuboid component can be combined on the circular surface of a cylindrical component to form a scene mechanism in the form of a roller, which is a combination of basic scene components.
  • the game program may come with a plurality of different scene components, which may be pre-configured by artists and stored in the game program, so that players can conveniently use these scene components to edit scenes.
  • the scene components can be pre-configured by the player, and the player can obtain the scene components that are not originally in the game program by modeling in the game editing scene or other editing interfaces.
  • the scene components configured by the player may include basic scene components and basic scene component combinations. For example, if the player sets a scene component as an indivisible whole when configuring it, the scene component is a basic scene component, otherwise it is a basic scene component combination.
  • the scene components configured by the player can be used only by the player himself, or they can be shared with other players.
  • one or more information such as the size, position, direction, color, texture, and shape of the scene components can be configured.
  • users can directly call the configured information, which is very convenient and efficient.
  • users can also adjust the configured information in the scene components, such as adjusting one or more of the above information to make it more in line with their needs and preferences.
  • FIG4 shows an exemplary process of a component generation method in a game scene, which may include the following steps S410 to S440:
  • Step S410: displaying a graphical user interface provided by running the game program, and displaying a game editing scene to be edited and a plurality of scene component selection controls in the graphical user interface, wherein the scene component selection controls are configured to generate corresponding scene components in the game editing scene in response to operation instructions;
  • Step S420: providing an image import entry in the graphical user interface, and accepting an initial image imported based on the image import entry;
  • Step S430: dividing the initial image into a plurality of image areas, and determining the scene components corresponding to the image areas and the information of the scene components;
  • Step S440: generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components corresponding to the image areas, to form a scene component combination corresponding to the initial image.
  • the scene component combination is a combined virtual model formed by multiple scene components, and the scene component combination itself can be regarded as a model.
  • the scene component combination is a virtual model used to present the initial image in the game scene.
  • presenting the initial image may be exact or approximate, and may also refer to presenting other images derived from the initial image, such as a sampled image or a target image.
  • the approximate presentation can refer to presenting a blurred pattern of the initial image.
  • the scene component combination has a corresponding relationship with the initial image, and the corresponding relationship is reflected in the fact that the two are the same or similar in pattern, and it is also reflected in the fact that the scene component combination is the component finally generated in response to the instruction to import the initial image.
  • the scene component combination has at least one surface that can present an initial image.
  • the scene component combination can be a wall or a wall-like model, such as a picture wall, a pixel image, etc.
  • the present disclosure does not specifically limit the appearance of the scene component combination.
  • the surface of the scene component combination used to present the initial image can be a flat plane, such as the scene component combination can be a rectangular parallelepiped, and its two surfaces with the largest surface area can be used to present the initial image.
  • the surface of the scene component combination used to present the initial image can be a non-flat surface, such as the coordinates of each scene component in the scene component combination on the normal axis of the surface are not exactly the same, presenting an effect that each scene component has a certain undulation on the surface, which can create a visual sense of three-dimensionality.
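As one hedged illustration of the non-flat variant described above, each scene component could be offset along the surface's normal axis by an amount derived from its color, so that the combined surface undulates. The brightness-to-offset mapping and the name `normal_offset` are assumptions for illustration, not the patent's method:

```python
# Hypothetical sketch: derive each scene component's offset along the
# presentation surface's normal axis from its color brightness, so that
# brighter areas protrude slightly and the surface looks three-dimensional.

def normal_offset(color, max_offset=0.2):
    """Map an (r, g, b) color to an offset in [0, max_offset] world units."""
    brightness = (color[0] + color[1] + color[2]) / (3 * 255)
    return round(brightness * max_offset, 4)
```

Any other per-component signal (e.g. a noise value) could be substituted; the point is only that the components' normal-axis coordinates are not all identical.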
  • the exemplary embodiments provide a solution for generating a scene component combination (i.e., a virtual model) by importing a corresponding image: the corresponding scene components and their information are determined according to the image areas of the imported initial image, and the scene components are then generated automatically to form the scene component combination. No manual operation or editing by the user is required, which greatly reduces manpower and time costs and improves the efficiency of component generation.
  • the user can import an initial image according to his own needs and preferences and finally obtain a scene component combination with the corresponding pattern; that is, the generated scene component combination presents the imported initial image. A virtual model in the form of a picture wall, a pixel image, etc. can thereby be obtained in the game scene, which enriches the game scene and meets the personalized needs of the user.
  • step S410 a graphical user interface provided by running the game program is displayed, and a game editing scene to be edited and multiple scene component selection controls are displayed in the graphical user interface.
  • the scene component selection controls are configured to generate corresponding scene components in the game editing scene in response to operation instructions.
  • the game editing scene to be edited can be a newly created game editing scene or a stored game editing scene.
  • the scene component selection controls can be as shown in FIG. 1, but are not limited to the scene component types shown in FIG. 1.
  • the corresponding scene component can be generated in the game editing scene.
  • the user can press and hold the "block component" control and drag it to the game editing scene to generate the corresponding block component in the game editing scene. Then, the user can also edit and adjust the size, position, direction, color, texture, shape, etc. of the block component.
  • in step S420, an image import entry is provided in the graphical user interface, and an initial image imported based on the image import entry is accepted.
  • the image import entry can be provided in the relevant game editing interface.
  • for example, the user can select the generator option to open the generator interface, as shown in FIG. 5, where an image import entry is provided.
  • the user can select the initial image based on the image import entry and import it into the current game scene to be edited.
  • alternatively, the user can import the initial image through an image import entry provided in another interface of the game program without opening the game editing scene, and can select the target game scene when importing.
  • the game program can then open the game editing scene and load the game scene selected by the user.
  • the initial image can be automatically selected by the game program and imported into the game editing scene.
  • the initial image is used to provide pattern information for the final generated scene component combination, so that the scene component combination can present the initial image.
  • the initial image can be an imported original image, or it can be an image after the original image is processed.
  • the original image can be processed into a specified size to obtain the initial image.
  • the present disclosure does not limit the source, format, content, etc. of the initial image.
  • the initial image can come from the photo album of the terminal device, or it can come from the Internet. Users can select an initial image of any content according to their needs and preferences.
  • the server may review the initial image to ensure that the content of the initial image meets relevant requirements.
  • step S430 the initial image is divided into a plurality of image regions, and scene components corresponding to the image regions and information of the scene components are determined.
  • in this step, each image area of the initial image is represented by a scene component, so that the entire initial image is modeled in the form of a combination of scene components.
  • the image area can be divided, and the scene component corresponding to the image area and the information of the scene component can be determined.
  • each image area presents a single color or a single color system, which makes it easy to determine the color or texture information of the scene component corresponding to each image area.
  • the image area can be divided according to the size, number, shape and other information of the image area set in advance.
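The division by a preset area size can be sketched as follows. This is a minimal, hypothetical Python sketch (the patent does not prescribe an implementation): it divides a width-by-height image into a grid of rectangular areas of a preset size, clipping areas at the right and bottom edges to the image bounds.

```python
# Hypothetical sketch of dividing an image into areas by a preset area size.
# Each area is described as (left, top, width, height) in pixel coordinates.

def divide_into_areas(width, height, area_w, area_h):
    """Return (left, top, w, h) tuples covering a width x height image."""
    return [(x, y, min(area_w, width - x), min(area_h, height - y))
            for y in range(0, height, area_h)
            for x in range(0, width, area_w)]
```

Dividing by a preset number of areas instead would amount to computing `area_w` and `area_h` from the image size and the desired counts before calling the same function.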
  • for example, if the image area is a square, the scene component corresponding to the image area is determined to be a square component, and if the image area is a rectangle, the scene component corresponding to the image area is determined to be a cuboid component.
  • the information of the scene component corresponding to each image area can be determined, including but not limited to size, position, direction, color, texture, morphology, etc., so that the scene component subsequently generated based on the information can present the appearance or morphological effect of the image area.
  • the size of the scene component corresponding to the image area can be determined based on the size of the image area, such as the mapping relationship between the size of the image area and the size of the scene component can be pre-established, and the size of the scene component corresponding to the size of the image area can be calculated from the mapping relationship.
  • the position of the scene component corresponding to the image area can be determined based on the position of the image area in the initial image.
  • the color or texture of the scene component corresponding to the image area can be determined based on the color or texture of the image area, such as extracting a primary color from the color of the image area (which can be the average color calculated from the color values of all pixels in the image area), using the primary color as the color of the scene component, or extracting texture features from the image area, and generating the texture of the scene component based on the texture features.
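The determinations above (shape, size, position, and primary color) can be sketched together. The following is an illustrative Python sketch under assumed conventions: a 1:1 mapping from pixel coordinates to component position and size, and the average-color option for the primary color. It is not the patent's implementation, and the dictionary layout is an assumption.

```python
# Hypothetical sketch: determine one scene component's information from its
# image area. Size and position are mapped 1:1 from the area; the primary
# color is the average of the area's pixel values, as one option the text
# describes.

def component_info(area_pixels, left, top):
    """area_pixels: list of rows of (r, g, b) tuples for one image area."""
    h, w = len(area_pixels), len(area_pixels[0])
    flat = [p for row in area_pixels for p in row]
    color = tuple(sum(p[i] for p in flat) // len(flat) for i in range(3))
    return {
        "type": "square" if w == h else "cuboid",  # shape mapped from area shape
        "size": (w, h),                            # size mapped from area size
        "position": (left, top),                   # position mapped from area position
        "color": color,                            # primary (average) color
    }
```

A pre-established size mapping other than 1:1, or texture-feature extraction instead of a single color, would slot into the same structure.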
  • in an exemplary embodiment, the component generation method may further include the following step: obtaining a resource quantification parameter of the scene component combination.
  • the resource quantification parameter characterizes the amount of game resources used by the scene component combination, which can be quantified by the number of game resources, or by the data volume, memory usage, etc. of the game resources.
  • the resource quantification parameter can be an exact value or a numerical range. If the resource quantification parameter is a numerical range, it means that the quantified value of the game resources used by the scene component combination should be within the numerical range.
  • the resource quantification parameter may include the number of scene components in the scene component combination, which may be a numerical value or a numerical range, indicating the number, or the range of the number, of scene components contained in the combination. The user can set the resource quantification parameter for the scene component combination.
  • referring to FIG. 5, the maximum number of scene components in the combination (i.e., the maximum occupancy value in FIG. 5) can be set to 3425, indicating that the number of scene components contained in the combination does not exceed 3425.
  • the user can set resource quantification parameters for the current game scene to be edited, indicating the amount of game resources used by the game scene, and then determine the resource quantification parameters of the scene component combination based on the resource quantification parameters of the game scene, such as subtracting the resource quantification parameters of other generated models from the resource quantification parameters of the game scene to obtain the resource quantification parameters of the scene component combination, or multiplying the resource quantification parameters of the game scene by a certain ratio (expressed as the resource ratio allocated to the scene component combination) to obtain the resource quantification parameters of the scene component combination.
  • the resource quantification parameters of the scene component combination can also be automatically determined by the game program, such as calculating the resource quantification parameters of the scene component combination based on the user's authority, the amount of cached data of the game level, the resource status of the terminal device (such as the remaining memory), or first calculating the resource quantification parameters of the game scene, and then calculating the resource quantification parameters of the scene component combination from the resource quantification parameters of the game scene.
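The two ways of deriving the combination's budget from the scene-level budget described above (subtraction of already-used resources, or a fixed allocation ratio) amount to simple arithmetic. The function names below are illustrative assumptions, not the patent's API:

```python
# Sketch of deriving the component-combination resource budget from a
# scene-level budget, per the two strategies described in the text.

def remaining_budget(scene_budget, other_models_budget):
    """Subtract the quantified resources of already-generated models."""
    return max(scene_budget - other_models_budget, 0)

def ratio_budget(scene_budget, ratio):
    """Allocate a fixed fraction of the scene budget to the combination."""
    return int(scene_budget * ratio)
```

With a scene budget of 5000 components, subtracting 1500 used by other models leaves 3500 for the combination; alternatively, a 0.6 allocation ratio yields 3000.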
  • the above-mentioned dividing the initial image into multiple image regions and determining the scene components and the information of the scene components corresponding to the image regions may include the following steps S610 and S620:
  • Step S610: sample the initial image based on the resource quantization parameter, and obtain the target image based on the sampling result.
  • Step S620: take the pixels of the target image as image areas, and determine the scene components corresponding to the image areas and the information of those scene components.
  • the sampling of the initial image can be downsampling or upsampling. If a relatively large resource quantization parameter is set, and the fineness of the corresponding scene component combination exceeds that of the initial image, the initial image can be upsampled; if a relatively small resource quantization parameter is set, and the fineness of the corresponding scene component combination is less than that of the initial image, the initial image can be downsampled.
  • the number of image areas can be determined according to the resource quantization parameter, that is, the number of pixels after sampling can be determined.
  • the resource quantization parameter of the scene component combination includes the number of scene components of the scene component combination, and the number of scene components may be an exact value.
  • the above-mentioned sampling of the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result may include the following steps:
  • the initial image is sampled using the number of scene components of the scene component combination as the number of pixels after sampling, and a target image is obtained based on the sampling result.
  • the number of pixels after sampling can be equivalent to the number of image areas.
  • the pre-set number of scene components is used as the number of pixels after sampling, so that the number of image areas is equal to the number of scene components, thereby accurately controlling the number of scene components in the scene component combination.
  • the sampling result is the sampled image. If it is up-sampled, the sampled image is finer than the initial image. If it is down-sampled, the sampled image is more blurred than the initial image.
  • the sampled image can be further pre-processed, such as optimizing or simplifying the color, to obtain the target image.
  • the sampled image can also be directly used as the target image.
  • Each pixel of the target image can be used as an image region, which is equivalent to dividing the image region of the initial image by sampling, and the division method is simple and efficient. Furthermore, the corresponding scene component and the information of the scene component can be determined according to the information of each pixel of the target image.
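The sampling step above — resizing the image so its pixel count equals the desired number of scene components — can be sketched with nearest-neighbour resampling. The patent does not fix the sampling algorithm, so this is one plausible choice; the image is modeled as a 2-D list of color values:

```python
# Minimal nearest-neighbour resampling sketch: resize `image` to tw x th
# pixels, so tw * th equals the desired number of scene components and
# each target pixel becomes one image area.

def resample(image, tw, th):
    sh, sw = len(image), len(image[0])
    return [
        [image[y * sh // th][x * sw // tw] for x in range(tw)]
        for y in range(th)
    ]
```

Resampling a 2×2 image to 4×4 upsamples it (each source pixel covers a 2×2 block), while resampling to 1×1 downsamples it to a single area.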
  • the above-mentioned taking the pixel points of the target image as an image area and determining the scene component and the information of the scene component corresponding to the image area may include the following steps:
  • the initial size of the scene component combination is within the preset size range, the initial size of the scene component is kept unchanged
  • the initial size of the scene component combination exceeds the preset size range, the initial size of the scene component is adjusted so that the initial size of the scene component combination is within the preset size range after the initial size is adjusted.
  • the scene component has an initial size, which can be a default size pre-configured by the game program. If the size of the scene component is not set or adjusted, the initial size is used by default, that is, when the scene component is generated in the game editing scene, the size of the scene component is equal to the initial size.
  • the number of pixels of the target image is the number of scene components in the scene component combination. By multiplying this number by the initial size of the scene component, the size of the scene component combination formed by the initial size of the scene component can be calculated, which is recorded as the initial size of the scene component combination. It should be noted that since the scene component and the scene component combination have not yet been generated at this time, the initial size of the scene component combination calculated here is an estimated size.
  • the preset size range is the allowable size range set for the scene component combination, which can prevent the scene component combination from being too large or too small.
  • the user can set the preset size range in advance, or the game program can automatically set the preset size range, such as determining the corresponding preset size range according to the type of the scene component combination, or determining the preset size range of the scene component combination according to the size of the game scene.
  • the preset size range can be set for the width and height of the scene component combination as: 0 to 20 meters, that is, the width and height of the scene component combination cannot exceed 20 meters. If the initial size of the scene component combination is within the preset size range, the initial size of the scene component can be kept unchanged.
  • the initial size of the scene component combination exceeds the preset size range, the initial size of the scene component is adjusted. If the initial size of the scene component combination is too large, the initial size of the scene component is adjusted down, and if the initial size of the scene component combination is too small, the initial size of the scene component is adjusted up. After adjusting the initial size, the initial size of the scene component combination can be calculated again to ensure that it is within the preset size range.
  • the size information is quickly and accurately determined for the scene component, and the initial size or the adjusted size can be used as the size of the subsequently generated scene component, without the user manually setting the size each time. Furthermore, by adjusting the mechanism of the initial size, it is possible to ensure that the size of the subsequently generated scene component combination is within a preset size range, thereby controlling the size of the scene component combination at an appropriate level.
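The size-adjustment rule above can be sketched as follows, under stated assumptions: the combination is estimated as a grid of `grid_w` × `grid_h` components, the preset range is taken as 0 to `max_extent` (20 meters in the example), and the adjustment is modeled as uniform downscaling — the patent does not prescribe the exact adjustment formula.

```python
# Sketch of the size-adjustment mechanism: estimate the combination size
# from the pixel grid and the component's initial size, then scale the
# per-component size down if the estimate exceeds the allowed range.

def fit_component_size(grid_w, grid_h, initial_size, max_extent=20.0):
    """Return the (possibly adjusted) per-component size so the estimated
    combination width/height stays within [0, max_extent]."""
    est_w, est_h = grid_w * initial_size, grid_h * initial_size
    largest = max(est_w, est_h)
    if largest <= max_extent:
        return initial_size                      # within range: keep as-is
    return initial_size * max_extent / largest   # scale down to fit
```

For a 40×20 grid of 1-meter components, the estimated 40-meter width exceeds the 20-meter limit, so the component size is adjusted down to 0.5 meters.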
  • the component generation method may further include the following steps:
  • the target color quantity indicates how many colors the scene component combination has, so that the color of the scene component combination can be controlled to avoid being too complex or too simple.
  • the user can set the target color number for the scene component combination. As shown in Figure 5 above, the user can set the target color number when importing the initial image. For example, if it is set to 12, it means that the scene component combination has no more than 12 colors.
  • the game program can automatically set the target color number, such as automatically setting a suitable target color number for the scene component combination based on the user's permissions, game scene resources, and fineness.
  • the above-mentioned sampling of the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result may include the following steps:
  • the color values of the pixels of the sampled image are color mapped based on the target color quantity to obtain the target image.
  • the number of colors in the sampled image is usually not equal to the target number of colors set for the scene component combination.
  • the color distribution of the sampled image may be complex, while the scene component combination can approximate the sampled image or the initial image, and its colors do not need to be so complex. Therefore, the target number of colors is usually less than the number of colors in the sampled image.
  • the number of colors in the sampled image can be increased or decreased to make it equal to the target number of colors.
  • Method 1: Color mapping by clustering pixel color values. Specifically, the above-mentioned color mapping of the pixel color values of the sampled image based on the number of target colors to obtain the target image may include the following steps:
  • the preset colors may be optional colors of scene components provided by the game program.
  • the game program may provide a set of optional colors for scene components, such as primary colors of 12 or 24 hues, as preset colors.
  • when determining or editing the color of a scene component, the color must be selected from the preset colors. In this way, the scene component will not have colors other than the preset colors, which is conducive to simplifying the complexity and resource amount of the game scene and further improving the efficiency of component generation.
  • the pixel color values of the sampled image can be clustered into 12 color categories, such as using the K-Means algorithm, with K being 12 for clustering. Then, the preset color closest to each color category is determined among the preset colors, such as the preset color with the smallest distance to the cluster center color of the color category. Finally, the pixel color values in each color category are mapped to the preset color closest to the color category.
  • the cluster center color of the color category can be the center color of the color category (such as the center point color in the color space), or it can be the cluster center color calculated by weighting the pixel color values in the color category (such as determining the weight according to the number of pixels of the pixel color value, and weighting the pixel color value using the weight).
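Method 1 can be sketched compactly as below. The details are assumptions where the text leaves them open: a naive K-Means initialisation with a fixed iteration count, and the cluster centre taken as the unweighted mean of the cluster's pixel colors.

```python
# Sketch of Method 1: cluster pixel colors with a few K-Means iterations,
# snap each cluster centre to the nearest preset color, then remap every
# pixel to the preset color of its cluster.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(colors, k, iters=10):
    centers = list(colors[:k])                   # naive initialisation
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in colors:
            groups[min(range(k), key=lambda i: dist2(c, centers[i]))].append(c)
        for i, g in enumerate(groups):
            if g:                                # keep centre if cluster empty
                centers[i] = tuple(sum(v) / len(g) for v in zip(*g))
    return centers

def map_to_presets(colors, presets, k):
    centers = kmeans(colors, k)
    snap = {i: min(presets, key=lambda p: dist2(p, centers[i]))
            for i in range(k)}
    return [snap[min(range(k), key=lambda i: dist2(c, centers[i]))]
            for c in colors]
```

With presets black and white and k = 2, two dark pixels and one bright pixel cluster into two groups whose centres snap to black and white respectively.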
  • Method 2: Determine, from all the preset colors, the preset colors of the target color quantity that match the color distribution of the sampled image, and map each pixel color value of the sampled image to the closest of these preset colors.
  • the color space can be divided into multiple color intervals according to all preset colors in advance, and each preset color corresponds to a color interval, such as the preset color can be the center color of the color interval.
  • the color distribution of the sampled image is statistically analyzed, and each pixel is divided into the corresponding color interval to determine the number of pixels in each color interval. If the number of target colors is 12, the preset colors in the 12 color intervals with the largest number of pixels are selected, thereby determining 12 preset colors.
  • the color value of each pixel of the sampled image is mapped to the preset color closest to it among the 12 preset colors.
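Method 2 can be sketched as below under an assumed representation: each pixel's color interval is identified by its nearest preset color, the presets of the most-populated intervals are kept, and every pixel is then mapped to the nearest surviving preset. Names are illustrative.

```python
# Sketch of Method 2: histogram-based palette selection followed by
# nearest-preset mapping.

from collections import Counter

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def palette_map(colors, presets, target_count):
    # Bucket each pixel into the interval of its nearest preset color.
    nearest = [min(presets, key=lambda p: dist2(p, c)) for c in colors]
    # Keep the presets whose intervals contain the most pixels.
    top = [p for p, _ in Counter(nearest).most_common(target_count)]
    # Map every pixel to the closest surviving preset.
    return [min(top, key=lambda p: dist2(p, c)) for c in colors]
```

With six pixels dominated by black and red and a target of 2 colors, the rarer green pixel is remapped to black, the nearest of the two surviving presets.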
  • Color mapping can realize the conversion between the color of the sampled image and the color of the scene component. Further, in one embodiment, the above-mentioned determination of the scene component corresponding to the image area and the information of the scene component may include the following steps:
  • the color value of the pixel of the target image is used as the color value of the scene component corresponding to the pixel.
  • the color value of each pixel of the target image is already a preset color that can be selected by the scene component, the color value of each pixel can be directly used as the color value of the scene component corresponding to each pixel, without further processing the color value of the scene component, and the processing process is relatively convenient and efficient. This ensures that the color of the target image, the color of the scene component and the color style of the game scene itself are coordinated.
  • the above-mentioned multiple scene component selection controls include a first scene component selection control and a second scene component selection control, and the shape of the second scene component corresponding to the second scene component selection control corresponds to the shape of the component spliced together by the first scene components corresponding to a preset number of first scene component selection controls.
  • the first scene component can be a square component or a rectangular component, the shape of the square component is a cube, and the shape of the rectangular component is a rectangular parallelepiped.
  • the second scene component can be a rectangular parallelepiped component, a stepped component, a cross component, etc., and its shape can be a rectangular parallelepiped, stepped, cross-shaped, etc., and these shapes can be spliced together by multiple cubes or rectangular parallelepipeds.
  • the scene component corresponding to the above-mentioned pixel point can be the first scene component.
  • the above-mentioned determination of the scene component corresponding to the image area and the information of the scene component can also include the following steps:
  • the multiple adjacent first scene components with the same color value are merged into a second scene component.
  • the first scene component is a square component and the second scene component is a cuboid component. If multiple square components with the same color value correspond to adjacent positions in the same row or column of the target image, these square components can be merged into a cuboid component whose color value is the same as that of the square components. This can reduce the number of scene components and help simplify the generation process of scene component combinations.
  • the plurality of scene component selection controls include a first scene component selection control, and the scene component corresponding to a pixel point is the first scene component corresponding to the first scene component selection control.
  • the multiple adjacent first scene components with the same color value are merged into one first scene component, and the size of the merged first scene component is determined according to the number of the multiple adjacent first scene components with the same color value.
  • the size of the first scene component is adjustable and can be adjusted in any direction.
  • the first scene component is a rectangular parallelepiped component, and any one or more of its length, width, and height can be adjusted.
  • these cuboid components can be merged into a larger cuboid component. The color value of the merged cuboid component is the same as that of the cuboid components before merging, and its size along the merging direction is the size of a single cuboid component before merging multiplied by the number of merged components.
  • N adjacent cuboid components with the same color value are merged, and the width of the cuboid component after merging is N times the width before merging, and the length and height can remain unchanged. This can also reduce the number of scene components, which is conducive to simplifying the generation process of scene component combination.
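The row-wise merge described above is essentially run-length encoding of component colors. The sketch below assumes each component in a row is represented only by its color and unit width; `merge_row` is an illustrative name:

```python
# Sketch of merging adjacent same-color components in one row: a run of
# N same-color unit components collapses into one component whose size
# along the merging direction is N times the unit width.

def merge_row(colors, unit_width=1.0):
    """Run-length encode one row of component colors into (color, width)."""
    merged = []
    for c in colors:
        if merged and merged[-1][0] == c:
            merged[-1] = (c, merged[-1][1] + unit_width)
        else:
            merged.append((c, unit_width))
    return merged
```

Three adjacent red components followed by one blue component merge into a 3-unit-wide red component plus a 1-unit blue component, reducing four components to two.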
  • a virtual camera is provided in the game editing scene for real-time shooting and displaying the current screen of the game editing scene.
  • the above-mentioned determination of the scene component and the information of the scene component corresponding to the image area may include the following steps:
  • the position of the scene component corresponding to the image area is determined according to the relative position of the scene component corresponding to the image area in the scene component combination and the reference point position.
  • the scene component combination has a certain volume.
  • a reference point can be selected in the scene component combination, and the position of each scene component in the scene component combination can be determined by determining the position of the reference point.
  • the reference point can be the center point of the scene component, any corner point, or the center point of any edge, etc.
  • the reference point position of the scene component combination is determined according to the position of the virtual camera, so that the reference point is located at a suitable position in the virtual camera field of view, so that the user can see the entire scene component combination. More specifically, the reference point position of the scene component combination can be determined according to the position of the virtual camera when the initial image is imported or the scene component combination is generated (such as importing the initial image and clicking "Generate" in Figure 5).
  • the reference point may be the center point of the scene component, and the optical axis direction may be determined according to the position of the virtual camera, and the reference point position may be determined to be located in the optical axis direction.
  • the distance between the reference point position and the virtual camera (the distance along the optical axis direction) may also be determined according to the size of the scene component combination, so that the entire scene component combination can be displayed in the field of view of the virtual camera.
  • the position of each scene component can be determined according to the relative position of the scene component corresponding to each image area in the scene component combination.
  • the relative position can be an offset position relative to the reference point.
  • the position of the scene component can be calculated by adding the reference point position to the relative position.
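The placement math above can be sketched in a few lines, under stated assumptions: positions are 3-D tuples, the camera's forward direction is a unit vector along its optical axis, and the relative position is an offset from the reference point. Names are illustrative.

```python
# Sketch of placing the combination in front of the camera: the reference
# point lies on the optical axis at a chosen distance, and each scene
# component's position is the reference point plus its relative offset.

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def reference_point(cam_pos, cam_forward, distance):
    """Point on the optical axis, `distance` units in front of the camera."""
    return add(cam_pos, tuple(f * distance for f in cam_forward))

def component_position(ref, relative_offset):
    return add(ref, relative_offset)
```

A camera at the origin looking along +z with a 50-unit distance puts the reference point at (0, 0, 50); a component offset (1, 2, 0) within the combination then lands at (1, 2, 50).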
  • a suitable generation position is determined for the scene component, so that when the scene component and the scene component combination are generated, the user can directly see the scene component and the scene component combination in the game editing scene.
  • Step S440: based on the information of the scene components corresponding to the image areas, the scene components corresponding to the image areas are generated in the game editing scene to form a scene component combination corresponding to the initial image.
  • the process of generating a scene component may include: generating a scene component object, which may be a collection of game resources of the scene component, information of the scene component (such as related parameters) and related codes; loading the scene component object in the game editing scene, which may be represented by rendering the scene component. After the scene components corresponding to all image areas are generated, a scene component combination is formed by all the scene components.
  • the above-mentioned step of generating the scene component corresponding to the image area in the game editing scene according to the information of the scene component corresponding to the image area to form a scene component combination corresponding to the initial image may include the following steps:
  • the scene component corresponding to the image area is generated in the game editing scene according to the information of the scene component corresponding to the image area, so as to form a scene component combination corresponding to the initial image.
  • the generation queue can refer to FIG. 7, and can include one or more generation tasks, which can be generation tasks of scene component combination (such as picture wall) or generation tasks of other models.
  • each generation task can be arranged according to the establishment time of each generation task or the time of adding to the generation queue, and executed in the order of arrangement.
  • the user can also specify or adjust the arrangement order or execution order of each generation task.
  • the user can input a priority execution instruction for a certain generation task, and the game program can advance the execution order of the generation task, such as setting it as the next task.
  • the status of each generation task can also be displayed in the generation queue, such as waiting, executing, completed, etc.
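The generation queue described above — FIFO ordering, a user-triggered priority bump, and per-task status — can be sketched as a small class. The class, method names, and status strings are assumptions for illustration:

```python
# Minimal generation-queue sketch: tasks run in insertion order unless
# the user bumps one to the front, and each task carries a visible status.

from collections import deque

class GenerationQueue:
    def __init__(self):
        self.tasks = deque()          # [name, status] pairs, FIFO order

    def add(self, name):
        self.tasks.append([name, "waiting"])

    def prioritize(self, name):
        """Move a task to the front so it executes next."""
        for t in self.tasks:
            if t[0] == name:
                self.tasks.remove(t)
                self.tasks.appendleft(t)
                return

    def run_next(self):
        task = self.tasks.popleft()
        task[1] = "completed"         # the generation work itself is elided
        return task[0]
```

Adding a "picture wall" task and a "tree" task, then prioritizing "tree", makes "tree" execute first even though it was queued second.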
  • when the generation task of the scene component combination is executed, step S430 can be executed.
  • each model in the game scene can be generated in an orderly manner, so that even if the user frequently inputs the instruction of the generation component in a short time (such as importing the initial image or clicking the generated instruction), it can also avoid the game program from loading too much content and causing jamming.
  • the relevant information of the generation task is displayed in the generation queue to realize the visualization of the background processing process of component generation, which is conducive to user perception.
  • the generation queue may show the generation time or completion time of each task, and the resource quantization parameters of the components in each task, such as the resource quantization parameter of the first picture wall is 3250.
  • the user can delete the task by operating the delete control (the control with the trash can icon in the figure), or trigger the generation process of the corresponding component by operating the “Generate to Scene” control.
  • a virtual camera is provided in the game editing scene for real-time shooting and displaying the current screen of the game editing scene.
  • the above-mentioned generation of a scene component corresponding to the image area in the game editing scene may include the following steps:
  • rendering can be performed synchronously, and the generation position of the scene component is placed within the field of view of the virtual camera, so that the user can see the generation of the scene component.
  • the above-mentioned generation of a scene component corresponding to an image area within the field of view of the virtual camera may include the following steps:
  • a scene component corresponding to the image area is generated on a plane within the field of view of the virtual camera, perpendicular to the optical axis of the virtual camera, and at a preset distance from the virtual camera.
  • the plane perpendicular to the optical axis of the virtual camera is the plane facing the field of view of the virtual camera.
  • the distance between the plane and the virtual camera (specifically, the distance from the optical center of the virtual camera) is a preset distance, which can be determined based on experience, the size of the game scene, the size of the scene component combination, etc. Exemplarily, if the width and height of the scene component combination do not exceed 20 meters, the preset distance can be 50 meters.
  • each scene component can be generated on the plane, that is, each scene component is tangent to or intersects with the plane. In this way, the scene component combination can be placed as a whole within the field of view of the virtual camera, and its size in the field of view is appropriate, and it will not fill the entire screen, nor will it appear too small.
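Whether the combination "will not fill the entire screen, nor appear too small" at the preset distance can be checked with standard perspective geometry. This formula is an assumption the text does not state explicitly: a camera with horizontal field of view `fov` sees, on a plane at distance `d`, a width of 2·d·tan(fov/2).

```python
# Sketch of an "appropriate size in view" check at the preset distance.

import math

def visible_extent(distance, fov_degrees):
    """Width of the viewing plane at `distance` for the given field of view."""
    return 2 * distance * math.tan(math.radians(fov_degrees) / 2)

def fits_in_view(combo_size, distance, fov_degrees):
    return combo_size <= visible_extent(distance, fov_degrees)
```

At the example's 50-meter preset distance with a 90-degree field of view, the visible plane is about 100 meters wide, so a 20-meter combination occupies roughly a fifth of the screen width.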
  • the above-mentioned step of generating the scene component corresponding to the image area in the game editing scene according to the information of the scene component corresponding to the image area may include the following steps:
  • the scene components corresponding to the image area are generated in the game editing scene.
  • the generation order of scene components corresponding to different image areas can be determined according to the processing power of the game program. For example, when generating scene components, the game program can run M generation threads in parallel, so that M scene components can be generated at the same time.
  • the scene components corresponding to each image area can be divided into M groups, and each group is set with the same generation order, or the scene components corresponding to each image area are divided into M sets, and the generation order is set for the scene components in each set in the order of 1, 2, 3, ...
  • each scene component can be gradually generated according to the generation order, so that the generation process of scene components is more orderly and the game program is prevented from loading too much data at one time.
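The grouping scheme above — M components per group, each group sharing one generation order — reduces to simple index arithmetic. `generation_orders` is an illustrative name:

```python
# Sketch of assigning generation order for M-way parallel generation:
# components are split into consecutive groups of size m, and every
# component in group k receives generation order k + 1.

def generation_orders(num_components, m):
    """Return the 1-based generation order for each component index."""
    return [i // m + 1 for i in range(num_components)]
```

With 7 components and M = 3 threads, the first three components are generated together in order 1, the next three in order 2, and the last one in order 3.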
  • the above-mentioned step of generating the scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area and in the generation order of the scene components corresponding to the image area may include the following steps:
  • information of scene components corresponding to the image area is dynamically displayed, and the process of generating scene components corresponding to the image area in the game editing scene is carried out in the order of generating the scene components corresponding to the image area; wherein the current field of view is a picture formed by shooting the game editing scene with a virtual camera set in the game editing scene.
  • the generation process of the scene components can be dynamically displayed in the current field of view.
  • a screen without any generated scene components is displayed, and the screen may only have the background of the game editing scene, or other generated models; then, the generated scene components are gradually displayed, and Figure 8 shows a screen in which some scene components have been generated; finally, when all scene components have been generated, a complete scene component combination is formed, as shown in Figure 9, and the scene component combination is embodied in the form of a picture wall, a pixel image, etc.
  • the user can watch the complete generation process, which avoids meaningless waiting, gives the user a stronger perception of the generation process, and provides a better user experience.
  • the process of dynamically displaying information of scene components corresponding to the image area in the current field of view, and generating scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area may include the following steps:
  • the game editing scene is captured according to the adjusted virtual camera to form an adjusted field of view;
  • the information of the scene components corresponding to the image area is dynamically displayed, and the process of generating the scene components corresponding to the image area in the game editing scene is carried out according to the generation order of the scene components corresponding to the image area.
  • the generation position of the scene component is fixed in the game editing scene.
  • the field of view can be adjusted through the field of view adjustment instruction.
  • the game program can control the virtual camera to adjust the field of view.
  • the generation process of the scene component is displayed with an adjusted field of view, so that the user can watch the generation process from different perspectives.
  • the field of view adjustment instruction can be an instruction input by the user, or it can be an instruction automatically implemented by the game program.
  • the user can adjust the field of view of the virtual camera, including moving the virtual camera position, rotating the virtual camera to change its field of view direction, adjusting the focal length or field of view of the virtual camera to change the center position of the field of view or the size of the field of view, etc. If the user wants to observe a certain scene component up close, the virtual camera can be moved to a position closer to the scene component.
  • the game program can automatically adjust the virtual camera according to the pre-set logic, such as controlling the virtual camera to rotate around the scene component to achieve a 360-degree viewing dynamic display effect of the scene component.
  • the display effect of the dynamic display generation process can be enriched, further enhancing the user experience.
  • the component generation method may further include the following steps:
  • the operation of adding a component refers to the operation of adding a new component in the game editing scene
  • the operation of editing a component refers to the operation of editing an existing component in the game editing scene.
  • These two types of operations can be locked during the generation process of the scene component, that is, components cannot be added or edited.
  • the control for adding a component or editing a component can be set to an inoperable state, such as being displayed in gray, or adding a prohibition icon to the control, so that the user cannot click on the control, etc., or the control can be hidden so that the user cannot operate it.
  • the form of the control can be left unchanged, and when the user adds a component or edits a component, the game program will not execute it, such as discarding the operation information, or displaying relevant prompts, such as "this operation cannot be performed at present".
  • the locks on the two operations may be released.
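The lock on add/edit operations during generation can be sketched as a simple flag that is checked before the operation executes, with the "this operation cannot be performed at present" prompt from the text. The class and method names are assumptions:

```python
# Minimal sketch of locking add/edit operations while generation runs:
# while the flag is set, the operation is discarded and a prompt is
# returned instead of being executed.

class Editor:
    def __init__(self):
        self.generating = False       # set True while components generate
        self.log = []                 # executed operations

    def add_component(self, name):
        if self.generating:
            return "this operation cannot be performed at present"
        self.log.append(("add", name))
        return "ok"
```

While `generating` is True the add operation is refused; after the generation completes and the flag is cleared, the same operation succeeds.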
  • the information of the scene component may include at least one of the following: size, position, direction, color, texture, and shape.
  • the above-mentioned step of generating the scene component corresponding to the image area in the game editing scene according to the scene component information corresponding to the image area may include the following steps:
  • a scene component corresponding to the image area is generated in the game editing scene.
  • the above-mentioned various information of the scene component may be a null value (for example, the game program may not set a default position for the scene component, so if the position is not determined, its value is null) or a default value (for example, the game program can set an initial size for the scene component, and the initial size is the default value of the size).
  • Editing instructions may include but are not limited to: scaling instructions for adjusting the size of scene components; moving instructions for changing the position of scene components; rotation instructions for changing the direction of scene components; color editing instructions for adjusting the color of scene components; texture editing instructions for adjusting the texture of scene components; morphology adjustment instructions for adjusting the morphology of scene components, such as adjusting the scene components to a static morphology, or a dynamic morphology with automatically changing transparency, or a dynamic morphology that periodically disappears and appears, or a rotating dynamic morphology, etc.
  • the editing instructions can be open to users; that is, users can issue one or more of the above editing instructions through manual operation. However, generating a scene component through manually issued editing instructions requires a large amount of manual work.
  • the game program can automatically call the required editing instructions and execute them according to the information of the scene component to quickly realize component generation.
  • the editing instructions issued manually by the user and the editing instructions executed automatically by the game program can come from the same instruction set, so there is no need to maintain two separate instruction sets for manual and automatic operation, which helps reduce overhead.
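The automatic invocation described above can be sketched as follows. This is an illustrative sketch only: the patent does not specify an instruction set, so all names here (`generate_component`, `instruction_set`, the attribute-to-instruction mapping) are hypothetical.

```python
# Hypothetical sketch: the game program automatically calls the editing
# instructions required by the scene component's information.
def generate_component(info, instruction_set):
    """Build a scene component by dispatching each information item to the
    corresponding editing instruction; null values are left unset."""
    component = {}
    # Map each known attribute to the editing instruction that sets it.
    attribute_to_instruction = {
        "size": "scale", "position": "move", "direction": "rotate",
        "color": "set_color", "texture": "set_texture", "shape": "set_shape",
    }
    for attribute, value in info.items():
        name = attribute_to_instruction.get(attribute)
        if name is None or value is None:   # null value: skip this attribute
            continue
        component[attribute] = instruction_set[name](value)
    return component

# The same instruction set serves both manual user edits and automatic
# generation, so no second instruction set is needed (identity stubs here).
instruction_set = {name: (lambda v: v) for name in
                   ("scale", "move", "rotate", "set_color",
                    "set_texture", "set_shape")}
```

With this shared dispatch, a manually issued instruction and an automatically issued one traverse identical code paths, which is the overhead reduction the text describes.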
  • in this exemplary embodiment, after the initial image is imported, it usually takes only a few seconds to tens of seconds (depending on the performance of the terminal device or server, resource investment, etc.) to generate the corresponding scene component combination, whereas manual editing and modeling generally takes several hours. This exemplary embodiment can therefore greatly reduce component generation time and improve component generation efficiency.
  • the component generation method may further include the following steps:
  • the position of the scene component assembly in the game editing scene is moved.
  • the user can move the scene component combination as a whole, such as single-clicking, double-clicking, or long pressing any position or specific position in the scene component combination to select the entire scene component combination, and then move it to other locations in the game editing scene by dragging and other operations.
  • the three axes of the world coordinate system can be displayed in the game editing scene, which are the x-axis, y-axis, and z-axis, respectively, and the scene component combination can be controlled to move along any one or more axes.
  • the projection position of the scene component combination can also be displayed on the three axes, such as being highlighted or displayed in other colors, or as shown in FIG10 , displaying the coordinates on one or more axes, so that the user can see the position of the scene component combination in different directions, which is convenient for the user to accurately move it to the target position.
  • the component generation method may further include the following steps:
  • a size of the scene component assembly is changed.
  • the user can scale the scene component combination as a whole, such as single-clicking, double-clicking, or long pressing any position or specific position in the scene component combination to select the entire scene component combination, and then scale it to the desired size by spreading or pinching two fingers.
  • the three axes of the reference coordinate system of the scene component combination can be displayed in the game editing scene.
  • the three axes of the reference coordinate system are recorded as x' axis, y' axis, and z' axis.
  • if the scene component combination is set to three-axis scaling, then in response to a scaling operation, the size of the scene component combination is proportionally changed on the three axes;
  • if the scene component combination is set to plane scaling, then in response to a scaling operation along the preset plane, the size of the scene component combination is changed within the preset plane, where the preset plane is a plane formed by two of the three axes;
  • if the scene component combination is set to single-axis scaling, then in response to a scaling operation along one of the three axes, the size of the scene component combination is changed on that axis.
  • three-axis scaling, plane scaling (i.e., dual-axis scaling), and single-axis scaling are three scaling methods set for scene component combinations.
  • the scaling method can be set for the scene component combination individually, or for the game scene. In this case, all models in the game scene will use this scaling method.
  • the scene component combination will be scaled proportionally on the three axes. For example, if the user reduces the size of the scene component combination by 1/2 along the x' axis, the size of the scene component combination on the y' and z' axes will also be reduced by 1/2 synchronously.
  • the three-axis proportional scaling mode can improve the efficiency of the user's scaling operation, so that the user does not need to scale on different axes separately, but can achieve the scaling target by scaling on one axis.
  • the scene component combination will be scaled within the preset plane without changing the size on the third axis (such as the z' axis).
  • Scaling within the preset plane can be proportional scaling on the two axes of the preset plane, or non-proportional scaling.
  • the user's scaling operation parameters can be mapped to the two axes of the preset plane and quantified into the scaling ratios on the two axes (the scaling ratios on the two axes can be different), thereby controlling the scaling of the scene component combination on the two axes.
  • a preset plane can be formed with any two axes, and the preset plane includes the x'-y' plane, the x'-z' plane, and the y'-z' plane, allowing the scene component combination to be scaled in any preset plane.
  • the preset plane can also be formed with two fixed axes, such as setting the preset plane to be only the x'-y' plane, so that the scene component combination is only allowed to be scaled in the x'-y' plane, but not in the x'-z' plane and the y'-z' plane.
  • the image plane of the scene component combination can be used as the preset plane, and the image plane is a side used to present the initial image.
  • for example, when the scene component combination forms a wall on which the initial image is presented, the wall surface is the image plane.
  • the scene component combination can be set to be able to scale in the image plane, but not along the third axis (the axis perpendicular to the image plane). This makes the scaling of the scene component combination more consistent with the positioning of the model itself.
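The three scaling modes described above can be sketched as a single dispatch function. This is an illustrative sketch under the assumption that the combination's size is a triple on the x', y', z' axes; the function name and parameters are hypothetical.

```python
# Hypothetical sketch of the three scaling modes: three-axis proportional
# scaling, plane (dual-axis) scaling, and single-axis scaling.
def scale_combination(size, mode, factor, axis=None, plane=None):
    """size: (x', y', z') extents; mode: 'three_axis' | 'plane' | 'single_axis'.
    For plane scaling, `plane` is a pair of axis indices (e.g. (0, 1) for the
    x'-y' plane) and `factor` may be a pair, since the scaling ratios on the
    two axes can differ (non-proportional scaling)."""
    x, y, z = size
    if mode == "three_axis":
        # Proportional scaling on all three axes at once.
        return (x * factor, y * factor, z * factor)
    if mode == "plane":
        fx, fy = factor if isinstance(factor, tuple) else (factor, factor)
        new = [x, y, z]
        new[plane[0]] *= fx
        new[plane[1]] *= fy
        return tuple(new)          # third axis is left unchanged
    if mode == "single_axis":
        new = [x, y, z]
        new[axis] *= factor        # only the chosen axis changes
        return tuple(new)
    raise ValueError(f"unknown scaling mode: {mode}")
```

For instance, in three-axis mode a 1/2 reduction along x' also halves y' and z', matching the proportional behavior described above.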
  • the component generation method may further include the following steps:
  • In response to a rotation operation on the scene component assembly, the scene component assembly is controlled to rotate.
  • the user can rotate the scene component combination as a whole, such as single-clicking, double-clicking, or long pressing any position or specific position in the scene component combination to select the entire scene component combination, and then controlling its rotation to the desired direction or angle through operations such as sliding along a specific trajectory.
  • three arcs for indicating the rotation direction may be displayed in the game editing scene, which may be recorded as a yaw angle arc, a pitch angle arc, and a roll angle arc.
  • controlling the scene component combination to rotate may include the following steps:
  • the normal line of the plane where the arc is located is used as the rotation axis, and the scene component combination is controlled to rotate around the rotation axis.
  • the rotation axis is perpendicular to the plane where the arc is located and can pass through the center point of the scene component combination. For example, if the user performs a rotation operation along the yaw angle arc, the z-axis passing through the center point of the scene component combination can be used as the rotation axis to control the scene component combination to rotate around the rotation axis.
  • the positions of the three arcs can be kept unchanged, or any one or more of the arcs can be rotated synchronously.
  • the user can be guided to perform the rotation operation in the correct direction so as to accurately rotate to the desired direction or angle.
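The rotation described above — taking the normal of the plane containing the operated arc as the rotation axis through the combination's center point — can be sketched per point. This is illustrative only; the axis naming (z for the yaw-angle arc, etc.) follows the example in the text, and the function name is hypothetical.

```python
import math

# Hypothetical sketch: rotate a point of the scene component combination
# around the normal of the arc's plane, passing through the combination's
# center point.
def rotate_point(point, axis, angle, center):
    """Rotate `point` by `angle` radians around the world axis
    ('x' | 'y' | 'z') passing through `center`."""
    # Translate so the center point becomes the origin.
    px, py, pz = (p - c for p, c in zip(point, center))
    c, s = math.cos(angle), math.sin(angle)
    if axis == "z":      # yaw-angle arc: rotate in the x-y plane
        px, py = c * px - s * py, s * px + c * py
    elif axis == "y":    # pitch-angle arc: rotate in the x-z plane
        px, pz = c * px + s * pz, -s * px + c * pz
    elif axis == "x":    # roll-angle arc: rotate in the y-z plane
        py, pz = c * py - s * pz, s * py + c * pz
    # Translate back to the original frame.
    return tuple(p + cc for p, cc in zip((px, py, pz), center))
```

Applying this to every component of the combination rotates the combination as a whole about its center.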
  • the component generation method may further include the following steps:
  • At least one of the following information of the scene component is adjusted: size, position, direction, color, texture, and shape.
  • the user can edit a single scene component in the scene component combination, for example by single-clicking, double-clicking, or long-pressing the scene component to be edited to select it, and then generating editing instructions through further manual operations to adjust the information of the scene component.
  • its size can be changed by operations such as spreading or pinching two fingers, the scene component can be dragged to move its position, and sliding along a specific rotation trajectory can control the scene component to change direction.
  • Another color, texture or form can be selected for the scene component in the interface of the game editing scene to change its color, texture or form.
  • an associated game event may be set for the scene component combination, such as triggering a specific game plot when a game character approaches the scene component combination, or hiding or removing the scene component combination when a specific game time is reached.
  • game scene information corresponding to the game editing scene can be generated, and the game scene information can be saved in a preset location, which can be a map file.
  • the map file can not only save the game scene information, but also save other map information (including but not limited to screenshots, map names, logs, etc.). After the map file saves the game scene information, it can be uploaded to the server.
  • the game scene generated by the game scene information can be published in the preset map pool, so that the terminal device connected to the server can download the corresponding game scene information from the server, and generate the corresponding game scene according to the downloaded game scene information through the game program, and then experience the game in the game scene.
  • This method can publish the game scene information in the game editor and be experienced by other players, thereby realizing a fast UGC (User Generated Content) function.
  • FIG. 13 shows a system architecture diagram of the operating environment of this exemplary embodiment.
  • the system architecture 1300 may include a terminal device 1310 and a server 1320.
  • the server 1320 may be a background system that provides game services, which may be a server or a cluster of multiple servers.
  • the terminal device 1310 and the server 1320 may be connected by a wired or wireless link to perform data transmission and interaction.
  • the component generation method in this exemplary embodiment may be performed entirely by the terminal device 1310, or may be performed partially by the terminal device 1310 and partially by the server 1320.
  • For example, the terminal device 1310 sends the initial image to the server 1320; the server 1320 may process the initial image and related user instructions (such as an instruction for generating components) using pre-configured logical rules or an artificial intelligence engine to divide the image areas, determine the scene components and scene component information corresponding to each image area, and return the image areas, scene components, and scene component information to the terminal device 1310; the terminal device 1310 then generates the scene components based on this information and forms a scene component combination.
  • the exemplary embodiment of the present disclosure also provides a component generation device in a game scene.
  • the component generation device 1400 in the game scene may include the following program modules:
  • the graphical user interface processing module 1410 is configured to execute and display a graphical user interface provided by the running game program, display a game editing scene to be edited and a plurality of scene component selection controls in the graphical user interface, and the scene component selection controls are used to respond to and generate corresponding scene components in the game editing scene according to the operation instructions;
  • the information acquisition module 1420 is configured to provide an image import entry in the graphical user interface and accept an initial image imported based on the image import entry;
  • the scene component determination module 1430 is configured to divide the initial image into a plurality of image regions, and determine the scene components corresponding to the image regions and the information of the scene components;
  • the component generation module 1440 is configured to generate the scene component corresponding to the image area in the game editing scene according to the information of the scene component corresponding to the image area, so as to form a scene component combination corresponding to the initial image.
  • the above-mentioned generating the scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area includes: determining the generation order of the scene components corresponding to different image areas, wherein the generation order of the scene components corresponding to at least part of the image areas is different from the generation order of the scene components corresponding to other image areas; according to the information of the scene components corresponding to the image area and in accordance with the generation order of the scene components corresponding to the image area, the scene components corresponding to the image area are generated in the game editing scene.
  • the above-mentioned process of generating scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area and in the generation order of the scene components corresponding to the image area includes: dynamically displaying the information of the scene components corresponding to the image area in the current field of view screen, and generating the scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area; wherein the current field of view screen is a screen formed by shooting the game editing scene with a virtual camera set in the game editing scene.
  • the above-mentioned process of dynamically displaying information of scene components corresponding to the image area in the current field of view, and generating scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area includes: responding to the field of view adjustment instruction, controlling to adjust at least one of the following information of the virtual camera: position, direction, focal length, and field of view angle; capturing the game editing scene according to the adjusted virtual camera to form an adjusted field of view picture; in the adjusted field of view picture, dynamically displaying information of scene components corresponding to the image area, and generating scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area.
  • the component generation module 1440 is further configured to: dynamically display information of scene components corresponding to the image area within the current field of view, and when generating the scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area, lock the operation of adding or editing the model in the game editing scene to prohibit adding or editing the model in the game editing scene.
  • the information of the scene component includes at least one of the following: size, position, direction, color, texture, and shape.
  • the above-mentioned generating the scene component corresponding to the image area in the game editing scene according to the scene component information corresponding to the image area includes: calling the corresponding editing instruction according to the information of the scene component, wherein the editing instruction is an instruction pre-provided by the game program; generating the scene component corresponding to the image area in the game editing scene according to the editing instruction.
  • the information acquisition module 1420 is further configured to: obtain resource quantization parameters of the scene component combination; the above-mentioned dividing the initial image into multiple image areas, determining the scene components and information of the scene components corresponding to the image areas, including: sampling the initial image based on the resource quantization parameters, and obtaining the target image based on the sampling results; taking the pixel points of the target image as an image area, determining the scene components and information of the scene components corresponding to the image area.
  • the resource quantization parameter includes the number of scene components of the scene component combination.
  • the above-mentioned sampling of the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result includes: sampling the initial image using the number of scene components of the scene component combination as the number of pixels after sampling, and obtaining the target image based on the sampling result.
  • the above-mentioned method of taking the pixel points of the target image as an image area and determining the scene components and information of the scene components corresponding to the image area includes: determining the initial size of the scene component combination according to the number of pixels of the target image and the initial size of the scene component; if the initial size of the scene component combination is within a preset size range, keeping the initial size of the scene component unchanged; if the initial size of the scene component combination exceeds the preset size range, adjusting the initial size of the scene component so that after the initial size is adjusted, the initial size of the scene component combination is within the preset size range.
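The size adjustment described above can be sketched as follows, under the simplifying assumption that the sizes are scalar extents (e.g. edge lengths along one axis); the function name and parameters are hypothetical.

```python
# Hypothetical sketch: keep the combination's initial size within the preset
# size range by adjusting the per-component initial size when needed.
def fit_component_size(pixel_count, component_size, min_total, max_total):
    """pixel_count: number of pixels of the target image (one component per
    pixel); component_size: initial size of a single scene component.
    Returns the (possibly adjusted) component initial size."""
    total = pixel_count * component_size   # initial size of the combination
    if total < max(min_total, 0) and total < min_total:
        component_size = min_total / pixel_count   # grow components
    elif total > max_total:
        component_size = max_total / pixel_count   # shrink components
    return component_size
```

If the combination's initial size already falls inside the preset range, the component's initial size is kept unchanged, matching the rule above.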
  • the information acquisition module 1420 is further configured to: obtain the target color quantity of the scene component combination.
  • the above-mentioned sampling of the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result include: sampling the initial image based on the resource quantization parameter to obtain the sampled image; color mapping the pixel color values of the sampled image based on the target color quantity to obtain the target image; the above-mentioned determination of the scene component corresponding to the image area and the information of the scene component include: using the color value of the pixel of the target image as the color value of the scene component corresponding to the pixel.
  • the above-mentioned color mapping of the pixel color values of the sampled image based on the target color number to obtain the target image includes: clustering the pixel color values of the sampled image based on the target color number to obtain multiple color categories, and the number of color categories is equal to the target color number; mapping the pixel color values in the color category to a preset color closest to the color category.
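The sampling and color-mapping pipeline above can be sketched as two small functions. This is a hedged simplification: the text does not mandate a sampling or clustering algorithm, so nearest-neighbour downsampling and direct nearest-preset-color mapping stand in for whatever the game program actually uses; all names are hypothetical.

```python
# Hypothetical sketch: downsample the initial image to the target number of
# pixels (one scene component per pixel), then map pixel colors toward a
# limited set of preset colors.
def sample_image(image, target_w, target_h):
    """Nearest-neighbour downsampling; image is a 2D list of RGB tuples."""
    h, w = len(image), len(image[0])
    return [[image[r * h // target_h][c * w // target_w]
             for c in range(target_w)] for r in range(target_h)]

def quantize_colors(image, preset_colors):
    """Map each pixel color value to the closest preset color (squared
    Euclidean distance in RGB space)."""
    def closest(px):
        return min(preset_colors,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(px, p)))
    return [[closest(px) for px in row] for row in image]
```

Each pixel of the resulting target image then supplies the color value of its corresponding scene component, as described above.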
  • the plurality of scene component selection controls include a first scene component selection control and a second scene component selection control
  • the shape of the second scene component corresponding to the second scene component selection control corresponds to the shape of a component formed by splicing the first scene components corresponding to a preset number of first scene component selection controls
  • the scene component corresponding to the pixel point is the first scene component.
  • the above-mentioned determination of the scene component corresponding to the image area and the information of the scene component also includes: after determining the color value of the first scene component corresponding to the pixel point, if there are multiple adjacent first scene components with the same color value, the multiple adjacent first scene components with the same color value are merged into the second scene component.
  • the plurality of scene component selection controls include a first scene component selection control, and the scene component corresponding to the pixel point is the first scene component corresponding to the first scene component selection control.
  • the above-mentioned determination of the scene component corresponding to the image area and the information of the scene component also includes: after determining the color value of the first scene component corresponding to the pixel point, if there are multiple adjacent first scene components with the same color value, the multiple adjacent first scene components with the same color value are merged into one first scene component, and the size of the merged first scene component is determined according to the number of the multiple adjacent first scene components with the same color value.
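The merging of adjacent same-color first scene components can be sketched as a flood fill over the target image's pixel grid. Illustrative only: the text does not prescribe the traversal, and the function name is hypothetical.

```python
# Hypothetical sketch: group adjacent pixels with the same color value so
# that each group can be merged into one (larger) scene component, whose
# size is determined by the number of merged cells.
def merge_same_color(grid):
    """grid: 2D list of color values; returns a list of (color, cells)
    groups, where cells are 4-connected positions sharing that color."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    merged = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            color, stack, cells = grid[r][c], [(r, c)], []
            seen[r][c] = True
            while stack:               # iterative flood fill
                cr, cc = stack.pop()
                cells.append((cr, cc))
                for nr, nc in ((cr - 1, cc), (cr + 1, cc),
                               (cr, cc - 1), (cr, cc + 1)):
                    if 0 <= nr < h and 0 <= nc < w and not seen[nr][nc] \
                            and grid[nr][nc] == color:
                        seen[nr][nc] = True
                        stack.append((nr, nc))
            merged.append((color, cells))
    return merged
```

The size of the merged component then follows from `len(cells)`, consistent with the rule that the merged size is determined by the number of adjacent same-color components.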
  • the first scene component is a block component
  • the shape of the block component is a cube
  • a virtual camera is provided in the game editing scene for real-time shooting and displaying the current screen of the game editing scene.
  • the above-mentioned generating a scene component corresponding to the image area in the game editing scene includes: generating a scene component corresponding to the image area within the field of view of the virtual camera.
  • the above-mentioned scene component corresponding to the image area is generated within the field of view of the virtual camera, including: generating the scene component corresponding to the image area on a plane within the field of view of the virtual camera, perpendicular to the optical axis of the virtual camera, and at a preset distance from the virtual camera.
  • the above-mentioned determination of the scene component and the information of the scene component corresponding to the image area includes: determining the reference point position of the scene component combination according to the posture of the virtual camera; determining the position of the scene component corresponding to the image area according to the relative position of the scene component corresponding to the image area in the scene component combination and the reference point position.
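The placement described above — a reference point at a preset distance from the virtual camera along its optical axis, with each component positioned relative to that point — can be sketched as follows. Both function names are hypothetical, and the camera pose is reduced to a position plus a unit forward vector for illustration.

```python
# Hypothetical sketch: derive component positions from the virtual camera's
# posture and each component's relative position within the combination.
def reference_point(cam_pos, cam_forward, distance):
    """Point at `distance` from the camera along its optical axis;
    `cam_forward` is assumed to be a unit vector."""
    return tuple(p + distance * f for p, f in zip(cam_pos, cam_forward))

def component_position(ref_point, relative_pos):
    """Absolute position of a component given its position relative to the
    combination's reference point."""
    return tuple(r + d for r, d in zip(ref_point, relative_pos))
```

A combination generated this way lands on a plane within the camera's field of view, perpendicular to the optical axis and at the preset distance, as the text describes.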
  • the above-mentioned generating scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area to form a scene component combination corresponding to the initial image includes: adding the generation task of the scene component combination to the generation queue; when the generation task of the scene component combination is executed, generating the scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area to form a scene component combination corresponding to the initial image.
  • the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, in response to a move operation on the scene component combination, move the position of the scene component combination in the game editing scene.
  • the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, change the size of the scene component combination in response to a scaling operation on the scene component combination.
  • three axes of the reference coordinate system of the scene component combination are displayed in the game editing scene.
  • the above-mentioned changing the size of the scene component combination in response to the scaling operation on the scene component combination includes: if the scene component combination is set to three-axis scaling, then in response to the scaling operation, the size of the scene component combination is proportionally changed on the three axes; if the scene component combination is set to plane scaling, then in response to the scaling operation along the preset plane, the size of the scene component combination is changed within the preset plane; the preset plane is a plane formed by two axes of the three axes; if the scene component combination is set to single-axis scaling, then in response to the scaling operation along one of the three axes, the size of the scene component combination is changed on the axis.
  • the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, in response to a rotation operation on the scene component combination, control the scene component combination to rotate.
  • controlling the scene component combination to rotate includes: in response to the rotation operation along any of the three arcs, taking the normal of the plane where any of the arcs is located as the rotation axis, controlling the scene component combination to rotate around the rotation axis.
  • the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, in response to an editing instruction for any scene component in the scene component combination, adjust at least one of the following information of the scene component: size, position, direction, color, texture, and shape.
  • the exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which can be implemented in the form of a program product, which includes a program code, and when the program product is run on an electronic device, the program code is used to cause the electronic device to perform the steps described in the above "Exemplary Method" section of this specification according to various exemplary embodiments of the present disclosure.
  • the program product can be implemented as a portable compact disk read-only memory (CD-ROM) and includes program code, and can be run on an electronic device, such as a personal computer.
  • the program product of the present disclosure is not limited to this, and in this document, the readable storage medium can be any tangible medium containing or storing a program, which can be used by or in combination with an instruction execution system, device or device.
  • the program product may use any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination of the above. More specific examples of readable storage media (a non-exhaustive list) include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • Computer readable signal media may include data signals propagated in baseband or as part of a carrier wave, which carry readable program code. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the readable signal medium may also be any readable medium other than a readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the program code embodied on the readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
  • Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, C++, etc., and conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may be executed entirely on the user computing device, partially on the user device, as a separate software package, partially on the user computing device and partially on a remote computing device, or entirely on a remote computing device or server.
  • the remote computing device may be connected to the user computing device through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
  • the exemplary embodiment of the present disclosure also provides an electronic device, such as the terminal device 1310 or the server 1320 described above.
  • the electronic device may include a processor and a memory.
  • the memory stores executable instructions of the processor, such as program codes.
  • the processor executes the method in the exemplary embodiment by executing the executable instructions.
  • the electronic device may also include a display for displaying a graphical user interface.
  • Referring to FIG. 15, an electronic device is exemplarily described in the form of a general-purpose computing device. It should be understood that the electronic device 1500 shown in FIG. 15 is only an example and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 1500 may include a processor 1510, a memory 1520, a bus 1530, an I/O (input/output) interface 1540, a network adapter 1550, and a display 1560.
  • the memory 1520 may include a volatile memory, such as a RAM 1521 and a cache unit 1522, and may also include a non-volatile memory, such as a ROM 1523.
  • the memory 1520 may also include one or more program modules 1524; such program modules 1524 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the program module 1524 may include each module in the above-mentioned device.
  • the bus 1530 connects the different components of the electronic device 1500, and may include a data bus, an address bus, and a control bus.
  • the electronic device 1500 can communicate with one or more external devices 1600 (e.g., a keyboard, a mouse, an external controller, etc.) through the I/O interface 1540.
  • the electronic device 1500 can communicate with one or more networks through the network adapter 1550.
  • the network adapter 1550 can provide mobile communication solutions such as 3G/4G/5G, or provide wireless communication solutions such as wireless LAN, Bluetooth, near field communication, etc.
  • the network adapter 1550 can communicate with other modules of the electronic device 1500 through the bus 1530.
  • the electronic device 1500 may display a graphical user interface through the display 1560, such as displaying a game editing scene.
  • other hardware and/or software modules may be provided in the electronic device 1500 , including but not limited to microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A component generation method and apparatus in a game scene, a storage medium, and an electronic device, relating to the technical field of computer graphics. The method comprises: displaying a graphical user interface provided for running a game program, and displaying, in the graphical user interface, a game editing scene to be edited and a plurality of scene component selection controls, the scene component selection controls being used for generating scene components in the game editing scene in response to and according to operation instructions; providing an image import entry in the graphical user interface, and receiving an initial image imported via the image import entry; dividing the initial image into a plurality of image areas, and determining scene components corresponding to the image areas and information about the scene components; and generating, in the game editing scene according to the information about the scene components corresponding to the image areas, the scene components corresponding to the image areas, forming a scene component combination corresponding to the initial image. A corresponding scene component combination is generated in the game scene according to an imported image, so that component generation efficiency is improved.

Description

Component generation method and apparatus in a game scene, storage medium, and electronic device

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to Chinese patent application No. 202310718159.6, filed on June 15, 2023 and entitled "Component Generation Method, Device, Storage Medium and Electronic Device in Game Scenes", the entire contents of which are incorporated herein by reference.

Technical Field

The present disclosure relates to the field of computer graphics technology, and in particular to a component generation method in a game scene, a component generation apparatus in a game scene, a computer-readable storage medium, and an electronic device.

Background Art

A game scene is usually composed of a certain number of components (or models), each of which may be an object or a character in the game scene. Modeling and editing the objects or characters in a game scene is one of the main tasks in building the scene.

At present, modeling and editing in game scenes mainly rely on manual operations by the relevant personnel. For example, the process of generating a single component generally includes manually editing its shape, adjusting its size and position, rendering its color and texture, and so on, to finally generate the required component. This approach incurs high labor and time costs and is inefficient.

Summary of the Invention

The present disclosure provides a component generation method in a game scene, a component generation apparatus in a game scene, a computer-readable storage medium, and an electronic device.

According to a first aspect of the present disclosure, a component generation method in a game scene is provided, the method comprising: displaying a graphical user interface provided by running a game program, and displaying, in the graphical user interface, a game editing scene to be edited and a plurality of scene component selection controls, the scene component selection controls being used to generate corresponding scene components in the game editing scene in response to and according to operation instructions; providing an image import entry in the graphical user interface, and accepting an initial image imported via the image import entry; dividing the initial image into a plurality of image areas, and determining scene components corresponding to the image areas and information of the scene components; and generating, in the game editing scene according to the information of the scene components corresponding to the image areas, the scene components corresponding to the image areas, to form a scene component combination corresponding to the initial image.

According to a second aspect of the present disclosure, a component generation apparatus in a game scene is provided, the apparatus comprising: a graphical user interface processing module, configured to display a graphical user interface provided by running a game program, and to display, in the graphical user interface, a game editing scene to be edited and a plurality of scene component selection controls, the scene component selection controls being used to generate corresponding scene components in the game editing scene in response to and according to operation instructions; an information acquisition module, configured to provide an image import entry in the graphical user interface and to accept an initial image imported via the image import entry; a scene component determination module, configured to divide the initial image into a plurality of image areas and to determine scene components corresponding to the image areas and information of the scene components; and a component generation module, configured to generate, in the game editing scene according to the information of the scene components corresponding to the image areas, the scene components corresponding to the image areas, to form a scene component combination corresponding to the initial image.

According to a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the component generation method in a game scene of the first aspect and its possible implementations.

According to a fourth aspect of the present disclosure, an electronic device is provided, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform, by executing the executable instructions, the component generation method in a game scene of the first aspect and its possible implementations.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a game editing scene and scene component selection controls according to one of the exemplary embodiments;

FIG. 2 is a schematic diagram of setting the perspective of a game editing scene according to one of the exemplary embodiments;

FIG. 3A is a schematic diagram of an observation perspective according to one of the exemplary embodiments;

FIG. 3B is a schematic diagram of a game perspective according to one of the exemplary embodiments;

FIG. 4 is a flowchart of a component generation method in a game scene according to one of the exemplary embodiments;

FIG. 5 is a schematic diagram of importing an initial image according to one of the exemplary embodiments;

FIG. 6 is a flowchart of determining scene components and information of the scene components according to one of the exemplary embodiments;

FIG. 7 is a schematic diagram of a generation queue according to one of the exemplary embodiments;

FIG. 8 is a schematic diagram of partially generated scene components according to one of the exemplary embodiments;

FIG. 9 is a schematic diagram of all scene components having been generated according to one of the exemplary embodiments;

FIG. 10 is a schematic diagram of moving a scene component combination according to one of the exemplary embodiments;

FIG. 11 is a schematic diagram of scaling a scene component combination according to one of the exemplary embodiments;

FIG. 12 is a schematic diagram of rotating a scene component combination according to one of the exemplary embodiments;

FIG. 13 is a system architecture diagram of an operating environment according to one of the exemplary embodiments;

FIG. 14 is a schematic structural diagram of a component generation apparatus in a game scene according to one of the exemplary embodiments;

FIG. 15 is a schematic structural diagram of an electronic device according to one of the exemplary embodiments.

DETAILED DESCRIPTION

Exemplary embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings.

The accompanying drawings are schematic illustrations of the present disclosure and are not necessarily drawn to scale. Some of the block diagrams shown in the drawings may be functional entities that do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in hardware modules or integrated circuits, or in networks, processors, or microcontrollers. The embodiments can be implemented in various forms and should not be construed as being limited to the examples set forth herein. The features, structures, or characteristics described in the present disclosure may be combined in one or more embodiments in any suitable manner. In the following description, many specific details are provided to give a full account of the embodiments of the present disclosure. However, those skilled in the art will appreciate that one or more of the specific details may be omitted when implementing the technical solution of the present disclosure, or that other methods, components, devices, steps, and the like may be substituted for one or more of them.

Setting up diverse components in a game scene can enrich the scene and improve the player's gaming experience. A picture wall is a component that can present image content. In the related art, an original picture is placed directly on the wall surface of a wall component in the game scene; when the player's in-game perspective faces the wall surface, the original picture can be seen, and the wall component is thereby realized as a picture wall. However, if the picture does not match the style of the game scene (for example, the picture is a photo taken in the real world while the game scene is in an animation style), such a picture wall will look visually abrupt to the player; that is, the picture wall does not look like part of the game scene.

To build a picture wall that matches the style of the game scene, the relevant personnel may need to manually edit, color-grade, and splice each part of the picture wall using elements in the game scene. Since picture content is usually rather complex, presenting or approximately presenting the picture content in a picture wall takes a great deal of time. Moreover, every time a picture wall is to be generated for a new picture, the above operations still need to be performed manually, which is inefficient.

In view of the above problems, exemplary embodiments of the present disclosure provide a component generation method in a game scene, which can improve the generation efficiency of components such as picture walls.

In this exemplary embodiment, a graphical user interface can be displayed by a terminal device. The terminal device can be a mobile phone, a personal computer, a tablet computer, a smart wearable device, a game console, or the like, which has a display function and can display a graphical user interface. The graphical user interface may include screens of the operating system running on the terminal device, such as a desktop, a system settings interface, and application program interfaces. When the terminal device runs a game program, a game editing scene provided by the running game program can be displayed in the graphical user interface. The game program may be a main game program that provides a game scene editing function (for example, a game editor built into the game program); when the user uses this function, the game editing scene can be entered. Alternatively, the game program may be a game scene editing program associated with the main game program, such as a game editor that can run independently of the main game program. The user can choose to create and edit a new game scene, or choose to edit an existing game scene. When the user uses the game program to edit a game scene, referring to FIG. 1, the game editing scene to be edited and a plurality of scene component selection controls can be displayed in the graphical user interface. The game editing scene may include the background of the scene and the components that have already been generated. The scene component selection controls may include controls such as "block component", "cylinder component", and "semi-cylinder component". When the user operates these controls, for example, when the user clicks the "block component" control, the corresponding block component can be generated in the game editing scene.

This exemplary embodiment supports players in customizing and editing scenes. Therefore, the user herein may refer to game production personnel of the game developer (such as artists) or to a player.

In one embodiment, a virtual camera may be provided in the game editing scene. The virtual camera is a tool in the game program that simulates a real camera to capture game scene images. It can be placed at any position in the game editing scene and capture the game scene from any perspective; that is, the virtual camera can have any pose in the game scene, and its pose can be fixed or dynamically changing. Moreover, any number of virtual cameras can be provided in the game editing scene, and different virtual cameras can capture different game scene images.

Referring to FIG. 2, the game editing scene can be presented from two different perspectives: an observation perspective and a game perspective. The observation perspective refers to observing the game editing scene from a third-person perspective. Referring to FIG. 3A, under the observation perspective, the user does not control a game character in the game editing scene, but directly controls the virtual camera to move the perspective. The game perspective refers to observing the game editing scene from a first-person perspective. Referring to FIG. 3B, under the game perspective, the user can control a game character in the game editing scene, and the game character can be bound to the virtual camera; that is, the positional relationship between the game character and the virtual camera is fixed (for example, the game character can be located at the focal point of the virtual camera), so that when the user controls the game character to move, the virtual camera moves synchronously, thereby moving the perspective. Under either the observation perspective or the game perspective, a virtual joystick, controls for moving up or down, and the like can be provided in the game editing scene, and the user can move the virtual camera or the game character by operating these controls.
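The fixed character-to-camera binding described for the game perspective can be sketched as follows. The function name and the offset value are illustrative assumptions for this sketch, not details given by the disclosure:

```python
def bound_camera_position(character_pos, offset=(0.0, 1.6, -3.0)):
    """Game perspective: the virtual camera keeps a fixed positional
    relationship to the controlled game character, so moving the
    character moves the camera synchronously. The (x, y, z) offset
    here is an arbitrary illustrative choice."""
    return tuple(c + o for c, o in zip(character_pos, offset))
```

Because the offset is constant, moving the character by some delta moves the bound camera by exactly the same delta, which is the synchronous movement described above.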

The game program provides a plurality of different scene components, which can be shown, for example, in the form of the scene component selection controls in FIG. 1. A scene component is a virtual model that makes up the game scene; it can be an object, a character, or part of an object or character. The scene components provided by the game program can include basic scene components and basic scene component combinations. A basic scene component is an indivisible scene component, which can be regarded as the smallest unit making up the game scene. For example, in a three-dimensional game scene, basic scene components can include a block component, a cuboid component, a cylinder component, a sphere component, and so on. When the user edits a basic scene component, the information of the entire basic scene component is changed; local information of the basic scene component cannot be changed alone. A basic scene component combination is a scene component composed of multiple basic scene components. For example, block components or cuboid components can be combined onto the circular face of a cylinder component to form a scene mechanism in the shape of a roller; this scene mechanism is a basic scene component combination.

In one embodiment, the game program can come with a plurality of different scene components, which may be pre-configured by artists and stored in the game program, so that players can conveniently use these scene components for scene editing.

In one embodiment, scene components can be pre-configured by a player. By modeling in the game editing scene or in another editing interface, the player can obtain scene components that are not originally in the game program. The scene components configured by the player can include basic scene components and basic scene component combinations. For example, if the player sets a scene component as an indivisible whole when configuring it, the scene component is a basic scene component; otherwise, it is a basic scene component combination. The scene components configured by a player can be used only by that player, or can be shared with other players.

When pre-configuring a scene component, one or more kinds of information of the scene component, such as its size, position, orientation, color, texture, and shape, can be configured. In this way, when the user uses these scene components in the game editing scene, the configured information can be called directly, which is very convenient and efficient. Of course, the user can also adjust the configured information of a scene component, such as adjusting one or more of the above kinds of information, to better suit the user's own needs and preferences.

FIG. 4 shows an exemplary flow of the component generation method in a game scene, which may include the following steps S410 to S440:

Step S410: displaying a graphical user interface provided by running a game program, and displaying, in the graphical user interface, a game editing scene to be edited and a plurality of scene component selection controls, the scene component selection controls being used to generate corresponding scene components in the game editing scene in response to and according to operation instructions;

Step S420: providing an image import entry in the graphical user interface, and accepting an initial image imported via the image import entry;

Step S430: dividing the initial image into a plurality of image areas, and determining scene components corresponding to the image areas and information of the scene components;

Step S440: generating, in the game editing scene according to the information of the scene components corresponding to the image areas, the scene components corresponding to the image areas, to form a scene component combination corresponding to the initial image.
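The core of the flow above (steps S430 and S440) can be sketched as a minimal pipeline. The data model and helper names (`SceneComponent`, `segment_image`, `dominant_color`) are hypothetical stand-ins for engine functionality, introduced only for illustration; a fixed grid division and a single "block" component type are simplifying assumptions:

```python
from dataclasses import dataclass
from typing import Iterator, List, Tuple

RGB = Tuple[int, int, int]

@dataclass
class SceneComponent:
    kind: str               # hypothetical component type, e.g. "block"
    cell: Tuple[int, int]   # grid cell (row, col) the component occupies
    color: RGB              # color the component is rendered with

def segment_image(image: List[List[RGB]], rows: int, cols: int) -> Iterator:
    """Step S430, first half: divide the image into rows x cols areas."""
    h, w = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            pixels = [image[y][x]
                      for y in range(r * h // rows, (r + 1) * h // rows)
                      for x in range(c * w // cols, (c + 1) * w // cols)]
            yield (r, c), pixels

def dominant_color(pixels: List[RGB]) -> RGB:
    """Step S430, second half: summarize an area as one component color."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def generate_combination(image: List[List[RGB]],
                         rows: int = 2, cols: int = 2) -> List[SceneComponent]:
    """Step S440: one scene component per image area; together the
    components form the scene component combination."""
    return [SceneComponent("block", cell, dominant_color(px))
            for cell, px in segment_image(image, rows, cols)]
```

In an actual engine, the returned list would drive model instantiation in the game editing scene rather than remain a plain data structure.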

A scene component combination is a combined virtual model formed of a plurality of scene components; the scene component combination itself can be regarded as one model. The scene component combination is the virtual model used to present the initial image in the game scene. Herein, presenting the initial image (or another image, such as a sampled image or a target image) may be presenting the image exactly or presenting it approximately; approximate presentation may refer to presenting a blurred version of the initial image's pattern. The scene component combination corresponds to the initial image; this correspondence is reflected both in the two being identical or similar in pattern, and in the scene component combination being the component finally generated in response to the instruction to import the initial image.

The scene component combination has at least one face capable of presenting the initial image. In one embodiment, the scene component combination can be a wall or wall-like model, for example in the form of a picture wall or a pixel image. The present disclosure does not specifically limit the shape of the scene component combination. Exemplarily, the face of the scene component combination used to present the initial image can be a flat plane; for example, the scene component combination can be a cuboid, whose two faces with the largest surface area can be used to present the initial image. Alternatively, the face used to present the initial image can be a non-flat surface; for example, the coordinates of the individual scene components along the normal axis of that face are not all the same, so that the face appears to undulate, which creates a visual sense of three-dimensionality.
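The undulating, non-flat face described above can be sketched by deriving each component's coordinate along the face's normal axis from the brightness of its image area. The linear brightness-to-offset mapping and the `max_offset` value are illustrative assumptions, not details given by the disclosure:

```python
def normal_offset(color, max_offset=0.5):
    """Map the brightness of an image area's color to a coordinate
    offset along the normal axis of the presenting face: brighter
    areas protrude further, darker areas recede. The linear mapping
    and the max_offset value are illustrative choices."""
    r, g, b = color
    brightness = (r + g + b) / (3 * 255)  # normalized to [0, 1]
    return max_offset * brightness
```

Applying such an offset per component yields a relief-like surface while the colors still reproduce the initial image when viewed head-on.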

Based on the method of FIG. 4, on the one hand, a solution is provided for generating a scene component combination (i.e., a virtual model) corresponding to an imported image: the corresponding scene components and their information are determined according to the image areas of the imported initial image, and the scene components are then automatically generated to form the scene component combination, without the user performing manual editing operations, which greatly reduces labor and time costs and improves component generation efficiency. On the other hand, the user can import an initial image according to the user's own needs and preferences, and a scene component combination with the corresponding pattern is finally generated; that is, the generated scene component combination presents the imported initial image. Virtual models in the form of picture walls, pixel images, and the like are thereby obtained in the game scene, which can enrich the game scene and meet users' personalized needs.

Each step in FIG. 4 is described in detail below.

Referring to FIG. 4, in step S410, a graphical user interface provided by running a game program is displayed, and a game editing scene to be edited and a plurality of scene component selection controls are displayed in the graphical user interface, the scene component selection controls being used to generate corresponding scene components in the game editing scene in response to and according to operation instructions.

The game editing scene to be edited can be a newly created game editing scene or a stored game editing scene. The scene component selection controls can be as shown in FIG. 1, but are not limited to the scene component types shown in FIG. 1. When the user operates a scene component selection control, the corresponding scene component can be generated in the game editing scene. For example, in FIG. 1, the user can press and hold the "block component" control and drag it into the game editing scene, and the corresponding block component can be generated in the game editing scene. The user can then edit and adjust the size, position, orientation, color, texture, shape, and so on of the block component.

Continuing to refer to FIG. 4, in step S420, an image import entry is provided in the graphical user interface, and an initial image imported via the image import entry is accepted.

The image import entry can be provided in a relevant game editing interface. For example, in the game editing scene, a generator option can be selected to trigger the opening of a generator interface, which can be as shown in FIG. 5 and provides an image import entry. The user can select an initial image via the image import entry and import it into the current game scene to be edited. Alternatively, without opening the game editing scene, the user can import the initial image via an image import entry provided in another interface of the game program; a game scene can be selected during import, and the game program can then open the game editing scene and load the game scene selected by the user. Alternatively, the game program can automatically select an initial image and import it into the game editing scene.

初始图像用于为最终生成的场景组件组合提供图案信息,使得场景组件组合能够呈现初始图像。初始图像可以是导入的原始图像,也可以是原始图像经过处理后的图像,如可以将原始图像处理为规定的尺寸,得到初始图像。本公开对初始图像的来源、格式、内容等不做限定。例如,初始图像可以来源于终端设备的相册,也可以来源于互联网。用户可以根据自己的需求与偏好,选择任意内容的初始图像。The initial image is used to provide pattern information for the final generated scene component combination, so that the scene component combination can present the initial image. The initial image can be an imported original image, or it can be an image after the original image is processed. For example, the original image can be processed into a specified size to obtain the initial image. The present disclosure does not limit the source, format, content, etc. of the initial image. For example, the initial image can come from the photo album of the terminal device, or it can come from the Internet. Users can select an initial image of any content according to their needs and preferences.

In one implementation, after the user imports the initial image, the server may review it to ensure that its content meets the relevant requirements.

Continuing with Figure 4, in step S430 the initial image is divided into multiple image regions, and the scene component corresponding to each image region, together with that component's information, is determined.

In the game editing scene, each image region of the initial image is represented by a scene component, so the entire initial image is modeled as a combination of scene components. Before the scene components are generated, the image regions can first be divided, and the scene component corresponding to each region and its information determined.

When dividing the image into regions, adjacent pixels with identical or similar colors can be grouped into one region, so that each region is a single color or single color family; this makes it easy to determine the color, texture, and similar information for the scene component corresponding to each region. Alternatively, the regions can be divided according to preset region size, count, shape, or other settings.

Once the image regions are obtained, the scene component corresponding to each region can be determined. For example, the region's shape can be used: if a region is square, its scene component is a block component; if it is rectangular, its scene component is a cuboid component. The information of each region's scene component can then be determined, including but not limited to size, position, orientation, color, texture, and shape, so that the component later generated from this information reproduces the region's appearance. For example, the component's size can be derived from the region's size: a mapping between region sizes and component sizes can be established in advance and used to compute the component size for a given region. The component's position can be derived from the region's position within the initial image. The component's color or texture can be derived from the region's color or texture: a primary color can be extracted from the region (for example, the average of all pixel color values in the region) and used as the component color, or texture features can be extracted from the region and used to generate the component texture.
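The derivation above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the `SCALE` mapping between image coordinates and scene units, and the dictionary layout, are assumptions for the example; the primary color is the average of the region's pixel colors, as the text suggests.

```python
# Hypothetical sketch: derive a scene component's info from one image region,
# given as an array of RGB pixel values plus its origin and size in the image.
import numpy as np

SCALE = 0.1  # assumed mapping: 1 pixel of region size -> 0.1 scene units

def component_info(region_pixels, region_origin, region_size):
    """region_pixels: (N, 3) array of RGB values inside the region.
    region_origin/region_size: the region's position and size in the image."""
    # Primary color: the average of all pixel color values in the region.
    primary_color = tuple(np.mean(region_pixels, axis=0).astype(int))
    # Component size and position follow a fixed mapping from image coordinates.
    size = tuple(d * SCALE for d in region_size)
    position = tuple(c * SCALE for c in region_origin)
    return {"color": primary_color, "size": size, "position": position}
```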

In one implementation, the component generation method may further include the following step:

obtaining a resource quantization parameter of the scene component combination.

The resource quantization parameter characterizes how many game resources the scene component combination uses; it can be quantified by the number of game resources, or by the data volume, memory footprint, and so on of those resources. It may be an exact value or a value range; a range means that the quantified amount of game resources used by the combination should fall within that range. For example, the parameter may include the number of scene components in the combination, expressed as a value or a range. The user can set the resource quantization parameter for the combination: as shown in Figure 5, the maximum number of scene components in the combination (the maximum occupancy value in Figure 5) can be set to 3425, meaning the combination contains no more than 3425 scene components. Alternatively, the user can set a resource quantization parameter for the game scene currently being edited, indicating how many game resources the whole scene uses; the combination's parameter can then be derived from the scene's parameter, for example by subtracting the parameters of other already-generated models, or by multiplying the scene's parameter by a ratio (the share of resources allocated to the combination). The game program may also determine the combination's parameter automatically, for example from the user's permissions, the game program's cached data volume, or the terminal device's resources (such as remaining memory), or by first computing the scene's parameter and deriving the combination's parameter from it.

In general, a larger resource quantization parameter allows the scene component combination to use more game resources, that is, to contain more scene components, making the combination finer; the initial image can then be divided into finer, more numerous regions. Accordingly, in one implementation, the number of image regions can be determined from the resource quantization parameter, and the initial image divided into that many regions.

In one implementation, referring to Figure 6, dividing the initial image into multiple image regions and determining the scene component and component information for each region may include the following steps S610 and S620:

Step S610: sample the initial image based on the resource quantization parameter, and obtain a target image from the sampling result;

Step S620: treat each pixel of the target image as one image region, and determine the scene component corresponding to that region and the component's information.

Sampling the initial image may mean downsampling or upsampling. If a relatively large resource quantization parameter is set, so that the corresponding scene component combination is finer than the initial image, the initial image can be upsampled; if a relatively small parameter is set, so that the combination is coarser than the initial image, the initial image can be downsampled. The number of image regions, that is, the number of pixels after sampling, can be determined from the resource quantization parameter.

In one implementation, the resource quantization parameter of the scene component combination includes the number of scene components in the combination, which may be an exact value. Sampling the initial image based on the resource quantization parameter and obtaining the target image from the sampling result may include the following step:

sample the initial image using the number of scene components in the combination as the number of pixels after sampling, and obtain the target image from the sampling result.

The number of pixels after sampling can equal the number of image regions; using the preset number of scene components as the sampled pixel count makes the number of image regions equal to that component count, so the number of scene components in the combination is controlled exactly.

The result of sampling the initial image is the sampled image. With upsampling, the sampled image is finer than the initial image; with downsampling, it is blurrier. The sampled image can be further preprocessed, for example by optimizing or simplifying its colors, to obtain the target image, or it can be used directly as the target image.

Each pixel of the target image can serve as one image region; sampling thus divides the initial image into regions in a simple and efficient way. The scene component corresponding to each pixel, and that component's information, can then be determined from the pixel's information.
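Step S610 can be sketched as a nearest-neighbor resample whose output pixel count approximates the component budget. The aspect-preserving scale formula and the function name are assumptions for illustration, not from the disclosure:

```python
# Hypothetical sketch of step S610: resample the initial image so its pixel
# count matches a scene-component budget. Nearest-neighbor keeps each output
# pixel a single flat color, which suits block-style scene components;
# scale < 1 downsamples, scale > 1 upsamples.
import math
import numpy as np

def sample_to_budget(initial: np.ndarray, component_budget: int) -> np.ndarray:
    """initial: (H, W, 3) RGB array. Returns a resampled image whose pixel
    count is close to component_budget, preserving the aspect ratio."""
    h, w = initial.shape[:2]
    # Choose a scale so that (scale*h) * (scale*w) ~= component_budget.
    scale = math.sqrt(component_budget / (h * w))
    th, tw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = np.arange(th) * h // th  # nearest-neighbor source indices
    cols = np.arange(tw) * w // tw
    return initial[rows][:, cols]
```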

In one implementation, treating each pixel of the target image as one image region and determining the corresponding scene component and its information may include the following steps:

determining the initial size of the scene component combination from the number of pixels in the target image and the initial size of a scene component;

if the combination's initial size falls within a preset size range, keeping the component's initial size unchanged;

if the combination's initial size exceeds the preset size range, adjusting the component's initial size so that, after the adjustment, the combination's initial size falls within the preset range.

A scene component has an initial size, which may be a default preconfigured by the game program; if the component's size is not set or adjusted, this initial size is used by default, that is, when the component is generated in the game editing scene its size equals the initial size. The number of pixels in the target image equals the number of scene components in the combination; multiplying this number by the component's initial size gives the size the combination would have if every component used its initial size, recorded as the combination's initial size. Note that since neither the components nor the combination have been generated at this point, the combination's initial size computed here is an estimate.

The preset size range is the allowed size range for the scene component combination, preventing the combination from being too large or too small. The user can set it in advance, or the game program can set it automatically, for example according to the type of the combination or the size of the game scene. As an example, the preset range for the combination's width and height might be 0 to 20 meters, meaning neither may exceed 20 meters. If the combination's initial size falls within the preset range, the component's initial size is kept unchanged. If it exceeds the range, the component's initial size is adjusted: reduced if the combination's initial size is too large, increased if too small. After the adjustment, the combination's initial size can be recomputed to confirm it lies within the preset range. Size information is thus determined quickly and accurately for the scene components: the initial or adjusted size can serve as the size of the components generated later, so the user does not have to set sizes manually each time. Moreover, the adjustment mechanism guarantees that the size of the combination generated later lies within the preset range, keeping the combination at an appropriate scale.
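The size check above can be sketched as a clamp. The proportional-scaling rule and the 20-meter limit follow the example in the text; the function and parameter names are illustrative assumptions:

```python
# Hypothetical sketch: estimate the combination's footprint from the target
# image's pixel grid and a per-component size, then shrink the component
# size until the footprint fits the preset range (0 to max_extent meters).
def fit_component_size(grid_w, grid_h, component_size, max_extent=20.0):
    """grid_w/grid_h: target-image pixel counts along width/height;
    component_size: edge length of one component. Returns the size,
    adjusted if the estimated combination would exceed the range."""
    width, height = grid_w * component_size, grid_h * component_size
    largest = max(width, height)
    if largest <= max_extent:          # already inside the preset range
        return component_size
    # Scale the component down so the larger extent lands on the limit.
    return component_size * (max_extent / largest)
```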

In one implementation, the component generation method may further include the following step:

obtaining a target color count for the scene component combination.

The target color count indicates how many colors the scene component combination has. It allows the combination's colors to be controlled so that they are neither too complex nor too simple.

The user can set the target color count for the combination: as shown in Figure 5, it can be set together with the initial image import, for example to 12, meaning the combination has at most 12 colors. Alternatively, the game program can set it automatically, for example choosing an appropriate target color count for the combination based on the user's permissions and the game scene's resources and level of detail.

In one implementation, sampling the initial image based on the resource quantization parameter and obtaining the target image from the sampling result may include the following steps:

sampling the initial image based on the resource quantization parameter to obtain a sampled image;

color-mapping the pixel color values of the sampled image based on the target color count to obtain the target image.

The number of colors in the sampled image is usually not equal to the target color count set for the combination. For example, the sampled image's color distribution may be fairly complex, while the combination only needs to approximate the sampled or initial image, so its colors need not be as complex; the target color count is therefore usually smaller than the sampled image's color count. Color-mapping the sampled image's pixel color values can increase or decrease its color count until it equals the target color count.

The present disclosure does not restrict the specific color-mapping method; two examples follow.

Method 1: color mapping by clustering pixel color values. Specifically, color-mapping the sampled image's pixel color values based on the target color count to obtain the target image may include the following steps:

clustering the pixel color values of the sampled image based on the target color count to obtain multiple color classes, the number of classes equaling the target color count;

mapping the pixel color values in each color class to the preset color closest to that class.

The preset colors may be the selectable colors that the game program provides for scene components. For example, the program may provide a palette of selectable colors, such as the primary colors of 12 or 24 hues, as the preset colors; when a component's color is determined or edited, it must be chosen from this palette. Scene components then never show colors outside the preset palette, which helps simplify the game scene's complexity and resource usage and further improves component generation efficiency.

For example, if the target color count is 12, the sampled image's pixel color values can be clustered into 12 classes, for instance with the K-means algorithm using K = 12. The preset color closest to each class is then determined, for example the preset color with the smallest color distance to the class's cluster-center color. Finally, the pixel color values in each class are mapped to the preset color closest to that class. The cluster-center color may be the class's central color (such as the center point in color space), or a weighted cluster-center color computed from the class's pixel color values (for example, with weights derived from the number of pixels having each color value).
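Method 1 can be sketched with a minimal K-means followed by a snap to the palette. The deterministic first-k initialization and the sample palette are simplifying assumptions; a production implementation would use a proper K-means library:

```python
# Hypothetical sketch of Method 1: cluster sampled-image pixels into k color
# classes (plain Lloyd iterations), then map every class to its nearest
# preset color, so at most k preset colors appear in the output.
import numpy as np

def map_to_presets(pixels, presets, k, iters=10):
    """pixels: (N, 3) RGB values; presets: (P, 3) palette; k: target color
    count. Returns (N, 3) pixels remapped to at most k preset colors."""
    pixels = np.asarray(pixels, dtype=float)
    presets = np.asarray(presets, dtype=float)
    centers = pixels[:k].copy()  # simple deterministic initialization
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        labels = np.argmin(
            ((pixels[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave empty clusters where they are
                centers[j] = pixels[labels == j].mean(axis=0)
    # Snap each cluster center to its closest preset color.
    snap = np.argmin(((centers[:, None] - presets[None]) ** 2).sum(-1), axis=1)
    return presets[snap][labels]
```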

Method 2: from the full set of preset colors, determine the target number of preset colors that match the sampled image's color distribution, then map each pixel color value of the sampled image to the closest of those preset colors.

For example, the color space can be divided in advance into intervals according to the full preset palette, one interval per preset color; the preset color may be the center color of its interval. The sampled image's color distribution is computed by assigning each pixel to its interval and counting the pixels per interval. If the target color count is 12, the preset colors of the 12 intervals containing the most pixels are selected, yielding 12 preset colors. Each pixel color value of the sampled image is then mapped to the closest of those 12 preset colors.
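Method 2 can be sketched as a histogram over nearest presets. Treating "nearest preset" as the color interval is a simplification; names are illustrative:

```python
# Hypothetical sketch of Method 2: assign every pixel to its nearest preset
# color (its "interval"), keep the k presets with the most pixels, then
# remap each pixel to the closest of those kept presets.
import numpy as np

def nearest(colors, palette):
    """Index of the nearest palette entry for each color, shape (N,)."""
    d = ((np.asarray(colors, float)[:, None] - palette[None]) ** 2).sum(-1)
    return np.argmin(d, axis=1)

def map_by_histogram(pixels, presets, k):
    presets = np.asarray(presets, dtype=float)
    # Count pixels falling into each preset's interval.
    counts = np.bincount(nearest(pixels, presets), minlength=len(presets))
    kept = presets[np.argsort(counts)[::-1][:k]]  # k most-used presets
    return kept[nearest(pixels, kept)]
```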

Color mapping converts the colors of the sampled image into the colors of scene components. Further, in one implementation, determining the scene component corresponding to an image region and the component's information may include the following step:

using the color value of each pixel of the target image as the color value of the scene component corresponding to that pixel.

Because each pixel of the target image already has one of the preset colors selectable for scene components, the pixel's color value can be used directly as the color value of its scene component, with no further processing of the component's color value; this is convenient and efficient. It also ensures that the colors of the target image and the scene components are consistent with the color style of the game scene itself.

In one implementation, the multiple scene component selection controls include a first scene component selection control and a second scene component selection control, and the shape of the second scene component, corresponding to the second selection control, matches the shape formed by splicing together a preset number of the first scene components corresponding to the first selection control. For example, the first scene component may be a block component or a cuboid component, the block being cube-shaped and the cuboid box-shaped. The second scene component may be a cuboid, stepped, or cross component, whose shape (cuboid, staircase, cross, and so on) can be assembled from multiple cubes or cuboids. The scene component corresponding to a pixel may be the first scene component. Determining the scene component corresponding to an image region and the component's information may further include the following step:

after determining the color value of the first scene component corresponding to each pixel, if there are multiple adjacent first scene components with the same color value, merging those adjacent same-color components into a second scene component.

For example, the first scene component is a block component and the second is a cuboid component. If multiple block components with the same color value correspond to adjacent positions in the same row or column of the target image, they can be merged into one cuboid component whose color value equals that of the blocks. This reduces the number of scene components and helps simplify the generation of the scene component combination.

In one implementation, the multiple scene component selection controls include a first scene component selection control, and the scene component corresponding to a pixel is the first scene component corresponding to that control. Determining the scene component corresponding to an image region and the component's information may further include the following step:

after determining the color value of the first scene component corresponding to each pixel, if there are multiple adjacent first scene components with the same color value, merging them into a single first scene component and determining the merged component's size from the number of components merged.

The size of the first scene component is adjustable, in any direction. For example, if it is a cuboid component, any one or more of its length, width, and height can be adjusted.

For example, if multiple cuboid components with the same color value correspond to adjacent positions in the same row or column of the target image, they can be merged into one larger cuboid whose color value equals that of the originals and whose size along the merge direction is the original size multiplied by the number of merged components. Merging N adjacent same-color cuboids along the target image's width direction (the cuboids' width direction) yields a cuboid N times as wide, with length and height unchanged. This likewise reduces the number of scene components and helps simplify generation of the scene component combination.
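The row-wise merge described above amounts to run-length encoding a row of component colors. A minimal sketch (the `unit_width` parameter and the output triples are illustrative assumptions):

```python
# Hypothetical sketch: scan one row of per-pixel component colors and
# collapse each run of equal colors into a single component whose width is
# the run length times the unit component width.
def merge_row(colors, unit_width=1.0):
    """colors: list of color values for one row of components.
    Returns (color, x_position, width) triples after merging."""
    merged, start = [], 0
    for i in range(1, len(colors) + 1):
        # Close the current run at a color change or at the end of the row.
        if i == len(colors) or colors[i] != colors[start]:
            merged.append((colors[start], start * unit_width,
                           (i - start) * unit_width))
            start = i
    return merged
```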

在一种实施方式中,游戏编辑场景中设置有虚拟摄像机,用于实时拍摄并显示游戏编辑场景的当前画面。上述确定图像区域对应的场景组件和场景组件的信息,可以包括以下步骤:In one embodiment, a virtual camera is provided in the game editing scene for real-time shooting and displaying the current screen of the game editing scene. The above-mentioned determination of the scene component and the information of the scene component corresponding to the image area may include the following steps:

根据虚拟摄像机的位姿确定场景组件组合的基准点位置;Determine the reference point position of the scene component combination according to the position and posture of the virtual camera;

根据图像区域对应的场景组件在场景组件组合中的相对位置以及基准点位置,确定图像区域对应的场景组件的位置。The position of the scene component corresponding to the image area is determined according to the relative position of the scene component corresponding to the image area in the scene component combination and the reference point position.

其中,场景组件组合具有一定的体积,在确定场景组件组合在游戏编辑场景中的位置时,可以在场景组件组合中选取基准点,通过确定基准点位置,来确定场景组件组合中的各个场景组件的位置。基准点可以是场景组件的中心点,任意一个角点,或任意一条边上的中心点等等。The scene component combination has a certain volume. When determining the position of the scene component combination in the game editing scene, a reference point can be selected in the scene component combination, and the position of each scene component in the scene component combination can be determined by determining the position of the reference point. The reference point can be the center point of the scene component, any corner point, or the center point of any edge, etc.

根据虚拟摄像机的位姿确定场景组件组合的基准点位置,使得基准点位于虚拟摄像机视野画面中的合适位置,这样用户能够看到整个场景组件组合。更具体地,可以根据导入初始图像或确定生成场景组件组合时(如在图5中导入初始图像并点击“生成”)虚拟摄像机的位姿确定场景组件组合的基准点位置。The reference point position of the scene component combination is determined according to the position of the virtual camera, so that the reference point is located at a suitable position in the virtual camera field of view, so that the user can see the entire scene component combination. More specifically, the reference point position of the scene component combination can be determined according to the position of the virtual camera when the initial image is imported or the scene component combination is generated (such as importing the initial image and clicking "Generate" in Figure 5).

示例性的,基准点可以是场景组件的中心点,可以根据虚拟摄像机的位姿,确定光轴方向,并确定基准点位置位于光轴方向上。还可以根据场景组件组合的尺寸,确定基准点位置与虚拟摄像机的距离(沿光轴方向的距离),使得整个场景组件组合能够显示在虚拟摄像机的视野画面中。Exemplarily, the reference point may be the center point of the scene component. The direction of the optical axis can be determined according to the pose of the virtual camera, and the reference point can be placed on the optical axis. The distance between the reference point and the virtual camera (the distance along the optical axis) can also be determined according to the size of the scene component combination, so that the entire scene component combination can be displayed within the virtual camera's field of view.

在基准点位置的基础上,根据每个图像区域对应的场景组件在场景组件组合中的相对位置,可以确定每个场景组件的位置。相对位置可以是相对于基准点的偏移位置,将基准点位置加上相对位置,可以计算出场景组件的位置。由此,为场景组件确定合适的生成位置,这样在生成场景组件与场景组件组合时,使得用户可以在游戏编辑场景中直接看到场景组件与场景组件组合。Based on the reference point position, the position of each scene component can be determined according to the relative position of the scene component corresponding to each image area in the scene component combination. The relative position can be an offset position relative to the reference point. The position of the scene component can be calculated by adding the reference point position to the relative position. Thus, a suitable generation position is determined for the scene component, so that when the scene component and the scene component combination are generated, the user can directly see the scene component and the scene component combination in the game editing scene.
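The position arithmetic described above (a reference point placed on the camera's optical axis, plus a per-component offset) can be illustrated with the following minimal sketch. The function name, the tuple-based vector math, and all example values are illustrative assumptions rather than part of the disclosed implementation:

```python
# Minimal sketch: place the combination's reference point on the camera's
# optical axis at a given distance, then derive each component's position
# from its offset relative to that reference point.
def compute_component_positions(camera_pos, camera_forward, distance, offsets):
    """camera_pos/camera_forward: 3-tuples (forward is a unit vector along the
    optical axis); distance: along the optical axis; offsets: each component's
    offset from the reference point within the combination."""
    # Reference point lies on the optical axis at the given distance.
    base = tuple(p + distance * f for p, f in zip(camera_pos, camera_forward))
    # Each component position = reference point position + relative offset.
    return [tuple(b + o for b, o in zip(base, off)) for off in offsets]

positions = compute_component_positions(
    camera_pos=(0.0, 1.5, 0.0),
    camera_forward=(0.0, 0.0, 1.0),   # optical axis points along +z
    distance=50.0,                    # e.g. the preset distance from the text
    offsets=[(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
)
```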

继续参考图4,在步骤S440中,根据图像区域对应的场景组件的信息,在游戏编辑场景中生成图像区域对应的场景组件,形成与初始图像对应的场景组件组合。Continuing to refer to FIG. 4 , in step S440 , based on the information of the scene components corresponding to the image area, the scene components corresponding to the image area are generated in the game editing scene to form a scene component combination corresponding to the initial image.

其中,生成场景组件的过程可以包括:生成场景组件的对象,该对象可以是场景组件的游戏资源、场景组件的信息(如相关的参数)以及相关代码的集合;在游戏编辑场景中加载场景组件的对象,可以表现为渲染出场景组件。在生成全部图像区域对应的场景组件后,由全部场景组件形成场景组件组合。The process of generating a scene component may include: generating a scene component object, which may be a collection of game resources of the scene component, information of the scene component (such as related parameters) and related codes; loading the scene component object in the game editing scene, which may be represented by rendering the scene component. After the scene components corresponding to all image areas are generated, a scene component combination is formed by all the scene components.

在一种实施方式中,上述根据图像区域对应的场景组件的信息,在游戏编辑场景中生成图像区域对应的场景组件,形成与初始图像对应的场景组件组合,可以包括以下步骤:In one embodiment, the above-mentioned step of generating the scene component corresponding to the image area in the game editing scene according to the information of the scene component corresponding to the image area to form a scene component combination corresponding to the initial image may include the following steps:

将场景组件组合的生成任务添加到生成队列中;Add the generation task of the scene component combination to the generation queue;

当执行到场景组件组合的生成任务时,根据图像区域对应的场景组件的信息,在游戏编辑场景中生成图像区域对应的场景组件,形成与初始图像对应的场景组件组合。When the task of generating a scene component combination is executed, the scene component corresponding to the image area is generated in the game editing scene according to the information of the scene component corresponding to the image area, so as to form a scene component combination corresponding to the initial image.

其中,生成队列可以参考图7所示,可以包括一个或多个生成任务,这些任务可以是场景组件组合(如图片墙)的生成任务,也可以是其他模型的生成任务。在生成队列中,可以根据各个生成任务的建立时间或添加到生成队列的时间排列各个生成任务,并按照排列的顺序执行。当然,用户也可以指定或调整各个生成任务的排列顺序或执行顺序,例如用户可以对某个生成任务输入优先执行的指令,则游戏程序可以将该生成任务的执行顺序提前,如可以将其设置为下一个任务。生成队列中也可以显示各个生成任务的状态,如等待中、执行中、已完成等。根据生成队列中的执行顺序,执行到场景组件组合的生成任务时,可以执行步骤S430。由此可以保证游戏场景中的各个模型能够有序地生成,这样即使用户在短时间内频繁输入生成组件的指令(如导入初始图像或点击生成的指令),也能够避免游戏程序加载过多内容而导致卡顿。并且,在生成队列中显示生成任务的相关信息,实现组件生成的后台处理过程的可视化,有利于用户感知。The generation queue, as shown in FIG. 7, may include one or more generation tasks, which may be generation tasks for scene component combinations (such as picture walls) or for other models. In the generation queue, the tasks can be arranged according to their creation time or the time they were added to the queue, and executed in that order. Of course, the user can also specify or adjust the arrangement or execution order of the tasks; for example, the user can issue a priority-execution instruction for a certain generation task, and the game program can then move that task forward, e.g., setting it as the next task. The generation queue can also display the status of each task, such as waiting, executing, or completed. When, according to the execution order of the queue, the generation task of the scene component combination is reached, step S430 can be executed. This ensures that the models in the game scene are generated in an orderly manner, so that even if the user frequently inputs component-generation instructions in a short time (such as importing an initial image or clicking Generate), the game program is prevented from loading too much content and stuttering. In addition, displaying the relevant information of the generation tasks in the queue visualizes the background processing of component generation, which helps user perception.

参考图7所示,生成队列中可以示出每个任务的生成时间或完成时间,每个任务中组件的资源量化参数,如第一个图片墙的资源量化参数为3250。用户可以通过操作删除控件(图中垃圾桶图标的控件)删除任务,也可以通过操作“生成至场景”的控件,触发显示相应组件的生成过程。As shown in FIG. 7, the generation queue may show the generation time or completion time of each task, and the resource quantization parameters of the components in each task; for example, the resource quantization parameter of the first picture wall is 3250. The user can delete a task by operating the delete control (the trash-can icon in the figure), or trigger the display of the generation process of the corresponding component by operating the "Generate to Scene" control.
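The queue behavior described above — arrival-order execution, promoting a task to run next, and per-task status — can be sketched as follows. The class and field names are illustrative assumptions, not the disclosed implementation:

```python
from collections import deque

# Illustrative sketch of the generation queue: tasks run in arrival order,
# a task can be promoted to run next, and each task carries a status.
class GenerationQueue:
    def __init__(self):
        self.tasks = deque()  # pending tasks, in execution order

    def add(self, task):
        task["status"] = "waiting"
        self.tasks.append(task)

    def prioritize(self, name):
        # Move the named task to the front so it executes next.
        for t in list(self.tasks):
            if t["name"] == name:
                self.tasks.remove(t)
                self.tasks.appendleft(t)
                break

    def run_next(self):
        if not self.tasks:
            return None
        task = self.tasks.popleft()
        task["status"] = "executing"
        # ... generate the scene components for this task here ...
        task["status"] = "completed"
        return task

q = GenerationQueue()
q.add({"name": "picture_wall_1"})
q.add({"name": "picture_wall_2"})
q.prioritize("picture_wall_2")   # user asks this task to run first
first = q.run_next()
```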

在一种实施方式中,游戏编辑场景中设置有虚拟摄像机,用于实时拍摄并显示游戏编辑场景的当前画面。上述在游戏编辑场景中生成图像区域对应的场景组件,可以包括以下步骤:In one embodiment, a virtual camera is provided in the game editing scene for real-time shooting and displaying the current screen of the game editing scene. The above-mentioned generation of a scene component corresponding to the image area in the game editing scene may include the following steps:

于虚拟摄像机的视野范围内生成图像区域对应的场景组件。Generate a scene component corresponding to the image area within the field of view of the virtual camera.

其中,在生成场景组件时,可以同步进行渲染。将场景组件的生成位置置于虚拟摄像机的视野范围内,这样用户能够看到场景组件的生成。When the scene component is generated, rendering can be performed synchronously, and the generation position of the scene component is placed within the field of view of the virtual camera, so that the user can see the generation of the scene component.

在一种实施方式中,上述于虚拟摄像机的视野范围内生成图像区域对应的场景组件,可以包括以下步骤:In one embodiment, the above-mentioned generation of a scene component corresponding to an image area within the field of view of the virtual camera may include the following steps:

于虚拟摄像机的视野范围内、与虚拟摄像机的光轴垂直、且与虚拟摄像机间的距离为预设距离的平面上,生成图像区域对应的场景组件。A scene component corresponding to the image area is generated on a plane within the field of view of the virtual camera, perpendicular to the optical axis of the virtual camera, and at a preset distance from the virtual camera.

其中,与虚拟摄像机的光轴垂直的平面,即正对虚拟摄像机的视野方向的平面。该平面与虚拟摄像机间的距离(具体可以是与虚拟摄像机的光心间的距离)为预设距离,预设距离可以根据经验或游戏场景的尺寸、场景组件组合的尺寸等确定。示例性的,场景组件组合的宽和高不超过20米,预设距离可以是50米。在确定与虚拟摄像机的光轴垂直、且与虚拟摄像机间的距离为预设距离的平面后,可以在该平面上生成每个场景组件,即每个场景组件均与该平面相切或相交。这样能够将场景组件组合整体放置到虚拟摄像机的视野范围内,并且其在视野画面中的大小合适,不会占满整个画面,也不会显得过小。The plane perpendicular to the optical axis of the virtual camera is the plane directly facing the viewing direction of the virtual camera. The distance between this plane and the virtual camera (specifically, the distance to the optical center of the virtual camera) is a preset distance, which can be determined based on experience, the size of the game scene, the size of the scene component combination, etc. Exemplarily, if the width and height of the scene component combination do not exceed 20 meters, the preset distance can be 50 meters. After determining the plane that is perpendicular to the optical axis of the virtual camera and at the preset distance from the virtual camera, each scene component can be generated on this plane, that is, each scene component is tangent to or intersects with the plane. In this way, the scene component combination as a whole can be placed within the field of view of the virtual camera, with an appropriate size in the view: it neither fills the entire screen nor appears too small.
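One way to reason about the preset distance is a frustum-fit check: at the chosen distance, the camera's visible width and height must exceed the combination's dimensions. The following sketch assumes a pinhole-style camera with a given vertical field-of-view angle and aspect ratio; the function name, the default parameter values, and the example figures (20 m combination, 50 m distance, taken from the text) are assumptions:

```python
import math

# Illustrative check: at the preset distance, the camera's view frustum must
# be wide and tall enough to contain the scene component combination.
def fits_in_view(combo_width, combo_height, distance, fov_deg=60.0, aspect=16 / 9):
    # Visible height at the plane = 2 * d * tan(vertical_fov / 2).
    visible_h = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    visible_w = visible_h * aspect
    return combo_width <= visible_w and combo_height <= visible_h

# Example from the text: a combination up to 20 m wide/high at 50 m fits;
# the same combination placed only 5 m away would not.
ok_far = fits_in_view(20.0, 20.0, 50.0)
ok_near = fits_in_view(20.0, 20.0, 5.0)
```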

在一种实施方式中,上述根据图像区域对应的场景组件的信息,在游戏编辑场景中生成图像区域对应的场景组件,可以包括以下步骤:In one embodiment, the above-mentioned step of generating the scene component corresponding to the image area in the game editing scene according to the information of the scene component corresponding to the image area may include the following steps:

确定不同的图像区域对应的场景组件的生成顺序,其中,至少部分图像区域对应的场景组件的生成顺序与其他图像区域对应的场景组件的生成顺序不同;Determining a generation order of scene components corresponding to different image regions, wherein the generation order of scene components corresponding to at least some of the image regions is different from the generation order of scene components corresponding to other image regions;

根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件。According to the information of the scene components corresponding to the image area and in accordance with the generation order of the scene components corresponding to the image area, the scene components corresponding to the image area are generated in the game editing scene.

其中,对于全部图像区域对应的场景组件而言,其生成顺序不是完全相同的,即全部的场景组件存在一定的生成顺序先后之分,并不是在同一时间一起生成的。可以根据游戏程序的处理能力,对不同的图像区域对应的场景组件随机确定生成顺序。例如,游戏程序在生成场景组件时,可以并行运行M个负责生成的线程,这样在同一时间可以生成M个场景组件,可以将各个图像区域对应的场景组件以M个为一组进行划分,每一组设置相同的生成顺序,或者将各个图像区域对应的场景组件划分为M个集合中,每个集合内按照1、2、3、…的次序为场景组件设置生成顺序。Among them, for the scene components corresponding to all image areas, their generation order is not exactly the same, that is, all scene components have a certain generation order, and are not generated at the same time. The generation order of scene components corresponding to different image areas can be randomly determined according to the processing power of the game program. For example, when the game program generates scene components, M threads responsible for generation can be run in parallel, so that M scene components can be generated at the same time. The scene components corresponding to each image area can be divided into M groups, and each group is set with the same generation order, or the scene components corresponding to each image area are divided into M sets, and the generation order is set for the scene components in each set in the order of 1, 2, 3, ...

在确定场景组件的生成顺序的情况下,可以按照生成顺序逐渐生成各个场景组件,使得场景组件的生成过程更加有序,避免游戏程序一次性加载过多数据。When the generation order of scene components is determined, each scene component can be gradually generated according to the generation order, so that the generation process of scene components is more orderly and the game program is prevented from loading too much data at one time.
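The grouping described above — dividing the components into sets of M, where M is the number of generation threads that can run in parallel, with each group sharing one generation-order index — can be sketched as follows (the function name and the simple dict representation are assumptions):

```python
# Sketch of assigning a generation order in groups of M (parallel capacity).
# Components in the same group share an order index and can generate together.
def assign_generation_order(components, m):
    order = {}
    for i, comp in enumerate(components):
        order[comp] = i // m  # group index doubles as the generation order
    return order

# With M = 2 threads, five components form three ordered groups.
order = assign_generation_order(["c0", "c1", "c2", "c3", "c4"], m=2)
```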

在一种实施方式中,上述根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件,可以包括以下步骤:In one embodiment, the above-mentioned step of generating the scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area and in the generation order of the scene components corresponding to the image area may include the following steps:

在当前视野画面内,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程;其中,当前视野画面是通过设置于游戏编辑场景中的虚拟摄像机拍摄游戏编辑场景所形成的画面。In the current field of view, information of scene components corresponding to the image area is dynamically displayed, and the process of generating scene components corresponding to the image area in the game editing scene is carried out in the order of generating the scene components corresponding to the image area; wherein the current field of view is a picture formed by shooting the game editing scene with a virtual camera set in the game editing scene.

参考图8和图9所示,可以将场景组件的生成过程动态地显示在当前视野画面内。示例性的,一开始,显示未生成任何场景组件的画面,该画面内可以仅有游戏编辑场景的背景,也可以有其他已生成的模型;随后,逐渐显示已生成的场景组件,图8示出了已生成部分场景组件的画面;最后,当全部场景组件都已生成时,形成完整的场景组件组合,如图9所示,场景组件组合体现为图片墙、像素图像等形式。这样用户可以观看到完整的生成过程,避免用户在生成过程中无意义地等待,并使用户对生成过程有更强的感知,用户体验更好。Referring to FIG. 8 and FIG. 9, the generation process of the scene components can be dynamically displayed in the current field-of-view image. Exemplarily, at the beginning, a picture without any generated scene components is displayed, which may contain only the background of the game editing scene or other already-generated models; then, the generated scene components are displayed gradually, and FIG. 8 shows a picture in which some scene components have been generated; finally, when all scene components have been generated, a complete scene component combination is formed, as shown in FIG. 9, embodied in the form of a picture wall, a pixel image, etc. In this way, the user can watch the complete generation process, which avoids meaningless waiting during generation, gives the user a stronger perception of the generation process, and provides a better user experience.

在一种实施方式中,上述在当前视野画面内,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程,可以包括以下步骤:In one embodiment, the process of dynamically displaying information of scene components corresponding to the image area in the current field of view, and generating scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area, may include the following steps:

响应视野调整指令,控制调整虚拟摄像机的如下至少一种信息:位置、方向、焦距、视场角;In response to the field of view adjustment instruction, control and adjust at least one of the following information of the virtual camera: position, direction, focal length, and field of view angle;

根据调整后的虚拟摄像机采集游戏编辑场景形成调整后的视野画面;The game editing scene is captured according to the adjusted virtual camera to form an adjusted field of view;

在调整后的视野画面中,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程。In the adjusted field of view, the information of the scene components corresponding to the image area is dynamically displayed, and the process of generating the scene components corresponding to the image area in the game editing scene is carried out according to the generation order of the scene components corresponding to the image area.

其中,场景组件的生成位置在游戏编辑场景中是固定的。在场景组件的生成过程中,允许通过视野调整指令调整视野画面。具体地,当产生视野调整指令时,游戏程序可以控制虚拟摄像机进行视野调整, 以调整后的视野画面显示场景组件的生成过程,使得用户可以以不同的视角观看生成过程。The generation position of the scene component is fixed in the game editing scene. During the generation process of the scene component, the field of view can be adjusted through the field of view adjustment instruction. Specifically, when the field of view adjustment instruction is generated, the game program can control the virtual camera to adjust the field of view. The generation process of the scene component is displayed with an adjusted field of view, so that the user can watch the generation process from different perspectives.

视野调整指令可以是用户输入的指令,也可以是游戏程序自动实现的指令。示例性的,在场景组件的生成过程中,用户可以对虚拟摄像机的视野进行调整,包括移动虚拟摄像机位置,转动虚拟摄像机以改变其视野方向,调整虚拟摄像机的焦距或视场角,以改变视野中心位置或视野大小,等等。如用户想要对某个场景组件近距离观察时,可以将虚拟摄像机移动到距离该场景组件更近的位置。或者,游戏程序可以按照预先设置的逻辑,自动调整虚拟摄像机,如控制虚拟摄像机围绕场景组件进行转动,以实现360度观看场景组件的动态显示效果。由此,能够丰富动态显示生成过程的显示效果,进一步提升用户体验。The field of view adjustment instruction can be an instruction input by the user, or it can be an instruction automatically implemented by the game program. Exemplarily, during the generation process of the scene component, the user can adjust the field of view of the virtual camera, including moving the virtual camera position, rotating the virtual camera to change its field of view direction, adjusting the focal length or field of view of the virtual camera to change the center position of the field of view or the size of the field of view, etc. If the user wants to observe a certain scene component up close, the virtual camera can be moved to a position closer to the scene component. Alternatively, the game program can automatically adjust the virtual camera according to the pre-set logic, such as controlling the virtual camera to rotate around the scene component to achieve a 360-degree viewing dynamic display effect of the scene component. Thus, the display effect of the dynamic display generation process can be enriched, further enhancing the user experience.
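The automatic view adjustment mentioned above — rotating the virtual camera around the scene component combination for a 360-degree dynamic view — can be sketched by sampling camera positions on a circle around the combination's reference point. The function name, the horizontal-circle assumption, and the example values are illustrative:

```python
import math

# Illustrative sketch: auto-generated view-adjustment positions that orbit
# the camera around the combination's reference point (camera keeps looking
# at the center at every step).
def orbit_positions(center, radius, steps):
    positions = []
    for i in range(steps):
        angle = 2.0 * math.pi * i / steps
        x = center[0] + radius * math.cos(angle)
        z = center[2] + radius * math.sin(angle)
        positions.append((x, center[1], z))  # orbit in the horizontal plane
    return positions

# Four camera stops on a 30 m circle around a reference point at (0, 1.5, 50).
path = orbit_positions(center=(0.0, 1.5, 50.0), radius=30.0, steps=4)
```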

在一种实施方式中,在当前视野画面内,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程时,组件生成方法还可以包括以下步骤:In one embodiment, in the current field of view, dynamically displaying the information of the scene components corresponding to the image area, and in the process of generating the scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area, the component generation method may further include the following steps:

锁定在游戏编辑场景中添加组件或编辑组件的操作,以禁止在游戏编辑场景中添加组件或编辑组件。Lock the operation of adding or editing components in the game editing scene to prohibit adding or editing components in the game editing scene.

其中,添加组件的操作是指在游戏编辑场景中添加新组件的操作,编辑组件的操作是指对游戏编辑场景中已有的组件进行编辑的操作。在场景组件的生成过程中可以锁定这两类操作,即无法添加组件或编辑组件。示例性的,可以将添加组件或编辑组件的控件设置为不可操作的状态,如显示为灰色,或在控件上增加禁止图标,用户无法对控件进行点击等操作,也可以将控件隐藏,使得用户无法操作。或者,可以不改变控件的形态,当用户进行添加组件或编辑组件的操作后,游戏程序不予以执行,如可以将操作信息丢弃,也可以显示相关的提示,如“当前无法进行该操作”等内容。Among them, the operation of adding a component refers to the operation of adding a new component in the game editing scene, and the operation of editing a component refers to the operation of editing an existing component in the game editing scene. These two types of operations can be locked during the generation process of the scene component, that is, components cannot be added or edited. Exemplarily, the control for adding a component or editing a component can be set to an inoperable state, such as being displayed in gray, or adding a prohibition icon to the control, so that the user cannot click on the control, etc., or the control can be hidden so that the user cannot operate it. Alternatively, the form of the control can be left unchanged, and when the user adds a component or edits a component, the game program will not execute it, such as discarding the operation information, or displaying relevant prompts, such as "this operation cannot be performed at present".

通过锁定上述两种操作,一方面,能够防止添加组件或编辑组件所产生的信息或信息更新与正在生成的场景组件发生冲突,例如用户在添加组件或编辑组件时,可能占用正在生成的场景组件的位置,锁定两种操作后可以防止这种情况的发生。另一方面,若同时执行生成场景组件以及添加组件或编辑组件,可能导致游戏程序同时加载的数据量过多,处理任务过于繁重,可能导致卡顿,锁定两种操作后可以使游戏程序主要执行生成场景组件的任务,能够保证生成过程的流畅性。By locking the above two operations, on the one hand, it is possible to prevent the information or information updates generated by adding or editing components from conflicting with the scene components being generated. For example, when adding or editing components, the user may occupy the position of the scene component being generated. Locking the two operations can prevent this from happening. On the other hand, if the generation of scene components and the addition or editing of components are performed at the same time, the game program may load too much data at the same time, and the processing tasks may be too heavy, which may cause jamming. Locking the two operations can make the game program mainly perform the task of generating scene components, which can ensure the smoothness of the generation process.
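The locking behavior described above can be sketched with a simple flag that guards add/edit operations while a combination is generating and is released once generation completes (matching the following embodiment). The class and method names are assumptions; the rejection message reuses the prompt text from the description:

```python
# Sketch: a lock flag that rejects add/edit operations during generation.
class SceneEditor:
    def __init__(self):
        self.generating = False
        self.components = []

    def add_component(self, comp):
        if self.generating:
            return "当前无法进行该操作"  # operation rejected while locked
        self.components.append(comp)
        return "ok"

    def generate_combination(self, comps):
        self.generating = True            # lock add/edit operations
        try:
            self.components.extend(comps)  # ... generation work here ...
        finally:
            self.generating = False       # unlock once generation completes

editor = SceneEditor()
editor.generating = True                  # simulate an in-progress generation
rejected = editor.add_component("tree")
editor.generating = False                 # generation finished, lock released
accepted = editor.add_component("tree")
```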

在一种实施方式中,在完成场景组件组合的生成后,可以解除对两种操作的锁定。In one implementation, after the generation of the scene component combination is completed, the locks on the two operations may be released.

在一种实施方式中,场景组件的信息可以包括如下至少一种:尺寸、位置、方向、颜色、纹理、形态。上述根据图像区域对应的场景组件信息,在游戏编辑场景中生成图像区域对应的场景组件,可以包括以下步骤:In one embodiment, the information of the scene component may include at least one of the following: size, position, direction, color, texture, and shape. The above-mentioned step of generating the scene component corresponding to the image area in the game editing scene according to the scene component information corresponding to the image area may include the following steps:

根据场景组件的信息调用对应的编辑指令,其中,编辑指令为游戏程序预先提供的指令;Calling corresponding editing instructions according to the information of the scene component, wherein the editing instructions are instructions provided in advance by the game program;

根据上述编辑指令,在游戏编辑场景中生成图像区域对应的场景组件。According to the above editing instructions, a scene component corresponding to the image area is generated in the game editing scene.

其中,在确定图像区域对应的场景组件后,场景组件的上述各种信息可能是空值(如游戏程序不会为场景组件设置默认的位置,若未确定位置,则位置的数值为空值)或默认值(如游戏程序可以为场景组件设置初始尺寸,该初始尺寸为尺寸的默认值),根据各个图像区域进一步确定场景组件的信息后,需要将场景组件的信息赋值给场景组件的对象,即改变原本的空值或默认值,这一过程可以调用编辑指令来实现。Among them, after determining the scene component corresponding to the image area, the above-mentioned various information of the scene component may be a null value (such as the game program will not set a default position for the scene component. If the position is not determined, the value of the position is a null value) or a default value (such as the game program can set an initial size for the scene component, and the initial size is the default value of the size). After further determining the information of the scene component according to each image area, it is necessary to assign the information of the scene component to the object of the scene component, that is, to change the original null value or default value. This process can be implemented by calling the editing instruction.

编辑指令可以包括但不限于:缩放指令,用于调整场景组件的尺寸;移动指令,用于改变场景组件的位置;旋转指令,用于改变场景组件的方向;颜色编辑指令,用于调整场景组件的颜色;纹理编辑指令,用于调整场景组件的纹理;形态调整指令,用于调整场景组件的形态,如将场景组件调整为静态形态,或透明度可自动变化的动态形态,或周期性消失与出现的动态形态,或旋转的动态形态等。Editing instructions may include but are not limited to: scaling instructions for adjusting the size of scene components; moving instructions for changing the position of scene components; rotation instructions for changing the direction of scene components; color editing instructions for adjusting the color of scene components; texture editing instructions for adjusting the texture of scene components; morphology adjustment instructions for adjusting the morphology of scene components, such as adjusting the scene components to a static morphology, or a dynamic morphology with automatically changing transparency, or a dynamic morphology that periodically disappears and appears, or a rotating dynamic morphology, etc.

编辑指令可以开放给用户使用,即用户可以通过手动操作实现上述一种或多种编辑指令。若用户通过手动操作的编辑指令,生成场景组件,需要进行大量的手动操作。本示例性实施方式中,游戏程序可以根据场景组件的信息自动调用所需的编辑指令并加以执行,以快速实现组件生成。并且,用户手动操作的编辑指令与游戏程序自动执行的编辑指令,可以来自于同一指令集,这样无需为用户的手动操作与游戏程序的自动操作设置两套指令集,有利于降低开销。The editing instructions can be made available to users, that is, the user can implement one or more of the above editing instructions through manual operations. If the user were to generate scene components via manually operated editing instructions, a large number of manual operations would be required. In this exemplary embodiment, the game program can automatically invoke and execute the required editing instructions according to the information of the scene components, so as to quickly achieve component generation. Moreover, the editing instructions operated manually by the user and those executed automatically by the game program can come from the same instruction set, so there is no need to maintain two instruction sets for manual and automatic operation, which helps reduce overhead.
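The automatic invocation of editing instructions from the component information can be sketched as a dispatch table that overwrites the component's null or default values, as described above. The instruction names and the dictionary-based component representation are illustrative assumptions:

```python
# Sketch: map each piece of component information to an editing instruction
# and apply it, replacing the original null or default values.
def apply_edit_instructions(component, info):
    instructions = {
        "size": lambda c, v: c.update(size=v),             # scaling instruction
        "position": lambda c, v: c.update(position=v),     # moving instruction
        "direction": lambda c, v: c.update(direction=v),   # rotation instruction
        "color": lambda c, v: c.update(color=v),           # color editing
    }
    for key, value in info.items():
        if value is not None and key in instructions:
            instructions[key](component, value)
    return component

# A component whose position and color start as null values.
comp = {"size": 1.0, "position": None, "color": None}
comp = apply_edit_instructions(
    comp, {"size": 2.0, "position": (0, 0, 50), "color": "#ff0000"}
)
```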

本示例性实施方式中,在导入初始图像后,通常仅需要数秒钟至数十秒钟(具体由终端设备或服务器的性能、资源投入情况等决定)即可生成对应的场景组件组合,而人工手动编辑建模则一般需要数个小时。由此可见,本示例性实施方式能够大大降低组件生成时间,提高组件生成效率。In this exemplary embodiment, after the initial image is imported, it usually takes only a few seconds to tens of seconds (determined by the performance of the terminal device or server, resource investment, etc.) to generate the corresponding scene component combination, while manual editing and modeling generally takes several hours. It can be seen that this exemplary embodiment can greatly reduce the component generation time and improve the component generation efficiency.

在形成与初始图像对应的场景组件组合之后,可以允许对场景组件组合进行整体性操作,如移动、缩放、旋转操作等。下面分别对三种操作进行说明:After forming a scene component combination corresponding to the initial image, you can allow the scene component combination to be operated as a whole, such as moving, scaling, rotating, etc. The following describes the three operations respectively:

移动操作Move Operation

在一种实施方式中,在形成与初始图像对应的场景组件组合之后,组件生成方法还可以包括以下步骤:In one embodiment, after forming a scene component combination corresponding to the initial image, the component generation method may further include the following steps:

响应于对场景组件组合的移动操作,移动场景组件组合在游戏编辑场景中的位置。 In response to a move operation on the scene component assembly, the position of the scene component assembly in the game editing scene is moved.

其中,用户可以对场景组件组合进行整体移动,如可以单击、双击或长按场景组件组合中的任意位置或特定位置以选中整个场景组件组合,然后通过拖动等操作将其移动到游戏编辑场景中的其他位置。Among them, the user can move the scene component combination as a whole, such as single-clicking, double-clicking, or long pressing any position or specific position in the scene component combination to select the entire scene component combination, and then move it to other locations in the game editing scene by dragging and other operations.

参考图10所示,游戏编辑场景中可以显示世界坐标系的三个轴,分别为x轴、y轴、z轴,可以控制场景组件组合沿任意一个或多个轴进行移动。在移动时,还可以在三个轴上显示场景组件组合的投影位置,如可以高亮显示或显示为其他颜色,或者如图10所示,显示在一个或多个轴上坐标,使用户看到场景组件组合在不同方向上的位置,便于用户将其准确移动到目标位置。As shown in FIG10 , the three axes of the world coordinate system can be displayed in the game editing scene, which are the x-axis, y-axis, and z-axis, respectively, and the scene component combination can be controlled to move along any one or more axes. When moving, the projection position of the scene component combination can also be displayed on the three axes, such as being highlighted or displayed in other colors, or as shown in FIG10 , displaying the coordinates on one or more axes, so that the user can see the position of the scene component combination in different directions, which is convenient for the user to accurately move it to the target position.

缩放操作Scaling Operation

在一种实施方式中,在形成与初始图像对应的场景组件组合之后,组件生成方法还可以包括以下步骤:In one embodiment, after forming a scene component combination corresponding to the initial image, the component generation method may further include the following steps:

响应于对场景组件组合的缩放操作,改变场景组件组合的尺寸。In response to a scaling operation on the scene component assembly, a size of the scene component assembly is changed.

其中,用户可以对场景组件组合进行整体缩放,如可以单击、双击或长按场景组件组合中的任意位置或特定位置以选中整个场景组件组合,然后通过双指分开或合拢等操作将其缩放到想要的尺寸。Among them, the user can scale the scene component combination as a whole, such as single-clicking, double-clicking, or long pressing any position or specific position in the scene component combination to select the entire scene component combination, and then scale it to the desired size by spreading or pinching two fingers.

在一种实施方式中,参考图11所示,游戏编辑场景中可以显示场景组件组合的参考坐标系的三个轴,为区分上述世界坐标系的三个轴,将参考坐标系的三个轴记为x'轴、y'轴、z'轴。上述响应于对场景组件组合的缩放操作,改变场景组件组合的尺寸,可以包括以下步骤:In one embodiment, as shown in FIG11 , the three axes of the reference coordinate system of the scene component combination can be displayed in the game editing scene. To distinguish the three axes of the world coordinate system, the three axes of the reference coordinate system are recorded as x' axis, y' axis, and z' axis. The above-mentioned response to the scaling operation of the scene component combination to change the size of the scene component combination may include the following steps:

若将场景组件组合设置为三轴缩放,则响应于缩放操作,在三个轴上等比例地改变场景组件组合的尺寸;If the scene component combination is set to three-axis scaling, then in response to the scaling operation, the size of the scene component combination is proportionally changed on the three axes;

若将场景组件组合设置为平面缩放,则响应于沿预设平面的缩放操作,在预设平面内改变场景组件组合的尺寸;预设平面是由三个轴中的两个轴形成的平面;If the scene component combination is set to plane scaling, the size of the scene component combination is changed within the preset plane in response to a scaling operation along the preset plane; the preset plane is a plane formed by two axes out of the three axes;

若将场景组件组合设置为单轴缩放,则响应于沿三个轴中的一个轴的缩放操作,在该轴上改变场景组件组合的尺寸。If the scene component assembly is set to single-axis scaling, then in response to a scaling operation along one of the three axes, the size of the scene component assembly is changed on that axis.

其中,三轴缩放、平面缩放(即双轴缩放)、单轴缩放是针对场景组件组合设置的三种缩放方式,可以对场景组件组合单独设置缩放方式,也可以对游戏场景设置缩放方式,则游戏场景中的所有模型都采用该缩放方式。Three-axis scaling, plane scaling (i.e., dual-axis scaling), and single-axis scaling are three scaling modes for scene component combinations. A scaling mode can be set for an individual scene component combination, or for the game scene as a whole, in which case all models in the game scene use that scaling mode.

在三轴缩放的方式下,用户沿任一轴进行缩放操作,都会使场景组件组合在三个轴上等比例地缩放,如用户沿x'轴将场景组件组合的尺寸缩小1/2,则会同步地将场景组件组合在y'轴与z'轴上的尺寸也缩小1/2。三轴等比例缩放的方式,能够提高用户进行缩放操作的效率,使得用户无需在不同轴上分别进行缩放操作,而在一个轴上进行缩放操作即可达到缩放目标。In the three-axis scaling mode, if the user scales along any axis, the scene component combination will be scaled proportionally on the three axes. For example, if the user reduces the size of the scene component combination by 1/2 along the x' axis, the size of the scene component combination on the y' and z' axes will also be reduced by 1/2 synchronously. The three-axis proportional scaling mode can improve the efficiency of the user's scaling operation, so that the user does not need to scale on different axes separately, but can achieve the scaling target by scaling on one axis.

在平面缩放的方式下,用户沿两个轴形成的预设平面(如x'-y'平面)进行缩放操作,会使场景组件组合在预设平面内缩放,而不改变在第三个轴(如z'轴)上的尺寸。在预设平面内的缩放,可以是在预设平面的两个轴上进行等比例地缩放,也可以是非等比例地缩放,如可以将用户进行缩放操作的操作参数映射到预设平面的两个轴上,并量化为两个轴上的缩放比例(两个轴上的缩放比例可以不同),进而控制场景组件组合在两个轴上进行缩放。In the plane scaling mode, when the user performs a scaling operation along a preset plane formed by two axes (such as the x'-y' plane), the scene component combination will be scaled within the preset plane without changing the size on the third axis (such as the z' axis). Scaling within the preset plane can be proportional scaling on the two axes of the preset plane, or non-proportional scaling. For example, the user's scaling operation parameters can be mapped to the two axes of the preset plane and quantified into the scaling ratios on the two axes (the scaling ratios on the two axes can be different), thereby controlling the scaling of the scene component combination on the two axes.

应当理解,可以以任意两个轴形成预设平面,则预设平面包括x'-y'平面、x'-z'平面、y'-z'平面,允许场景组件组合在任意预设平面内进行缩放。也可以以固定两个轴形成预设平面,如设置预设平面仅为x'-y'平面,这样仅允许场景组件组合在x'-y'平面内进行缩放,而不能在x'-z'平面、y'-z'平面内进行缩放。在一种实施方式中,可以将场景组件组合的图像平面作为预设平面,图像平面是用于呈现初始图像的一面,如场景组件组合为图片墙时,墙面即为图像平面。可以设置场景组件组合能够在图像平面内缩放,而不能沿第三轴(垂直于图像平面的轴)缩放。这使得场景组件组合的缩放更加符合模型本身的定位。It should be understood that a preset plane can be formed with any two axes, and the preset plane includes the x'-y' plane, the x'-z' plane, and the y'-z' plane, allowing the scene component combination to be scaled in any preset plane. The preset plane can also be formed with two fixed axes, such as setting the preset plane to be only the x'-y' plane, so that the scene component combination is only allowed to be scaled in the x'-y' plane, but not in the x'-z' plane and the y'-z' plane. In one embodiment, the image plane of the scene component combination can be used as the preset plane, and the image plane is a side used to present the initial image. For example, when the scene component combination is a picture wall, the wall surface is the image plane. The scene component combination can be set to be able to scale in the image plane, but not along the third axis (the axis perpendicular to the image plane). This makes the scaling of the scene component combination more consistent with the positioning of the model itself.

在单轴缩放的方式下,用户沿一个轴进行缩放操作,仅会使场景组件组合在该轴上进行缩放,而不会在另外两个轴上进行缩放。这样的缩放方式更加灵活,能够改变场景组件组合在三个轴方向上的尺寸比例,以实现更加多样的视觉效果。In the single-axis scaling mode, when the user scales along one axis, the scene component combination will only scale along that axis, but not along the other two axes. This scaling method is more flexible and can change the size ratio of the scene component combination in the three axis directions to achieve more diverse visual effects.

需要说明的是，可以设置允许场景组件组合在任意轴上进行单轴缩放。也可以设置仅允许场景组件组合在特定的一个轴或两个轴上进行缩放。如设置场景组件组合仅能在x'轴、y'轴上缩放，则无法沿z'轴完成缩放操作，这样可以针对特定类型的场景组件组合进行尺寸限制，以达到特定的游戏场景编辑目的。It should be noted that the scene component combination may be allowed to perform single-axis scaling along any axis, or may be restricted to scaling along only one or two specific axes. For example, if the scene component combination is set to scale only along the x' axis and y' axis, the scaling operation cannot be performed along the z' axis. In this way, size restrictions can be imposed on specific types of scene component combinations to achieve specific game scene editing purposes.
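
A hedged sketch of the axis-restriction idea above: single-axis scaling is applied only when the requested axis is in the component group's allowed set, so a z'-locked group rejects scaling along z'. The allowed-axes set and all names are illustrative assumptions.

```python
# Hypothetical sketch of single-axis scaling with per-group axis locks.

ALLOWED_AXES = {"x", "y"}   # e.g. this component group may not scale along z'

def single_axis_scale(size, axis, ratio, allowed=ALLOWED_AXES):
    """Scale one axis only; raise if that axis is locked for this group."""
    if axis not in allowed:
        raise ValueError(f"scaling along {axis!r} is disabled for this component group")
    idx = "xyz".index(axis)            # map axis name to the size-tuple index
    new_size = list(size)
    new_size[idx] *= ratio             # the other two axes are untouched
    return tuple(new_size)

print(single_axis_scale((4.0, 2.0, 1.0), "y", 2.0))  # → (4.0, 4.0, 1.0)
# single_axis_scale((4.0, 2.0, 1.0), "z", 2.0) would raise ValueError
```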

旋转操作Rotation Operation

在一种实施方式中,在形成与初始图像对应的场景组件组合之后,组件生成方法还可以包括以下步骤:In one embodiment, after forming a scene component combination corresponding to the initial image, the component generation method may further include the following steps:

响应于对场景组件组合的旋转操作,控制场景组件组合进行旋转。In response to a rotation operation on the scene component assembly, the scene component assembly is controlled to rotate.

其中，用户可以对场景组件组合进行整体旋转，如可以单击、双击或长按场景组件组合中的任意位置或特定位置以选中整个场景组件组合，然后通过特定轨迹的滑动等操作控制其旋转到想要的方向或角度。The user can rotate the scene component combination as a whole. For example, the user can single-click, double-click, or long-press any position or a specific position in the scene component combination to select the entire combination, and then control its rotation to the desired direction or angle through operations such as sliding along a specific trajectory.

在一种实施方式中,参考图12所示,游戏编辑场景中可以显示用于表示旋转方向的三个弧形,可以记为偏航角弧形、俯仰角弧形、滚转角弧形。上述响应于对场景组件组合的旋转操作,控制场景组件组合进行旋转,可以包括以下步骤:In one embodiment, as shown in FIG12 , three arcs for indicating the rotation direction may be displayed in the game editing scene, which may be recorded as a yaw angle arc, a pitch angle arc, and a roll angle arc. In response to the rotation operation of the scene component combination, controlling the scene component combination to rotate may include the following steps:

响应于沿三个弧形中的任一弧形的旋转操作,以该弧形所在平面的法线为旋转轴,控制场景组件组合绕旋转轴旋转。In response to a rotation operation along any of the three arcs, the normal line of the plane where the arc is located is used as the rotation axis, and the scene component combination is controlled to rotate around the rotation axis.

其中,旋转轴垂直于弧形所在平面,并可以穿过场景组件组合的中心点。示例性的,用户沿偏航角弧形进行旋转操作,则可以以穿过场景组件组合的中心点的z轴为旋转轴,控制场景组件组合绕旋转轴旋转。在旋转过程中,可以保持三个弧形的位置不变,也可以同步旋转其中任意一个或多个弧形。通过显示弧形,可以引导用户沿正确地方向进行旋转操作,以便于准确旋转到想要的方向或角度。The rotation axis is perpendicular to the plane where the arc is located and can pass through the center point of the scene component combination. For example, if the user performs a rotation operation along the yaw angle arc, the z-axis passing through the center point of the scene component combination can be used as the rotation axis to control the scene component combination to rotate around the rotation axis. During the rotation process, the positions of the three arcs can be kept unchanged, or any one or more of the arcs can be rotated synchronously. By displaying the arc, the user can be guided to perform the rotation operation in the correct direction so as to accurately rotate to the desired direction or angle.
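
The rotation described above can be sketched as a rotation of each component position around the axis normal to the selected arc and passing through the group's center point. The example below takes the yaw arc (normal along the z-axis); the function name and data layout are assumptions.

```python
import math

# Minimal sketch: rotate component positions around the vertical axis
# through the group's center point (the yaw-arc case).

def rotate_yaw(points, center, angle_rad):
    """Rotate each (x, y, z) position around the z-axis through `center`."""
    cx, cy, _ = center
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    out = []
    for x, y, z in points:
        dx, dy = x - cx, y - cy            # offset from the rotation axis
        out.append((cx + dx * c - dy * s,  # standard 2D rotation in the x-y plane
                    cy + dx * s + dy * c,
                    z))                    # height is unchanged for a yaw rotation
    return out

print(rotate_yaw([(2.0, 0.0, 1.0)], (0.0, 0.0, 0.0), math.pi / 2))
# approximately [(0.0, 2.0, 1.0)]
```

The pitch and roll arcs would use the same formula with the rotation applied in the y-z or x-z plane instead.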

以上对三种整体性操作进行说明。此外,还可以允许对场景组件组合进行局部操作。在一种实施方式中,在形成与初始图像对应的场景组件组合之后,组件生成方法还可以包括以下步骤:The above describes three overall operations. In addition, local operations may be performed on the scene component combination. In one embodiment, after forming the scene component combination corresponding to the initial image, the component generation method may further include the following steps:

响应于对场景组件组合中的任一场景组件的编辑指令,对场景组件的如下至少一种信息进行调整:尺寸、位置、方向、颜色、纹理、形态。In response to an editing instruction for any scene component in the scene component combination, at least one of the following information of the scene component is adjusted: size, position, direction, color, texture, and shape.

其中，用户可以对场景组件组合中的单个场景组件进行编辑，如可以单击、双击或长按所要编辑的场景组件以选中该场景组件，并通过进一步的手动操作生成编辑指令，以调整场景组件的信息。示例性的，在选中场景组件后，可以通过双指分开或合拢等操作改变其尺寸，可以拖动场景组件以移动其位置，可以沿特定的旋转轨迹进行滑动以控制场景组件改变方向，可以在游戏编辑场景的界面中为场景组件选择另一种颜色、纹理或形态，以改变其颜色、纹理或形态。由此，在生成场景组件组合的基础上，允许用户对场景组件进行灵活地编辑优化，使得场景组件组合能够更加符合用户的需求或偏好。The user can edit a single scene component in the scene component combination. For example, the user can single-click, double-click, or long-press the scene component to be edited to select that scene component, and generate editing instructions through further manual operations to adjust the information of the scene component. Exemplarily, after selecting a scene component, its size can be changed by operations such as spreading or pinching two fingers, the scene component can be dragged to move its position, and a slide along a specific rotation trajectory can control the scene component to change direction; another color, texture, or form can also be selected for the scene component in the interface of the game editing scene to change its color, texture, or form. Thus, on the basis of generating the scene component combination, the user is allowed to flexibly edit and optimize the scene components, so that the scene component combination can better meet the needs or preferences of the user.

在一种实施方式中,在形成与初始图像对应的场景组件组合之后,可以为场景组件组合设置关联的游戏事件。如可以设置游戏角色接近场景组件组合时,触发特定的游戏剧情,或者在达到特定的游戏时间时,将场景组件组合隐藏或移除等。In one embodiment, after forming a scene component combination corresponding to the initial image, an associated game event may be set for the scene component combination, such as triggering a specific game plot when a game character approaches the scene component combination, or hiding or removing the scene component combination when a specific game time is reached.

在一种实施方式中，通过终端设备进行游戏场景建立或发布的相关操作（如点击图2中的“发布地图”控件）后，可生成游戏编辑场景对应的游戏场景信息，该游戏场景信息可保存在预设位置，该预设位置可以是地图文件中，该地图文件不仅可保存游戏场景信息，还可保存其他的地图信息（包括但不限于截图、地图名、日志等信息）。地图文件保存游戏场景信息后可以被上传到服务器。服务器审核通过后可将游戏场景信息生成的游戏场景发布至预设地图池中，从而与服务器连接的终端设备可从服务器上下载相应的游戏场景信息，并通过游戏程序根据所下载的游戏场景信息生成对应的游戏场景，然后在该游戏场景中进行游戏体验。该方式可将游戏编辑器中的游戏场景信息进行发布，并被其他玩家体验，从而实现快速的UGC（User Generated Content，用户生成内容）功能。In one embodiment, after the relevant operations for establishing or publishing a game scene are performed through a terminal device (such as clicking the "Publish Map" control in Figure 2), game scene information corresponding to the game editing scene can be generated and saved in a preset location, which can be a map file. The map file can save not only the game scene information but also other map information (including but not limited to screenshots, map names, logs, and the like). After the map file saves the game scene information, it can be uploaded to the server. After the server review is approved, the game scene generated from the game scene information can be published to a preset map pool, so that terminal devices connected to the server can download the corresponding game scene information from the server, generate the corresponding game scene from the downloaded game scene information through the game program, and then experience the game in that game scene. In this way, the game scene information created in the game editor can be published and experienced by other players, enabling a fast UGC (User Generated Content) workflow.

图13示出了本示例性实施方式运行环境的系统架构图。该系统架构1300可以包括终端设备1310和服务器1320。服务器1320可以是提供游戏服务的后台系统,其可以是一台服务器,也可以是多台服务器的集群。终端设备1310和服务器1320之间可以通过有线或无线链路形成连接,以进行数据传输与交互。本示例性实施方式中的组件生成方法,可以完全由终端设备1310执行,也可以部分由终端设备1310执行,部分由服务器1320执行。例如,用户在终端设备1310上操作导入初始图像后,终端设备1310将初始图像发送至服务器1320,服务器1320可以通过预先配置的逻辑规则或人工智能引擎等对初始图像以及相关的用户指令(如生成组件的指令)进行处理,以划分图像区域,确定图像区域对应的场景组件和场景组件的信息,将图像区域、场景组件和场景组件的信息返回终端设备1310,终端设备1310基于这些信息生成场景组件,并形成场景组件组合。FIG. 13 shows a system architecture diagram of the operating environment of this exemplary embodiment. The system architecture 1300 may include a terminal device 1310 and a server 1320. The server 1320 may be a background system that provides game services, which may be a server or a cluster of multiple servers. The terminal device 1310 and the server 1320 may be connected by a wired or wireless link to perform data transmission and interaction. The component generation method in this exemplary embodiment may be performed entirely by the terminal device 1310, or may be performed partially by the terminal device 1310 and partially by the server 1320. For example, after the user operates to import the initial image on the terminal device 1310, the terminal device 1310 sends the initial image to the server 1320, and the server 1320 may process the initial image and related user instructions (such as instructions for generating components) by pre-configured logical rules or artificial intelligence engines, etc., to divide the image area, determine the scene components and scene component information corresponding to the image area, return the image area, scene components and scene component information to the terminal device 1310, and the terminal device 1310 generates scene components based on these information and forms a scene component combination.

本公开的示例性实施方式还提供一种游戏场景中的组件生成装置。参考图14所示,游戏场景中的组件生成装置1400可以包括以下程序模块:The exemplary embodiment of the present disclosure also provides a component generation device in a game scene. Referring to FIG. 14 , the component generation device 1400 in the game scene may include the following program modules:

图形用户界面处理模块1410,被配置为执行显示运行游戏程序所提供的图形用户界面,在图形用户界面中显示待编辑的游戏编辑场景以及多个场景组件选择控件,场景组件选择控件用于响应并根据操作指令在游戏编辑场景中生成对应场景组件;The graphical user interface processing module 1410 is configured to execute and display a graphical user interface provided by the running game program, display a game editing scene to be edited and a plurality of scene component selection controls in the graphical user interface, and the scene component selection controls are used to respond to and generate corresponding scene components in the game editing scene according to the operation instructions;

信息获取模块1420,被配置为执行在图形用户界面中提供一图像导入入口,并接受基于图像导入入口导入的初始图像;The information acquisition module 1420 is configured to provide an image import entry in the graphical user interface and accept an initial image imported based on the image import entry;

场景组件确定模块1430,被配置为执行将初始图像划分为多个图像区域,确定图像区域对应的场景组件和场景组件的信息;The scene component determination module 1430 is configured to divide the initial image into a plurality of image regions, and determine the scene components corresponding to the image regions and the information of the scene components;

组件生成模块1440,被配置为执行根据图像区域对应的场景组件的信息,在游戏编辑场景中生成图像区域对应的场景组件,形成与初始图像对应的场景组件组合。The component generation module 1440 is configured to generate the scene component corresponding to the image area in the game editing scene according to the information of the scene component corresponding to the image area, so as to form a scene component combination corresponding to the initial image.

在一种实施方式中，上述根据图像区域对应的场景组件的信息，在游戏编辑场景中生成图像区域对应的场景组件，包括：确定不同的图像区域对应的场景组件的生成顺序，其中，至少部分图像区域对应的场景组件的生成顺序与其他图像区域对应的场景组件的生成顺序不同；根据图像区域对应的场景组件的信息，并按照图像区域对应的场景组件的生成顺序，在游戏编辑场景中生成图像区域对应的场景组件。In one embodiment, generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components corresponding to the image areas includes: determining the generation order of the scene components corresponding to different image areas, wherein the generation order of the scene components corresponding to at least some of the image areas is different from the generation order of the scene components corresponding to other image areas; and generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components corresponding to the image areas and in the determined generation order.

在一种实施方式中,上述根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件,包括:在当前视野画面内,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程;其中,当前视野画面是通过设置于游戏编辑场景中的虚拟摄像机拍摄游戏编辑场景所形成的画面。In one embodiment, the above-mentioned process of generating scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area and in the generation order of the scene components corresponding to the image area includes: dynamically displaying the information of the scene components corresponding to the image area in the current field of view screen, and generating the scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area; wherein the current field of view screen is a screen formed by shooting the game editing scene with a virtual camera set in the game editing scene.

在一种实施方式中,上述在当前视野画面内,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程,包括:响应视野调整指令,控制调整虚拟摄像机的如下至少一种信息:位置、方向、焦距、视场角;根据调整后的虚拟摄像机采集游戏编辑场景形成调整后的视野画面;在调整后的视野画面中,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程。In one embodiment, the above-mentioned process of dynamically displaying information of scene components corresponding to the image area in the current field of view, and generating scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area, includes: responding to the field of view adjustment instruction, controlling to adjust at least one of the following information of the virtual camera: position, direction, focal length, and field of view angle; capturing the game editing scene according to the adjusted virtual camera to form an adjusted field of view picture; in the adjusted field of view picture, dynamically displaying information of scene components corresponding to the image area, and generating scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area.

在一种实施方式中,组件生成模块1440,还被配置为:在当前视野画面内,动态地显示根据图像区域对应的场景组件的信息,并按照图像区域对应的场景组件的生成顺序,在游戏编辑场景中生成图像区域对应的场景组件的过程时,锁定在游戏编辑场景中添加模型或编辑模型的操作,以禁止在游戏编辑场景中添加模型或编辑模型。In one embodiment, the component generation module 1440 is further configured to: dynamically display information of scene components corresponding to the image area within the current field of view, and when generating the scene components corresponding to the image area in the game editing scene according to the generation order of the scene components corresponding to the image area, lock the operation of adding or editing the model in the game editing scene to prohibit adding or editing the model in the game editing scene.

在一种实施方式中,场景组件的信息包括如下至少一种:尺寸、位置、方向、颜色、纹理、形态。上述根据图像区域对应的场景组件信息,在游戏编辑场景中生成图像区域对应的场景组件,包括:根据场景组件的信息调用对应的编辑指令,其中,编辑指令为游戏程序预先提供的指令;根据编辑指令,在游戏编辑场景中生成图像区域对应的场景组件。In one embodiment, the information of the scene component includes at least one of the following: size, position, direction, color, texture, and shape. The above-mentioned generating the scene component corresponding to the image area in the game editing scene according to the scene component information corresponding to the image area includes: calling the corresponding editing instruction according to the information of the scene component, wherein the editing instruction is an instruction pre-provided by the game program; generating the scene component corresponding to the image area in the game editing scene according to the editing instruction.

在一种实施方式中,信息获取模块1420,还被配置为:获取场景组件组合的资源量化参数;上述将初始图像划分为多个图像区域,确定图像区域对应的场景组件和场景组件的信息,包括:基于资源量化参数对初始图像进行采样,并基于采样结果得到目标图像;以目标图像的像素点作为一个图像区域,确定图像区域对应的场景组件和场景组件的信息。In one embodiment, the information acquisition module 1420 is further configured to: obtain resource quantization parameters of the scene component combination; the above-mentioned dividing the initial image into multiple image areas, determining the scene components and information of the scene components corresponding to the image areas, including: sampling the initial image based on the resource quantization parameters, and obtaining the target image based on the sampling results; taking the pixel points of the target image as an image area, determining the scene components and information of the scene components corresponding to the image area.

在一种实施方式中,资源量化参数包括场景组件组合的场景组件数量。上述基于资源量化参数对初始图像进行采样,并基于采样结果得到目标图像,包括:以场景组件组合的场景组件数量作为采样后的像素数量,对初始图像进行采样,并基于采样结果得到目标图像。In one embodiment, the resource quantization parameter includes the number of scene components of the scene component combination. The above-mentioned sampling of the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result includes: sampling the initial image using the number of scene components of the scene component combination as the number of pixels after sampling, and obtaining the target image based on the sampling result.
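
One plausible way to realize the sampling described above is a nearest-neighbour downsample in which the target pixel count equals the scene-component budget, so each target pixel stands for one image region. This pure-Python sketch over nested lists of RGB tuples is an assumption; a real implementation would likely use an image library.

```python
# Hypothetical sketch: downsample the initial image so that the sampled
# pixel count matches the scene-component count; each target pixel then
# corresponds to one image region / one scene component.

def downsample(image, target_w, target_h):
    """Nearest-neighbour downsample of a row-major list of RGB tuples."""
    src_h, src_w = len(image), len(image[0])
    out = []
    for ty in range(target_h):
        row = []
        for tx in range(target_w):
            sx = tx * src_w // target_w   # representative source column
            sy = ty * src_h // target_h   # representative source row
            row.append(image[sy][sx])
        out.append(row)
    return out

# 4x4 synthetic image reduced to a 2x2 target (4 scene components).
img = [[(y * 10 + x,) * 3 for x in range(4)] for y in range(4)]
print(downsample(img, 2, 2))
```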

在一种实施方式中,上述以目标图像的像素点作为一个图像区域,确定图像区域对应的场景组件和场景组件的信息,包括:根据目标图像的像素数量和场景组件的初始尺寸,确定场景组件组合的初始尺寸;若场景组件组合的初始尺寸处于预设尺寸范围内,则保持场景组件的初始尺寸不变;若场景组件组合的初始尺寸超出预设尺寸范围,则对场景组件的初始尺寸进行调整,使得在调整初始尺寸后场景组件组合的初始尺寸处于预设尺寸范围内。In one embodiment, the above-mentioned method of taking the pixel points of the target image as an image area and determining the scene components and information of the scene components corresponding to the image area includes: determining the initial size of the scene component combination according to the number of pixels of the target image and the initial size of the scene component; if the initial size of the scene component combination is within a preset size range, keeping the initial size of the scene component unchanged; if the initial size of the scene component combination exceeds the preset size range, adjusting the initial size of the scene component so that after the initial size is adjusted, the initial size of the scene component combination is within the preset size range.
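
The size check described above can be sketched as: compute the combination's largest extent from the pixel grid and the per-component initial size, and rescale the per-component size only when that extent leaves the preset range. The range bounds and names below are hypothetical.

```python
# Hypothetical sketch of keeping the scene component combination's
# initial size within a preset range by adjusting the per-component size.

def fit_component_size(unit_size, grid_w, grid_h, min_extent=1.0, max_extent=100.0):
    """Return a per-component size that keeps the combination within range."""
    extent = unit_size * max(grid_w, grid_h)      # combination's largest dimension
    if extent > max_extent:
        return unit_size * max_extent / extent    # shrink components proportionally
    if extent < min_extent:
        return unit_size * min_extent / extent    # enlarge components proportionally
    return unit_size                              # already within the preset range

print(fit_component_size(2.0, 100, 50))  # → 1.0: a 200-unit-wide grid is capped at 100 units
```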

在一种实施方式中,信息获取模块1420,还被配置为:获取场景组件组合的目标颜色数量。上述基于资源量化参数对初始图像进行采样,并基于采样结果得到目标图像,包括:基于资源量化参数对初始图像进行采样,得到采样图像;基于目标颜色数量对采样图像的像素点颜色值进行颜色映射,得到目标图像;上述确定图像区域对应的场景组件和场景组件的信息,包括:将目标图像的像素点的颜色值作为像素点对应的场景组件的颜色值。In one embodiment, the information acquisition module 1420 is further configured to: obtain the target color quantity of the scene component combination. The above-mentioned sampling of the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result include: sampling the initial image based on the resource quantization parameter to obtain the sampled image; color mapping the pixel color values of the sampled image based on the target color quantity to obtain the target image; the above-mentioned determination of the scene component corresponding to the image area and the information of the scene component include: using the color value of the pixel of the target image as the color value of the scene component corresponding to the pixel.

在一种实施方式中,上述基于目标颜色数量对采样图像的像素点颜色值进行颜色映射,得到目标图像,包括:基于目标颜色数量对采样图像的像素点颜色值进行聚类,得到多个颜色类别,颜色类别的数量等于目标颜色数量;将颜色类别中的像素点颜色值映射为与颜色类别最接近的预设颜色。In one embodiment, the above-mentioned color mapping of the pixel color values of the sampled image based on the target color number to obtain the target image includes: clustering the pixel color values of the sampled image based on the target color number to obtain multiple color categories, and the number of color categories is equal to the target color number; mapping the pixel color values in the color category to a preset color closest to the color category.
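
As a hedged illustration of the clustering step, the sketch below runs a tiny k-means over pixel colors with k equal to the target color count, then snaps each cluster to the nearest color in a preset palette. The naive initialization, iteration count, and palette are assumptions.

```python
# Rough sketch of colour quantisation: cluster pixel colours into the
# target colour count, then map each cluster to the closest preset colour.

def dist2(a, b):
    """Squared Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_colors(pixels, k, iters=10):
    # Spread initial centers across the pixel list (assumes len(pixels) >= k).
    centers = pixels[::max(1, len(pixels) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        # Recompute each center as its cluster mean; keep empty clusters' centers.
        centers = [
            tuple(sum(c[d] for c in cl) / len(cl) for d in range(3)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

def map_to_palette(pixels, k, palette):
    """Map every pixel colour to the preset colour closest to its cluster."""
    centers = kmeans_colors(pixels, k)
    snapped = [min(palette, key=lambda c: dist2(c, ctr)) for ctr in centers]
    return [snapped[min(range(k), key=lambda i: dist2(p, centers[i]))] for p in pixels]
```

For instance, with two near-red and two near-blue pixels, k=2, and a red/green/blue palette, the reds map to pure red and the blues to pure blue.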

在一种实施方式中，上述多个场景组件选择控件中包括第一场景组件选择控件和第二场景组件选择控件，第二场景组件选择控件对应的第二场景组件的形状与通过预设数量的第一场景组件选择控件对应的第一场景组件拼接成的组件的形状相对应；上述像素点对应的场景组件为第一场景组件。上述确定图像区域对应的场景组件和场景组件的信息，还包括：在确定像素点对应的第一场景组件的颜色值后，若存在多个相邻、且颜色值相同的第一场景组件，则将多个相邻、且颜色值相同的第一场景组件合并为第二场景组件。In one embodiment, the plurality of scene component selection controls include a first scene component selection control and a second scene component selection control, and the shape of the second scene component corresponding to the second scene component selection control corresponds to the shape of a component formed by splicing together a preset number of the first scene components corresponding to the first scene component selection control; the scene component corresponding to the pixel point is the first scene component. The above-mentioned determining of the scene components corresponding to the image areas and the information of the scene components further includes: after the color value of the first scene component corresponding to each pixel point is determined, if there are multiple adjacent first scene components with the same color value, merging the multiple adjacent first scene components with the same color value into a second scene component.

在一种实施方式中,上述多个场景组件选择控件中包括第一场景组件选择控件,上述像素点对应的场景组件为第一场景组件选择控件对应的第一场景组件。上述确定图像区域对应的场景组件和场景组件的信息,还包括:在确定像素点对应的第一场景组件的颜色值后,若存在多个相邻、且颜色值相同的第一场景组件,则将多个相邻、且颜色值相同的第一场景组件合并为一个第一场景组件,并根据多个相邻、且颜色值相同的第一场景组件的数量确定合并后的第一场景组件的尺寸。 In one embodiment, the plurality of scene component selection controls include a first scene component selection control, and the scene component corresponding to the pixel point is the first scene component corresponding to the first scene component selection control. The above-mentioned determination of the scene component corresponding to the image area and the information of the scene component also includes: after determining the color value of the first scene component corresponding to the pixel point, if there are multiple adjacent first scene components with the same color value, the multiple adjacent first scene components with the same color value are merged into one first scene component, and the size of the merged first scene component is determined according to the number of the multiple adjacent first scene components with the same color value.
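
The run-merging described above can be illustrated along a single row of block components: consecutive components with the same color value collapse into one component whose width grows with the run length. The (color, width) representation is an assumption.

```python
# Illustrative sketch: merge runs of adjacent, same-coloured block
# components in a row into single wider blocks, sized by the run length.

def merge_row(colors, unit=1.0):
    """Collapse consecutive equal colours into (color, width) blocks."""
    merged = []
    for color in colors:
        if merged and merged[-1][0] == color:
            merged[-1] = (color, merged[-1][1] + unit)   # grow the last block
        else:
            merged.append((color, unit))                 # start a new block
    return merged

print(merge_row(["red", "red", "red", "blue", "red"]))
# → [('red', 3.0), ('blue', 1.0), ('red', 1.0)]
```

A full implementation would merge in two dimensions across the pixel grid, but the row case captures the idea of reducing component count while preserving the image's appearance.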

在一种实施方式中,第一场景组件为方块组件,方块组件的形状为立方体。In one implementation, the first scene component is a block component, and the shape of the block component is a cube.

在一种实施方式中,游戏编辑场景中设置有虚拟摄像机,用于实时拍摄并显示游戏编辑场景的当前画面。上述在游戏编辑场景中生成图像区域对应的场景组件,包括:于虚拟摄像机的视野范围内生成图像区域对应的场景组件。In one embodiment, a virtual camera is provided in the game editing scene for real-time shooting and displaying the current screen of the game editing scene. The above-mentioned generating a scene component corresponding to the image area in the game editing scene includes: generating a scene component corresponding to the image area within the field of view of the virtual camera.

在一种实施方式中,上述于虚拟摄像机的视野范围内生成图像区域对应的场景组件,包括:于虚拟摄像机的视野范围内、与虚拟摄像机的光轴垂直、且与虚拟摄像机间的距离为预设距离的平面上,生成图像区域对应的场景组件。In one embodiment, the above-mentioned scene component corresponding to the image area is generated within the field of view of the virtual camera, including: generating the scene component corresponding to the image area on a plane within the field of view of the virtual camera, perpendicular to the optical axis of the virtual camera, and at a preset distance from the virtual camera.

在一种实施方式中，上述确定图像区域对应的场景组件和场景组件的信息，包括：根据虚拟摄像机的位姿确定场景组件组合的基准点位置；根据图像区域对应的场景组件在场景组件组合中的相对位置以及基准点位置，确定图像区域对应的场景组件的位置。In one embodiment, the above-mentioned determining of the scene components corresponding to the image areas and the information of the scene components includes: determining the reference point position of the scene component combination according to the pose of the virtual camera; and determining the position of the scene component corresponding to the image area according to the relative position of that scene component within the scene component combination and the reference point position.
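
A minimal sketch of the placement logic above, assuming the base (reference) point is taken a preset distance along the camera's forward direction and each component's world position is the base point plus its relative offset within the combination; all names and the plain-tuple vector math are illustrative.

```python
# Hypothetical sketch: derive the combination's reference point from the
# virtual camera's pose, then place each component at reference + offset.

def base_point(cam_pos, cam_forward, preset_distance):
    """Reference point: a preset distance along the camera's forward axis."""
    return tuple(p + d * preset_distance for p, d in zip(cam_pos, cam_forward))

def world_positions(cam_pos, cam_forward, preset_distance, relative_offsets):
    """World position of each component = reference point + relative offset."""
    base = base_point(cam_pos, cam_forward, preset_distance)
    return [tuple(b + o for b, o in zip(base, off)) for off in relative_offsets]

# Camera at the origin looking down +x; components placed 10 units ahead.
print(world_positions((0, 0, 0), (1, 0, 0), 10.0, [(0, 0, 0), (0, 1, 0)]))
# → [(10.0, 0.0, 0.0), (10.0, 1.0, 0.0)]
```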

在一种实施方式中,上述根据图像区域对应的场景组件的信息,在游戏编辑场景中生成图像区域对应的场景组件,形成与初始图像对应的场景组件组合,包括:将场景组件组合的生成任务添加到生成队列中;当执行到场景组件组合的生成任务时,根据图像区域对应的场景组件的信息,在游戏编辑场景中生成图像区域对应的场景组件,形成与初始图像对应的场景组件组合。In one embodiment, the above-mentioned generating scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area to form a scene component combination corresponding to the initial image includes: adding the generation task of the scene component combination to the generation queue; when the generation task of the scene component combination is executed, generating the scene components corresponding to the image area in the game editing scene according to the information of the scene components corresponding to the image area to form a scene component combination corresponding to the initial image.
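
The queueing described above can be sketched with a FIFO: generation tasks are appended, and the editor processes one task at a time. `collections.deque` stands in for the engine's task queue; the callables are placeholders.

```python
from collections import deque

# Sketch: serialise component-group generation through a FIFO queue so
# that only one build runs at a time; a real editor would drain this on
# its update loop.

generation_queue = deque()

def enqueue_generation(task):
    """Add a component-group generation task to the back of the queue."""
    generation_queue.append(task)

def process_next():
    """Pop and run the oldest pending generation task, if any."""
    if generation_queue:
        task = generation_queue.popleft()
        return task()          # build the component group for this task
    return None

enqueue_generation(lambda: "group-A generated")
enqueue_generation(lambda: "group-B generated")
print(process_next())  # → group-A generated
print(process_next())  # → group-B generated
```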

在一种实施方式中,游戏场景中的组件生成装置1400还可以包括模型编辑模块,被配置为:在组件生成模块1440形成与初始图像对应的场景组件组合之后,响应于对场景组件组合的移动操作,移动场景组件组合在游戏编辑场景中的位置。In one embodiment, the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, in response to a move operation on the scene component combination, move the position of the scene component combination in the game editing scene.

在一种实施方式中,游戏场景中的组件生成装置1400还可以包括模型编辑模块,被配置为:在组件生成模块1440形成与初始图像对应的场景组件组合之后,响应于对场景组件组合的缩放操作,改变场景组件组合的尺寸。In one embodiment, the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, change the size of the scene component combination in response to a scaling operation on the scene component combination.

在一种实施方式中,游戏编辑场景中显示场景组件组合的参考坐标系的三个轴。上述响应于对场景组件组合的缩放操作,改变场景组件组合的尺寸,包括:若将场景组件组合设置为三轴缩放,则响应于缩放操作,在三个轴上等比例地改变场景组件组合的尺寸;若将场景组件组合设置为平面缩放,则响应于沿预设平面的缩放操作,在预设平面内改变场景组件组合的尺寸;预设平面是由三个轴中的两个轴形成的平面;若将场景组件组合设置为单轴缩放,则响应于沿三个轴中的一个轴的缩放操作,在该轴上改变场景组件组合的尺寸。In one embodiment, three axes of the reference coordinate system of the scene component combination are displayed in the game editing scene. The above-mentioned changing the size of the scene component combination in response to the scaling operation on the scene component combination includes: if the scene component combination is set to three-axis scaling, then in response to the scaling operation, the size of the scene component combination is proportionally changed on the three axes; if the scene component combination is set to plane scaling, then in response to the scaling operation along the preset plane, the size of the scene component combination is changed within the preset plane; the preset plane is a plane formed by two axes of the three axes; if the scene component combination is set to single-axis scaling, then in response to the scaling operation along one of the three axes, the size of the scene component combination is changed on the axis.

在一种实施方式中,游戏场景中的组件生成装置1400还可以包括模型编辑模块,被配置为:在组件生成模块1440形成与初始图像对应的场景组件组合之后,响应于对场景组件组合的旋转操作,控制场景组件组合进行旋转。In one embodiment, the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, in response to a rotation operation on the scene component combination, control the scene component combination to rotate.

在一种实施方式中,游戏编辑场景中显示用于表示旋转方向的三个弧形。上述响应于对场景组件组合的旋转操作,控制场景组件组合进行旋转,包括:响应于沿三个弧形中的任一弧形的旋转操作,以任一弧形所在平面的法线为旋转轴,控制场景组件组合绕旋转轴旋转。In one embodiment, three arcs are displayed in the game editing scene to indicate the rotation direction. In response to the rotation operation on the scene component combination, controlling the scene component combination to rotate includes: in response to the rotation operation along any of the three arcs, taking the normal of the plane where any of the arcs is located as the rotation axis, controlling the scene component combination to rotate around the rotation axis.

在一种实施方式中,游戏场景中的组件生成装置1400还可以包括模型编辑模块,被配置为:在组件生成模块1440形成与初始图像对应的场景组件组合之后,响应于对场景组件组合中的任一场景组件的编辑指令,对场景组件的如下至少一种信息进行调整:尺寸、位置、方向、颜色、纹理、形态。In one embodiment, the component generation device 1400 in the game scene may also include a model editing module, which is configured to: after the component generation module 1440 forms a scene component combination corresponding to the initial image, in response to an editing instruction for any scene component in the scene component combination, adjust at least one of the following information of the scene component: size, position, direction, color, texture, and shape.

上述装置中各部分的具体细节在方法部分实施方式中已经详细说明，未披露的细节内容可以参见方法部分的实施方式内容，因而不再赘述。The specific details of each part of the above device have been described in detail in the method embodiments; for details not disclosed here, reference may be made to the method embodiments, which will not be repeated.

本公开的示例性实施方式还提供了一种计算机可读存储介质,可以实现为一种程序产品的形式,其包括程序代码,当程序产品在电子设备上运行时,程序代码用于使电子设备执行本说明书上述“示例性方法”部分中描述的根据本公开各种示例性实施方式的步骤。在一种可选的实施方式中,该程序产品可以实现为便携式紧凑盘只读存储器(CD-ROM)并包括程序代码,并可以在电子设备,例如个人电脑上运行。然而,本公开的程序产品不限于此,在本文件中,可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。The exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which can be implemented in the form of a program product, which includes a program code, and when the program product is run on an electronic device, the program code is used to cause the electronic device to perform the steps described in the above "Exemplary Method" section of this specification according to various exemplary embodiments of the present disclosure. In an optional embodiment, the program product can be implemented as a portable compact disk read-only memory (CD-ROM) and includes program code, and can be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited to this, and in this document, the readable storage medium can be any tangible medium containing or storing a program, which can be used by or in combination with an instruction execution system, device or device.

程序产品可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以为但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。The program product may use any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination of the above. More specific examples of readable storage media (a non-exhaustive list) include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.

计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了可读程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。可读信号介质还可以是可读存储介质以外的任何可读介质，该可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the foregoing.

Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).

Exemplary embodiments of the present disclosure also provide an electronic device, such as the terminal device 1310 or the server 1320 described above. The electronic device may include a processor and a memory. The memory stores instructions executable by the processor, such as program code. The processor performs the method of the present exemplary embodiment by executing the executable instructions. In addition, the electronic device may further include a display for presenting a graphical user interface.

Referring to FIG. 15, the electronic device is described below by way of example in the form of a general-purpose computing device. It should be understood that the electronic device 1500 shown in FIG. 15 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.

As shown in FIG. 15, the electronic device 1500 may include a processor 1510, a memory 1520, a bus 1530, an I/O (input/output) interface 1540, a network adapter 1550, and a display 1560.

The memory 1520 may include volatile memory, such as RAM 1521 and a cache unit 1522, and may also include non-volatile memory, such as ROM 1523. The memory 1520 may further include one or more program modules 1524, including but not limited to an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. For example, the program modules 1524 may include the modules of the apparatus described above.

The bus 1530 connects the different components of the electronic device 1500 and may include a data bus, an address bus, and a control bus.

The electronic device 1500 may communicate with one or more external devices 1600 (e.g., a keyboard, a mouse, an external controller, etc.) through the I/O interface 1540.

The electronic device 1500 may communicate with one or more networks through the network adapter 1550. For example, the network adapter 1550 may provide mobile communication solutions such as 3G/4G/5G, or wireless communication solutions such as wireless LAN, Bluetooth, and near-field communication. The network adapter 1550 may communicate with the other modules of the electronic device 1500 through the bus 1530.

The electronic device 1500 may display a graphical user interface through the display 1560, for example, a game editing scene.

Although not shown in FIG. 15, other hardware and/or software modules may be provided in the electronic device 1500, including but not limited to microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.

It should be noted that, although several modules or units of the device for performing actions are mentioned in the detailed description above, this division is not mandatory. In fact, according to the exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units.

Those skilled in the art will appreciate that various aspects of the present disclosure may be implemented as a system, a method, or a program product. Accordingly, various aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software, which may collectively be referred to herein as a "circuit," "module," or "system." Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow the general principles of the present disclosure and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.

It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (27)

1. A method for generating components in a game scene, the method comprising:
displaying a graphical user interface provided by running a game program, wherein a game editing scene to be edited and a plurality of scene component selection controls are displayed in the graphical user interface, the scene component selection controls being configured to respond to operation instructions and generate corresponding scene components in the game editing scene accordingly;
providing an image import entry in the graphical user interface, and receiving an initial image imported through the image import entry;
dividing the initial image into a plurality of image areas, and determining a scene component corresponding to each image area and information of the scene component;
generating, according to the information of the scene component corresponding to the image area, the scene component corresponding to the image area in the game editing scene, to form a scene component combination corresponding to the initial image.
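Outside the claim language itself, the overall flow of claim 1 — divide an imported image into areas and derive one scene-component description per area — can be pictured with a minimal sketch. All names below are hypothetical illustrations, not the claimed implementation:

```python
# Illustrative sketch of claim 1's pipeline (hypothetical names): divide an
# image into a grid of areas and derive one scene-component description
# (here: average color plus grid position) per area.

def divide_into_areas(image, rows, cols):
    """Split an image (2D list of RGB tuples) into a rows x cols grid of areas."""
    h, w = len(image), len(image[0])
    areas = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            areas.append([(y, x) for y in range(y0, y1) for x in range(x0, x1)])
    return areas

def component_for_area(image, area, grid_pos):
    """Derive a component description: color = area average, position = grid cell."""
    n = len(area)
    avg = tuple(sum(image[y][x][ch] for y, x in area) // n for ch in range(3))
    return {"color": avg, "position": grid_pos}

# Toy 2x2 image: top row red, bottom row blue.
image = [[(255, 0, 0), (255, 0, 0)], [(0, 0, 255), (0, 0, 255)]]
areas = divide_into_areas(image, 2, 2)
components = [component_for_area(image, a, (i // 2, i % 2))
              for i, a in enumerate(areas)]
```

The sketch fixes a grid division for brevity; the claims leave the division scheme open.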
2. The method according to claim 1, wherein generating the scene component corresponding to the image area in the game editing scene according to the information of the scene component corresponding to the image area comprises:
determining a generation order of the scene components corresponding to different image areas, wherein the generation order of the scene components corresponding to at least some image areas differs from the generation order of the scene components corresponding to other image areas;
generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components corresponding to the image areas and in accordance with the generation order of the scene components corresponding to the image areas.

3. The method according to claim 2, wherein generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components and in accordance with their generation order comprises:
dynamically displaying, within a current field-of-view picture, the process of generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components and in accordance with their generation order; wherein the current field-of-view picture is a picture formed by capturing the game editing scene with a virtual camera arranged in the game editing scene.

4. The method according to claim 3, wherein dynamically displaying, within the current field-of-view picture, the process of generating the scene components corresponding to the image areas in the game editing scene comprises:
in response to a field-of-view adjustment instruction, adjusting at least one of the following information of the virtual camera: position, direction, focal length, and field-of-view angle;
capturing the game editing scene with the adjusted virtual camera to form an adjusted field-of-view picture;
dynamically displaying, in the adjusted field-of-view picture, the process of generating the scene components corresponding to the image areas in the game editing scene according to the information of the scene components and in accordance with their generation order.
5. The method according to claim 3, wherein, while the process of generating the scene components corresponding to the image areas in the game editing scene is dynamically displayed within the current field-of-view picture, the method further comprises:
locking operations of adding or editing components in the game editing scene, so as to prohibit adding or editing components in the game editing scene.

6. The method according to claim 1, wherein the information of the scene component comprises at least one of the following: size, position, direction, color, texture, and shape;
and wherein generating the scene component corresponding to the image area in the game editing scene according to the scene component information corresponding to the image area comprises:
invoking a corresponding editing instruction according to the information of the scene component, wherein the editing instruction is an instruction provided in advance by the game program;
generating, according to the editing instruction, the scene component corresponding to the image area in the game editing scene.
7. The method according to claim 1, wherein the method further comprises:
obtaining a resource quantization parameter of the scene component combination;
and wherein dividing the initial image into a plurality of image areas and determining the scene component corresponding to each image area and the information of the scene component comprises:
sampling the initial image based on the resource quantization parameter, and obtaining a target image based on the sampling result;
taking each pixel of the target image as one image area, and determining the scene component corresponding to the image area and the information of the scene component.

8. The method according to claim 7, wherein the resource quantization parameter of the scene component combination comprises a number of scene components of the scene component combination; and wherein sampling the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result comprises:
sampling the initial image with the number of scene components of the scene component combination as the number of pixels after sampling, and obtaining the target image based on the sampling result.

9. The method according to claim 7, wherein taking each pixel of the target image as one image area and determining the scene component corresponding to the image area and the information of the scene component comprises:
determining an initial size of the scene component combination according to the number of pixels of the target image and an initial size of the scene component;
if the initial size of the scene component combination is within a preset size range, keeping the initial size of the scene component unchanged;
if the initial size of the scene component combination exceeds the preset size range, adjusting the initial size of the scene component such that, after the adjustment, the initial size of the scene component combination is within the preset size range.

10. The method according to claim 7, wherein the method further comprises:
obtaining a target color quantity of the scene component combination;
wherein sampling the initial image based on the resource quantization parameter and obtaining the target image based on the sampling result comprises:
sampling the initial image based on the resource quantization parameter to obtain a sampled image;
performing color mapping on pixel color values of the sampled image based on the target color quantity to obtain the target image;
and wherein determining the scene component corresponding to the image area and the information of the scene component comprises:
taking the color value of a pixel of the target image as the color value of the scene component corresponding to that pixel.
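The sampling and size-adjustment steps of claims 7 to 9 — downsample the image so the sampled pixel count matches the component count, then clamp the per-component size so the combination stays within a preset range — might look like the following sketch. The function and parameter names are hypothetical, and a nearest-neighbour scheme is chosen only for brevity:

```python
# Sketch of claims 7-9 (hypothetical names): downsample the initial image to
# the component count, then clamp the per-component size so that the total
# size of the combination stays within a preset range.

def downsample(image, out_w, out_h):
    """Nearest-neighbour downsampling of a 2D list of pixel values."""
    h, w = len(image), len(image[0])
    return [[image[y * h // out_h][x * w // out_w] for x in range(out_w)]
            for y in range(out_h)]

def clamp_component_size(pixels_per_side, base_size, min_total, max_total):
    """Adjust component size so total = pixels_per_side * size stays in range."""
    total = pixels_per_side * base_size
    if total < min_total:
        return min_total / pixels_per_side
    if total > max_total:
        return max_total / pixels_per_side
    return base_size  # already within the preset range

# 4x4 image sampled down to 2x2: one target pixel per scene component.
target = downsample([[1, 2, 3, 4]] * 4, 2, 2)
# 2 components of size 100 would span 200 units; clamp to a 50-unit maximum.
size = clamp_component_size(pixels_per_side=2, base_size=100.0,
                            min_total=1.0, max_total=50.0)
```

A one-dimensional total is used here for clarity; the claims describe the same clamping idea for the combination's overall size.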
11. The method according to claim 10, wherein performing color mapping on the pixel color values of the sampled image based on the target color quantity to obtain the target image comprises:
clustering the pixel color values of the sampled image based on the target color quantity to obtain a plurality of color categories, the number of color categories being equal to the target color quantity;
mapping the pixel color values in each color category to the preset color closest to that color category.

12. The method according to claim 10, wherein the plurality of scene component selection controls comprise a first scene component selection control and a second scene component selection control, the shape of the second scene component corresponding to the second scene component selection control corresponding to the shape of a component formed by splicing a preset number of first scene components corresponding to the first scene component selection control; and the scene component corresponding to each pixel is a first scene component;
and wherein determining the scene component corresponding to the image area and the information of the scene component further comprises:
after the color value corresponding to each pixel is determined, if there are multiple adjacent pixels with the same color value, merging the first scene components corresponding to the multiple adjacent pixels with the same color value into a second scene component.
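Claim 11's color reduction — cluster the sampled pixel colors into as many categories as the target color quantity, then map each category to its nearest preset color — can be sketched with a toy single-channel k-means. Everything here (names, the 1-D simplification, the palette) is a hypothetical illustration:

```python
# Toy sketch of claim 11 (hypothetical, single-channel for brevity):
# cluster pixel values into k categories, then map each cluster to the
# nearest preset palette value.

def kmeans_1d(values, k, iters=10):
    """Tiny 1-D k-means; returns one centroid per color category."""
    centroids = sorted(values)[::max(1, len(values) // k)][:k]  # spread seeds
    for _ in range(iters):
        buckets = [[] for _ in centroids]
        for v in values:
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            buckets[idx].append(v)
        centroids = [sum(b) / len(b) if b else c
                     for b, c in zip(buckets, centroids)]
    return centroids

def map_to_palette(value, centroids, palette):
    """Map a pixel value to the preset color nearest to its cluster centroid."""
    c = min(centroids, key=lambda m: abs(value - m))
    return min(palette, key=lambda p: abs(p - c))

vals = [10, 12, 11, 200, 205, 198]          # two natural clusters
cents = kmeans_1d(vals, k=2)
mapped = [map_to_palette(v, cents, palette=[0, 128, 255]) for v in vals]
```

In a real RGB setting the same structure applies with vector distances instead of absolute differences.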
13. The method according to claim 10, wherein the plurality of scene component selection controls comprise a first scene component selection control, and the scene component corresponding to each pixel is the first scene component corresponding to the first scene component selection control;
and wherein determining the scene component corresponding to the image area and the information of the scene component further comprises:
after the color value corresponding to each pixel is determined, if there are multiple adjacent pixels with the same color value, merging the first scene components corresponding to the multiple adjacent pixels with the same color value into one first scene component, and determining the size of the merged first scene component according to the number of the multiple adjacent pixels with the same color value.

14. The method according to claim 12 or 13, wherein the first scene component is a block component, and the shape of the block component is a cube.

15. The method according to claim 1, wherein a virtual camera is arranged in the game editing scene for capturing and displaying a current picture of the game editing scene in real time;
and wherein generating the scene component corresponding to the image area in the game editing scene comprises:
generating the scene component corresponding to the image area within the field of view of the virtual camera.
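The merging described in claims 12 to 14 — replacing runs of adjacent same-color pixels with one larger component — can be sketched as a run-length pass along a row. The names are hypothetical, and the claims do not fix any particular traversal order:

```python
# Sketch of claims 12-14 (hypothetical): merge horizontal runs of
# same-color pixels into one component whose size grows with run length.

def merge_row(colors, base_size=1):
    """Collapse adjacent equal colors into (color, size) components."""
    components = []
    for color in colors:
        if components and components[-1]["color"] == color:
            components[-1]["size"] += base_size  # grow the merged component
        else:
            components.append({"color": color, "size": base_size})
    return components

row = ["red", "red", "red", "blue", "blue", "red"]
merged = merge_row(row)
# Three components: a 3-wide red, a 2-wide blue, and a 1-wide red.
```

A full 2-D merge (e.g. flood fill over equal-colored neighbours) follows the same idea across both axes.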
16. The method according to claim 15, wherein generating the scene component corresponding to the image area within the field of view of the virtual camera comprises:
generating the scene component corresponding to the image area on a plane that is within the field of view of the virtual camera, perpendicular to the optical axis of the virtual camera, and at a preset distance from the virtual camera.

17. The method according to claim 15 or 16, wherein determining the scene component corresponding to the image area and the information of the scene component comprises:
determining a reference point position of the scene component combination according to the pose of the virtual camera;
determining the position of the scene component corresponding to the image area according to the relative position of the scene component within the scene component combination and the reference point position.
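One way to picture the placement in claims 16 and 17 — a reference point at a preset distance along the camera's optical axis, with each component offset by its relative position within the combination — is the following sketch. It assumes an axis-aligned camera (forward along +z) purely for brevity; names are hypothetical:

```python
# Sketch of claims 16-17 (hypothetical, axis-aligned camera for brevity):
# the combination's reference point sits a preset distance along the
# camera's forward axis; each component is offset within that plane.

def reference_point(cam_pos, cam_forward, distance):
    """Point `distance` units in front of the camera along its optical axis."""
    return tuple(p + distance * f for p, f in zip(cam_pos, cam_forward))

def component_position(ref, rel_x, rel_y, spacing=1.0):
    """Offset a component from the reference point within the placement plane."""
    # With forward = +z, the plane perpendicular to the optical axis
    # spans the x and y axes; z stays at the preset distance.
    return (ref[0] + rel_x * spacing, ref[1] + rel_y * spacing, ref[2])

ref = reference_point(cam_pos=(0.0, 0.0, 0.0),
                      cam_forward=(0.0, 0.0, 1.0), distance=5.0)
pos = component_position(ref, rel_x=2, rel_y=-1)
```

For an arbitrary camera pose, the same offsets would be expressed in the camera's right/up basis vectors rather than the world x/y axes.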
18. The method according to claim 1, wherein generating, according to the information of the scene component corresponding to the image area, the scene component corresponding to the image area in the game editing scene to form the scene component combination corresponding to the initial image comprises:
adding a generation task for the scene component combination to a generation queue;
when the generation task for the scene component combination is executed, generating the scene component corresponding to the image area in the game editing scene according to the information of the scene component, to form the scene component combination corresponding to the initial image.

19. The method according to claim 1, wherein, after the scene component combination corresponding to the initial image is formed, the method further comprises:
in response to a moving operation on the scene component combination, moving the position of the scene component combination in the game editing scene.

20. The method according to claim 1, wherein, after the scene component combination corresponding to the initial image is formed, the method further comprises:
in response to a scaling operation on the scene component combination, changing the size of the scene component combination.
21. The method according to claim 20, wherein three axes of a reference coordinate system of the scene component combination are displayed in the game editing scene; and wherein changing the size of the scene component combination in response to the scaling operation comprises:
if the scene component combination is set to three-axis scaling, changing the size of the scene component combination proportionally on the three axes in response to the scaling operation;
if the scene component combination is set to plane scaling, changing the size of the scene component combination within a preset plane in response to the scaling operation along the preset plane, the preset plane being a plane formed by two of the three axes;
if the scene component combination is set to single-axis scaling, changing the size of the scene component combination on one of the three axes in response to the scaling operation along that axis.

22. The method according to claim 1, wherein, after the scene component combination corresponding to the initial image is formed, the method further comprises:
in response to a rotation operation on the scene component combination, controlling the scene component combination to rotate.
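The three scaling modes of claim 21 amount to selecting which axes a scale factor applies to. A minimal sketch (hypothetical names; the mode strings are illustrative only) could be:

```python
# Sketch of claim 21's scaling modes (hypothetical): apply a scale factor
# to all three axes, to the two axes of a preset plane, or to one axis.

def scale(size, factor, mode, axes=(0, 1, 2)):
    """Scale an (x, y, z) size tuple on the axes selected by `mode`."""
    if mode == "three-axis":
        selected = {0, 1, 2}               # proportional on all three axes
    elif mode == "plane":
        selected = set(axes[:2])           # the two axes forming the plane
    elif mode == "single-axis":
        selected = {axes[0]}               # the single scaled axis
    else:
        raise ValueError(f"unknown mode: {mode}")
    return tuple(s * factor if i in selected else s
                 for i, s in enumerate(size))

scaled_all = scale((2.0, 2.0, 2.0), 1.5, "three-axis")
scaled_plane = scale((2.0, 2.0, 2.0), 1.5, "plane", (0, 2))      # x-z plane
scaled_one = scale((2.0, 2.0, 2.0), 1.5, "single-axis", (1,))    # y axis
```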
23. The method according to claim 22, wherein three arcs indicating rotation directions are displayed in the game editing scene; and wherein controlling the scene component combination to rotate in response to the rotation operation comprises:
in response to the rotation operation along any one of the three arcs, controlling the scene component combination to rotate around a rotation axis, the rotation axis being the normal of the plane in which that arc lies.

24. The method according to claim 1, wherein, after the scene component combination corresponding to the initial image is formed, the method further comprises:
in response to an editing instruction for any scene component in the scene component combination, adjusting at least one of the following information of the scene component: size, position, direction, color, texture, and shape.
25. An apparatus for generating components in a game scene, the apparatus comprising:
a graphical user interface processing module, configured to display a graphical user interface provided by running a game program, wherein a game editing scene to be edited and a plurality of scene component selection controls are displayed in the graphical user interface, the scene component selection controls being configured to respond to operation instructions and generate corresponding scene components in the game editing scene accordingly;
an information acquisition module, configured to provide an image import entry in the graphical user interface and receive an initial image imported through the image import entry;
a scene component determination module, configured to divide the initial image into a plurality of image areas and determine a scene component corresponding to each image area and information of the scene component;
a component generation module, configured to generate, according to the information of the scene component corresponding to the image area, the scene component corresponding to the image area in the game editing scene, to form a scene component combination corresponding to the initial image.

26. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 24.
27. An electronic device, comprising:
a processor; and
a memory configured to store instructions executable by the processor;
wherein the processor is configured to perform the method according to any one of claims 1 to 24 by executing the executable instructions.
PCT/CN2024/096124 2023-06-15 2024-05-29 Component generation method and apparatus in game scene, storage medium, and electronic device Pending WO2024255597A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310718159.6A CN116672720A (en) 2023-06-15 2023-06-15 Method and device for generating components in game scene, storage medium and electronic equipment
CN202310718159.6 2023-06-15

Publications (1)

Publication Number Publication Date
WO2024255597A1

Family

ID=87781932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/096124 Pending WO2024255597A1 (en) 2023-06-15 2024-05-29 Component generation method and apparatus in game scene, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN116672720A (en)
WO (1) WO2024255597A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116672720A (en) * 2023-06-15 2023-09-01 网易(杭州)网络有限公司 Method and device for generating components in game scene, storage medium and electronic equipment
CN119925940A (en) * 2023-11-02 2025-05-06 网易(杭州)网络有限公司 Component editing method, device and electronic device
CN117654051A (en) * 2023-12-13 2024-03-08 网易(杭州)网络有限公司 Game event editing method, game event editing device, storage medium and electronic equipment
CN117717784A (en) * 2023-12-20 2024-03-19 网易(杭州)网络有限公司 Game scene component generation method, device, storage medium and electronic equipment
CN117717785A (en) * 2023-12-20 2024-03-19 网易(杭州)网络有限公司 Game scene component method, device, storage medium and electronic equipment
CN117899489A (en) * 2023-12-20 2024-04-19 网易(杭州)网络有限公司 Game information display control method and device and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080035287A (en) * 2006-10-19 2008-04-23 주식회사 넥슨 Gameplay data editing and sharing system and method
US20140256389A1 (en) * 2013-03-06 2014-09-11 Ian Wentling Mobile game application
US20200316476A1 (en) * 2017-12-29 2020-10-08 Netease (Hangzhou) Network Co.,Ltd. Information Processing Method and Apparatus, Storage Medium, and Electronic Device
CN114201167A (en) * 2021-12-03 2022-03-18 完美世界互动(北京)科技有限公司 Method, device and storage medium for editing user interface in game
CN115359168A (en) * 2022-07-12 2022-11-18 网易(杭州)网络有限公司 Model rendering method and device and electronic equipment
CN115779413A (en) * 2022-09-26 2023-03-14 网易(杭州)网络有限公司 Game scene editing method and device, storage medium and electronic equipment
US20230086477A1 (en) * 2021-09-22 2023-03-23 Nintendo Co., Ltd. Storage medium, information processing system, information processing apparatus, and game processing method
CN116672720A (en) * 2023-06-15 2023-09-01 网易(杭州)网络有限公司 Method and device for generating components in game scene, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN116672720A (en) 2023-09-01

Similar Documents

Publication Publication Date Title
WO2024255597A1 (en) Component generation method and apparatus in game scene, storage medium, and electronic device
WO2024255598A1 (en) Method and apparatus for generating component in game scene, storage medium, and electronic device
WO2024255596A1 (en) Method and apparatus for generating components in game scene, and storage medium and electronic device
US20220249949A1 (en) Method and apparatus for displaying virtual scene, device, and storage medium
CN107545788B (en) Operation deduction electronic sand table system based on augmented reality display
US9098647B2 (en) Dynamic viewing of a three dimensional space
CN113952720B (en) Game scene rendering method, device, electronic device and storage medium
CN111142967B (en) Augmented reality display method and device, electronic equipment and storage medium
KR102579463B1 (en) Media art system based on extended reality technology
CN117036562A (en) Three-dimensional display method and related device
CN118135081A (en) Model generation method, device, computer equipment and computer readable storage medium
CN109145688A (en) Video image processing method and apparatus
CN117788689A (en) Interactive virtual cloud exhibition hall construction method and system based on three-dimensional modeling
WO2025082148A1 (en) Game building editing method and apparatus, storage medium and electronic device
US12406447B2 (en) Multi-sided 3D portal
CN117853662A (en) Method and device for realizing real-time interaction of three-dimensional model in demonstration text by player
CN117274475A (en) Halo effect rendering method and device, electronic equipment and readable storage medium
CN114663560B (en) Method, device, storage medium and electronic device for realizing animation of target model
CN117111742A (en) Image interaction method, device, electronic equipment and storage medium
CN115880402A (en) Flow animation generation method and device, electronic equipment and readable storage medium
EP2930621B1 (en) Network-based Render Services and Local Rendering for Collaborative Environments
JPH1166351A (en) Object movement control method and apparatus in three-dimensional virtual space and recording medium recording object movement control program
CN111882639B (en) Picture rendering method, device, equipment and medium
CN114419233B (en) Model generation method, device, computer equipment and storage medium
CN118105709A (en) Game scene component editing method, device, program product and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24822549

Country of ref document: EP

Kind code of ref document: A1