
CN120689494A - Model generation method, device, storage medium and electronic device - Google Patents

Model generation method, device, storage medium and electronic device

Info

Publication number
CN120689494A
Authority
CN
China
Prior art keywords
model
detection
point
points
detection point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510668710.XA
Other languages
Chinese (zh)
Inventor
肖威威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202510668710.XA priority Critical patent/CN120689494A/en
Publication of CN120689494A publication Critical patent/CN120689494A/en
Pending legal-status Critical Current

Landscapes

  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract


The present invention provides a model generation method, device, storage medium, and electronic device. The method comprises: obtaining a building model in a three-dimensional scene; determining a spatial range in the three-dimensional scene; generating multiple detection points within the spatial range at a preset density; screening the detection points and eliminating those located inside the building model to obtain effective detection points; performing multi-directional ray detection on each effective detection point to obtain ray detection results; identifying target detection points located in ambient light shielding areas based on the ray detection results; and generating a preset model at each target detection point. The method automatically identifies ambient light shielding areas from the distribution of building models in the three-dimensional scene and generates models at appropriate locations, eliminating the need for an art engineer to manually place each small model. This significantly improves scene creation efficiency while keeping the generated models natural and realistic, matching the way objects naturally accumulate in corners.

Description

Model generation method, device, storage medium and electronic device
Technical Field
The invention relates to the technical field of games, and in particular to a model generation method and apparatus, a storage medium, and an electronic device.
Background
In game and virtual-reality scene production, ambient light shielding (Ambient Occlusion, AO for short) is an important visual effect, typically appearing as darkened corners where surfaces meet and absorb bounced light. Traditionally, AO is applied as a two-dimensional map to improve picture fidelity. In nature, however, ambient light shielding areas such as corners, wall bases, or the sides of buildings are exactly where vegetation tends to grow and debris tends to accumulate, and game scenes need to reproduce this pattern. At present, placing such models in a game scene depends mainly on manual work by art engineers; for example, in a post-apocalyptic ruin scene, vegetation or debris must be placed one by one between buildings, around wall bases, and in similar locations. Manual placement not only consumes a great deal of manpower and time but also struggles to achieve natural randomness in the layout, seriously limiting game development efficiency. The prior art lacks a systematic solution that can automatically identify ambient light shielding regions in three-dimensional space and automatically generate corresponding models, which limits the efficiency and quality of game scene production.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a model generating method and apparatus, a storage medium, and an electronic device that, at least in part, overcome one or more of the problems due to the limitations and disadvantages of the related art.
According to one aspect of the present disclosure, there is provided a model generation method, the method comprising:
acquiring a building model in a three-dimensional scene;
determining a spatial range in the three-dimensional scene;
generating a plurality of detection points within the spatial range according to a preset density;
screening the plurality of detection points and removing detection points located inside the building model to obtain effective detection points;
performing multi-directional ray detection on each effective detection point to obtain a ray detection result;
identifying target detection points located in the ambient light shielding area based on the ray detection result; and
generating a preset model at the target detection point position.
According to another aspect of the present disclosure, there is provided a model generation apparatus, the apparatus comprising:
an acquisition module for acquiring a building model in a three-dimensional scene;
a range determining module for determining a spatial range in the three-dimensional scene;
a detection point generation module for generating a plurality of detection points within the spatial range according to a preset density;
a detection point screening module for screening the detection points and removing detection points located inside the building model to obtain effective detection points;
a ray detection module for performing multi-directional ray detection on each effective detection point to obtain a ray detection result;
a region identification module for identifying target detection points located in the ambient light shielding area based on the ray detection result; and
a model generation module for generating a preset model at the target detection point position.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the model generation method of any one of the above.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor, a display device, and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the model generation method of any of the above via execution of the executable instructions.
The model generation method comprises: obtaining a building model in a three-dimensional scene; determining a spatial range in the three-dimensional scene; generating a plurality of detection points within the spatial range according to a preset density; screening the detection points and removing those located inside the building model to obtain effective detection points; performing multi-directional ray detection on each effective detection point to obtain a ray detection result; identifying target detection points located in the ambient light shielding area based on the ray detection result; and generating a preset model at the position of each target detection point. With the method provided by this embodiment, ambient light shielding areas can be automatically identified from the distribution of building models in the three-dimensional scene, and models can be generated at appropriate positions, so that an art engineer does not need to manually place each small model. This greatly improves scene creation efficiency while ensuring the naturalness and realism of the generated models, consistent with how objects naturally accumulate at corner positions.
Drawings
The above and other features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 is a diagram of a cloud interaction system architecture in an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart of a model generation method in an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a simplified model in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a radiation detection result in an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a generative model in an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram of a model generating apparatus in an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the present disclosure;
Fig. 8 is a composition diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that, the information (including, but not limited to, information input by a user, for example, information input by a user to an input box), data (including, but not limited to, data for analysis, stored data, presented data, and the like, for example, a context code, an entire code of a current project, service pressures corresponding to operations performed on the entire code of the current project, and code development status of the current project) and signals related to the present application are all authorized by the user or are fully authorized by each party, and the collection, use, and processing of related data are required to comply with related laws and regulations. For example, the context code, the operations performed on all the codes of the current project, and the service pressures, code development states, etc. corresponding to the operations are all acquired under the condition of sufficient authorization.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the invention herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be further noted that various triggering events disclosed in the present specification may be preset, and different triggering events may trigger different functions to be executed.
A model generation method in one embodiment of the present disclosure may run on a terminal device or a server. The terminal device may be a local terminal device. When the model generation method runs on a server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device. As shown in fig. 1, a cloud interaction system architecture diagram provided in the present disclosure is provided. As shown, the cloud interaction system may include a client device 10 and a server 20, wherein the client device 10 may be connected to the server 20 through a network 30.
In an alternative embodiment, various cloud applications, such as cloud gaming, may run on the cloud interaction system. Taking cloud gaming as an example: cloud gaming refers to a game mode based on cloud computing, in which the execution of the game program is separated from the presentation of the game picture. The model generation method is stored and run on a cloud game server, while the client device is used for sending and receiving data and presenting the game picture; for example, the client device may be a display device with data transmission capability close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer, while the device that actually processes the information is the cloud game server. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as game pictures, and returns the data over the network to the client device, which finally decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and presents the game screen, interacting with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways: for example, it may be rendered on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game visuals, and a processor for running the game, generating the graphical user interface, and controlling its display on the display screen.
FIG. 2 is a flow chart of the model generation method provided in this embodiment of the present disclosure. As shown in FIG. 2, the flow comprises the following steps:
Step S1, obtaining a building model in a three-dimensional scene;
Step S2, determining a spatial range in the three-dimensional scene;
Step S3, generating a plurality of detection points within the spatial range according to a preset density;
Step S4, screening the detection points and removing those located inside the building model to obtain effective detection points;
Step S5, performing multi-directional ray detection on each effective detection point to obtain a ray detection result;
Step S6, identifying target detection points located in the ambient light shielding area based on the ray detection result; and
Step S7, generating a preset model at the position of each target detection point.
According to the method provided by this embodiment, ambient light shielding areas can be automatically identified from the distribution of building models in the three-dimensional scene, and models can be generated at appropriate positions, so that an art engineer does not need to manually place each small model. This greatly improves scene creation efficiency while ensuring the naturalness and realism of the generated models, consistent with how objects naturally accumulate at corner positions.
The above steps are specifically described below.
In step S1, a building model in a three-dimensional scene is acquired.
The building model is a three-dimensional data model used for representing a building structure in a three-dimensional scene. The building model is used for forming a basic framework of the three-dimensional scene and is used as a reference object and a basis for generating a follow-up model.
In an alternative embodiment, the building model may be model data created by a user in three-dimensional modeling software and imported into the three-dimensional scene, or a simplified version of the building model automatically generated by the system according to predetermined rules. For example, the terminal device may receive a building model file created by the user in modeling software such as 3ds Max, Maya, or Blender, and import it into the current three-dimensional scene as the base environment.
In an alternative embodiment, the building model may be represented as a collection of geometric shapes, such as cubes, cylinders, polyhedra, or combinations thereof, used to approximate the appearance and structure of a real building. For example, the terminal device may simulate the general structure of a building using simplified geometry (e.g., cubes) that serves as an abstract representation, which an artist can later replace with a more detailed, more realistic building model.
In a specific application, the terminal device may import a prefabricated building model through an asset import function of a game engine (e.g., a UE4 engine), or may use a basic geometric tool provided by the engine to create a simplified building model, for example, create a plurality of square box models to simulate a general structure of a building, including basic elements such as walls and floors, so as to lay a foundation for subsequent detection of an ambient light shielding area.
In step S2, a spatial range is determined in the three-dimensional scene.
The space range is a specific area defined in the three-dimensional scene and is used for limiting the calculation range of the subsequent operation so as to improve the processing efficiency. The spatial extent is typically represented in the form of a bounding box with well-defined boundaries and volumes.
In an alternative embodiment, the spatial extent may be defined by creating a bounding box that may cover the building model and its surrounding area at a distance, thereby ensuring that the possible ambient light shielding area is contained. For example, the terminal device may automatically calculate a bounding box of a suitable size based on the exterior contour of the building model, such that it can fully contain the building model and leave a certain margin around the building.
In an alternative embodiment, the size of the spatial range can be adjusted according to the requirements of users so as to adapt to different scale scenes and model generation requirements with different precision. For example, the terminal device may provide a parameter control interface that allows a user to input dimensional parameters (e.g., length, width, height) of the bounding box, or to directly adjust the size and position of the bounding box via a visual editing tool.
In a specific application, the terminal device may add a volume box component (Volume) when creating a blueprint (e.g., BP_SpawnInAO) and set the size of the volume box using the SetBoxExtent function. For example, the VolumeSize parameter can be set to 600, creating a cubic bounding box with a side length of 600 units as the calculation range for subsequent detection point generation and ray detection. This avoids computing over the entire unbounded three-dimensional space and effectively saves computing resources.
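The bounding-volume setup described above can be sketched as follows. This is a minimal illustration in Python, not the patent's actual engine code; the function name, margin parameter, and the choice of a cubic volume derived from the building's axis-aligned bounding box are all assumptions made for the example.

```python
# Hypothetical sketch: derive a cubic detection volume from a building
# model's axis-aligned bounding box (AABB) plus a surrounding margin.

def detection_volume(building_min, building_max, margin=100.0):
    """Return (center, half_extent) of a cubic volume enclosing the building.

    building_min / building_max: (x, y, z) corners of the building's AABB.
    margin: extra space kept around the building so wall-base areas are covered.
    """
    center = tuple((lo + hi) / 2.0 for lo, hi in zip(building_min, building_max))
    # Use the largest dimension plus margin so the volume is a cube, matching
    # a single VolumeSize-style parameter (e.g. 600 units per side).
    half = max(hi - lo for lo, hi in zip(building_min, building_max)) / 2.0 + margin
    return center, half

center, half = detection_volume((0, 0, 0), (400, 300, 200), margin=100.0)
# center is the middle of the building; half of 300 gives a 600-unit cube
```

A fixed VolumeSize, as in the text, corresponds to skipping the computation and setting `half` directly.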
In step S3, a plurality of detection points are generated according to a preset density within the spatial range.
The detection points are three-dimensional coordinate points distributed in a space range and used for subsequent ray detection and model placement. The preset density determines the distance and the number of the detection points, and influences the precision and the calculation efficiency of the final generated model.
In an alternative embodiment, the preset density may be expressed as the number of points or the distance between points in a unit length of space, and the higher the density, the more detection points are generated, the finer the model generated later, but at the same time, the calculation burden is increased. For example, the terminal device may uniformly divide the spatial range into a plurality of small blocks according to a density parameter set by a user, and the size of each small block is determined by a density value.
In an alternative embodiment, the detection points may be generated by uniformly slicing the spatial range along three coordinate axes and creating a detection point coordinate at each slicing point. For example, the terminal device may use three-layer nested loops, respectively corresponding to X, Y, Z directions, calculate a step size in each direction according to a preset density, and then generate a detection point at an intersection point of each step size.
In a specific application, the terminal device may set the ProbeDensity parameter to 10, uniformly divide the spatial range into 10×10×10 = 1000 small blocks, record the three-dimensional coordinates of the center point of each block, and store the coordinates in a Probes array. The entire spatial range is thus filled with 1000 uniformly distributed detection points, providing the base sampling points for subsequent ray detection and ambient light shielding region identification.
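The grid subdivision just described, with three nested loops over the X, Y, and Z directions, can be sketched as follows (an illustrative Python version; the function name and cube-shaped volume are assumptions, not the patent's code):

```python
# Illustrative sketch: fill a cubic volume with a uniform grid of probe
# points, one per cell center, as described for the ProbeDensity parameter.

def generate_probes(center, half, density):
    """Divide a cube of side 2*half into density^3 cells; return cell centers."""
    cx, cy, cz = center
    step = (2.0 * half) / density
    origin = (cx - half, cy - half, cz - half)
    probes = []
    for i in range(density):          # slices along X
        for j in range(density):      # slices along Y
            for k in range(density):  # slices along Z
                probes.append((
                    origin[0] + (i + 0.5) * step,
                    origin[1] + (j + 0.5) * step,
                    origin[2] + (k + 0.5) * step,
                ))
    return probes

probes = generate_probes((0, 0, 0), 300.0, 10)
# density 10 -> 10*10*10 = 1000 evenly spaced probe points
```

Higher density gives finer placement at cubically growing cost, which is why the text treats density as a precision/efficiency trade-off.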
In step S4, the plurality of detection points are screened, and detection points located in the building model are removed, so as to obtain effective detection points.
Wherein, the screening operation aims to remove invalid detection points, so that the efficiency of subsequent processing is improved. Detection points located inside the building model are considered invalid points because they are unlikely to be ambient light-shielded areas and are not suitable for placing the model.
In an alternative embodiment, the screening process may be implemented by performing collision detection on each probe point, detecting whether the point is located within the geometry of the building model. For example, the terminal device may determine the spatial relationship between the probe point and the building model using techniques such as radiation detection or overlap testing.
In an alternative embodiment, the screening results form a new set of valid detection points, which are all located in space outside the building model and are valid candidate points for subsequent ray detection. For example, the terminal device may create a new array that only holds probe point coordinates that pass the filtering, thereby reducing the amount of data that is subsequently processed.
In a specific application, the terminal device may traverse all the detection points generated in step S3 and, for each point, use the SphereOverlapActors function to detect whether the point overlaps the building model. If a detection point is located inside the building model, it is marked as invalid and removed from the candidate list; otherwise, it is kept in the effective detection point list as a starting point for subsequent ray detection. This avoids unnecessary ray detection from points inside the building and improves the efficiency of the algorithm.
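The screening step can be sketched with a simple point-in-box test standing in for the engine's SphereOverlapActors call (the axis-aligned boxes here are an assumed stand-in for the building geometry):

```python
# Sketch of the screening step: discard probes that fall inside any
# building box. A point-in-AABB test replaces the engine overlap query.

def inside_aabb(p, box_min, box_max):
    """True if point p lies within the axis-aligned box [box_min, box_max]."""
    return all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))

def filter_probes(probes, building_boxes):
    """Keep only probes that lie outside every building box."""
    return [p for p in probes
            if not any(inside_aabb(p, lo, hi) for lo, hi in building_boxes)]

boxes = [((-50, -50, 0), (50, 50, 100))]     # one wall segment as an AABB
valid = filter_probes([(0, 0, 50), (200, 0, 50)], boxes)
# (0, 0, 50) lies inside the box and is discarded; (200, 0, 50) survives
```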
In step S5, multi-directional ray detection is performed on each effective detection point to obtain a ray detection result.
Ray detection emits rays from each effective detection point in different directions and records where the rays intersect the building model. The ray detection result includes information such as the object hit by the ray, the position of the hit point, and the normal direction, and is used to judge whether the detection point is located in an ambient light shielding area.
In an alternative embodiment, multi-directional ray detection may be achieved by emitting rays from each active detection point into a plurality of predefined directions, which may be several direction vectors uniformly distributed in three-dimensional space. For example, the terminal device may predefine a plurality of direction vectors, such as upper, lower, left, right, front, rear, etc., basic directions, or more detailed direction divisions.
In an alternative embodiment, the ray detection result may include information about whether the object was hit, three-dimensional coordinates of the hit point, a surface normal vector of the hit point, a material property of the hit object, and the like. For example, the terminal device may record whether each ray hits the building model, and if so, the location of the hit point and the normal direction to that point, which information will be used to subsequently determine whether the detection point is located in an ambient light-shielded area.
In a specific application, the terminal device may randomly select 5 directions for each effective detection point, emit rays in those directions using LineTraceByChannel or a similar function, and record the detection result of each ray, including whether an object was hit, the position of the hit point, and the normal direction. These results are stored in arrays (e.g., HitLocations and HitNormals) to provide data for subsequent ambient light shielding region identification.
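A stand-alone sketch of one such ray query is given below. It intersects a ray with infinite planes (a ground and a wall) rather than real engine geometry, so it only illustrates the shape of a LineTraceByChannel-style result (hit position, normal, distance); all names and the test scene are assumptions.

```python
# Minimal stand-in for a single ray trace: intersect a ray with a list of
# infinite planes and return the nearest hit, as (hit_pos, normal, dist).

def cast_ray(origin, direction, planes, max_dist=1000.0):
    """planes: list of (point_on_plane, unit_normal). Returns None on miss."""
    best = None
    for p0, n in planes:
        denom = sum(d * c for d, c in zip(direction, n))
        if abs(denom) < 1e-9:
            continue                      # ray parallel to this plane
        t = sum((a - b) * c for a, b, c in zip(p0, origin, n)) / denom
        if 1e-6 < t <= max_dist and (best is None or t < best[2]):
            hit = tuple(o + t * d for o, d in zip(origin, direction))
            best = (hit, n, t)
    return best

ground = ((0, 0, 0), (0, 0, 1))           # floor at z=0, normal pointing up
wall = ((100, 0, 0), (-1, 0, 0))          # wall at x=100, normal facing -X
hit = cast_ray((50, 0, 50), (0, 0, -1), [ground, wall])
# a ray straight down from z=50 hits the ground plane at distance 50
```

Looping this over several directions per probe and collecting the results mirrors the HitLocations/HitNormals arrays described in the text.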
In step S6, target detection points located in the ambient light shielding area are identified based on the ray detection results.
The ambient light shielding area is a relatively dark area formed by blocking light transmission by a plurality of surfaces around the building model, particularly at the junction of a wall body and the ground. The target detection points are those identified as valid detection points that are located within the ambient light shielded area.
In an alternative embodiment, the ambient light shielding area may be identified by analyzing patterns in the ray detection results, such as the type of surface hit, the hit distance, and the normal direction. For example, the terminal device may check whether the rays from a detection point hit both a wall and the ground, and whether the normal directions of these hit points match the characteristics of a wall-ground junction.
In an alternative embodiment, the selection of the target detection point may be based on a predefined combination of conditions that are effective to characterize the ambient light shielding region. For example, the terminal device may set a plurality of screening conditions such as that the ray must detect an object, the detection distance needs to be greater than a certain threshold, the detected object must have a valid physical material, the ground must be detected, and so on, and only the detection points satisfying these conditions at the same time are identified as target detection points.
In a specific application, the terminal device can analyze the ray detection result of each effective detection point to check whether the following conditions are simultaneously met: (1) the ray detects an object; (2) the ray detection distance is greater than a preset threshold; (3) the detected object has a valid physical material; and (4) the ray detects the ground. In addition, the average of the hit-point normals for each detection point can be calculated and compared against the normal of a single wall to determine whether the detection point lies at a wall-ground junction. Detection points satisfying these conditions are marked as target detection points and serve as the positional basis for subsequent model generation.
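The condition check and normal-averaging test above can be sketched as follows. The thresholds, the input format, and the use of the minimum deviation between the averaged normal and any single hit normal are all invented for illustration; the patent only describes the conditions qualitatively.

```python
# Sketch of the target-point test: a probe qualifies when its rays hit
# objects beyond a distance threshold, at least one hit is the ground, and
# the averaged hit normal matches no single surface normal (which suggests
# the probe sees a wall/ground junction rather than one flat surface).

def is_target_point(hits, min_dist=10.0, normal_spread=0.1):
    """hits: list of (normal, dist, is_ground) for rays that hit something."""
    if not hits or not any(g for _, _, g in hits):
        return False                       # the ground must be detected
    if any(d <= min_dist for _, d, g in hits):
        return False                       # hits closer than the threshold
    n = len(hits)
    avg = tuple(sum(nrm[i] for nrm, _, _ in hits) / n for i in range(3))
    # Distance from the averaged normal to the closest single hit normal:
    # near zero over flat ground, large at a corner seeing two surfaces.
    spread = min(sum((a - b) ** 2 for a, b in zip(avg, nrm)) ** 0.5
                 for nrm, _, _ in hits)
    return spread > normal_spread

corner_hits = [((0, 0, 1), 40.0, True),    # ground hit, normal up
               ((-1, 0, 0), 35.0, False)]  # wall hit, normal sideways
flat_hits = [((0, 0, 1), 40.0, True)]      # open ground only
# the corner probe qualifies; a probe over flat open ground does not
```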
In step S7, a preset model is generated at the target detection point position.
The target detection points are effective detection points which are screened out by the preamble processing step and are positioned in the environment light shielding area. The preset model is a three-dimensional model object which is predefined or provided by a user and needs to be automatically generated at a specific position.
In an alternative embodiment, the target detection points are spatial points at the junction or corner of the building and the ground, which are determined after the ambient light shielding analysis, and these points are usually areas simulating the natural environment where objects are easily accumulated or where plants are easily grown. For example, in a game scene, areas of contact of buildings with the ground, corners of walls, under bridges, etc. are typical ambient light-shielding areas, which often accumulate more debris or grow more vegetation in the real world.
In an alternative embodiment, the pre-set model may be various types of three-dimensional models including, but not limited to, vegetation models, trash models, decoration models, etc., which may be selected by a user from a model library or directly imported into the system. For example, in the construction of a last ruin scene, the preset model may include various types of models such as weeds, shrubs, garbage piles, broken objects, etc., and the system may select an appropriate model to generate according to the environmental characteristics of the target detection point.
The process of generating the preset model includes instantiating the three-dimensional model at the target detection point position, and may include transformation operations such as rotation and scaling so that the model blends better into the environment.
In an alternative embodiment, random variation may be applied during model generation so that each instantiated model has a unique appearance and orientation, avoiding obvious repetition in the scene. For example, the system may apply random rotation angles and different scaling to the models, or even randomly select one of several models of the same type to instantiate, thereby increasing the richness and naturalness of the scene.
In an alternative embodiment, the system may generate models of different types or densities based on the environmental characteristics of different target detection points. For example, denser debris or vegetation models may be generated at corners of walls, while sparser models may be generated in more open areas that are still within the ambient light shielding region.
In a specific application, when the terminal device executes the model generation method, it first obtains a model list provided by a user, which contains various vegetation and sundry models. The terminal device then randomly selects an appropriate model for each identified target detection point (an ambient light shielding area located at the intersection of a building and the ground). Next, the terminal device instantiates the selected model at the location of the target detection point and applies random rotation and scaling parameters to ensure that each generated model has a unique appearance. Finally, the terminal device generates vegetation and sundry models naturally distributed around the buildings of the three-dimensional scene, so that the scene presents a more real and natural effect, while saving a large amount of time otherwise spent manually placing models.
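The random selection and transform step described above can be sketched in engine-agnostic Python; all names (`make_instance`, the model list, the rotation and scale ranges) are illustrative assumptions, not part of the claimed method, and the engine-side instantiation call is omitted:

```python
import random

def make_instance(position, model_names, rng):
    """Pick a model variant and a random transform for one target detection
    point. A fixed seed makes the whole placement pass reproducible."""
    return {
        "model": rng.choice(model_names),        # random variant from the list
        "position": position,                    # target detection point location
        "yaw_degrees": rng.uniform(0.0, 360.0),  # random rotation about the up axis
        "scale": rng.uniform(0.8, 1.2),          # mild random scaling
    }

rng = random.Random(25641)  # seed value borrowed from the example below
inst = make_instance((15.15, 28.78, 0.08), ["weed", "shrub", "trash_pile"], rng)
```

The seeded generator is passed in rather than created inside the function, so one seed drives the transforms of every instance in a pass.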
In a method for generating a model provided in an embodiment of the present application, obtaining a building model in a three-dimensional scene includes:
receiving a user-provided building model; and/or
generating a simplified version of the building model.
By the method provided by this embodiment, the terminal device can flexibly choose how to obtain the building model: it can directly use the user's existing model resources, or automatically simplify the building structure as needed, reducing computational cost and improving model generation efficiency while ensuring the accuracy and rationality of model generation.
The simplified building model is a geometrically simplified building model data structure.
In an alternative embodiment, the simplified building model is generated by voxel processing, bounding box representation or polygon number simplification. For example, the terminal device may use a voxel-based reduction algorithm to discretize the original building model into a regular three-dimensional grid structure, and then combine adjacent voxel units according to preset precision parameters to form a simplified building contour, or use a bounding box method to create a simple geometric bounding volume for a main part of the building, such as a wall, a roof, etc., and use these bounding volume combinations to represent the overall building structure.
In an alternative embodiment, receiving a user-provided building model and generating a reduced-form building model may be used in combination to form a multi-level building model representation. For example, the terminal device may first receive a fine building model provided by the user for final visual rendering, and automatically generate a simplified version of the building model for spatial detection and calculation based on the fine model, and this separation strategy may improve the operation efficiency while ensuring the visual effect.
In one embodiment, the terminal device may provide a model import interface through which a user uploads a detailed set of building models of a city block, and after receiving the models, the terminal device automatically converts each building to a simplified geometric representation (e.g., cube, cuboid combination) and assigns the simplified models a spatial location and approximate size that matches the original building, as shown in fig. 3. In the subsequent model generation process, the terminal equipment uses the simplified version building models for ray detection and space analysis, but the detailed building models provided by users are still displayed in the final rendering process, so that the balance between visual quality and calculation efficiency is achieved.
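The bounding-box simplification mentioned above can be sketched as follows; this is a minimal Python illustration of reducing a detailed mesh to an axis-aligned box (center plus half-extents), with the function name and representation chosen for the example, not taken from the patent:

```python
def simplified_aabb(vertices):
    """Reduce a detailed building mesh (a list of (x, y, z) vertices) to an
    axis-aligned bounding box, returned as (center, half_extents)."""
    xs, ys, zs = zip(*vertices)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    center = tuple((lo + hi) / 2 for lo, hi in zip(mins, maxs))
    half = tuple((hi - lo) / 2 for lo, hi in zip(mins, maxs))
    return center, half
```

The simplified box keeps the spatial location and approximate size of the original building, which is all the subsequent ray detection and space analysis need, while the detailed model is kept only for rendering.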
In the method for generating a model provided in an embodiment of the present application, the method further includes:
and assigning specific physical material identifiers to the building model for distinguishing different objects during ray detection.
By the method provided by this embodiment, the terminal device can accurately distinguish the building model from other objects during ray detection, improving the accuracy of object identification, thereby ensuring the validity of the ray detection result, further improving the accuracy and rationality of model generation, and finally achieving a model distribution that accords with real environment characteristics.
In an alternative embodiment, the physical material identifier may include physical attribute parameters such as material type, material ID, reflectivity and absorptivity, which together constitute a unique identifier for the object. For example, the terminal device may assign a specific physical material ID value to the building model, such as setting the wall to ID value 1 and the floor to ID value 2, so that different object types can be distinguished by these ID values at the time of ray detection.
In one specific application, when the terminal device needs to identify a target detection point located in an ambient light shielding area, rays are emitted in multiple directions. The terminal device first checks whether a ray hits an object, and if so, further reads the physical material identifier of that object. By means of this physical material identifier, the terminal device can determine whether the object is part of a building model (e.g. a wall or a floor) or another object in the scene (e.g. existing furniture or decorations). A detection point is further processed only when a ray hits a building model with a specific physical material identifier, thereby avoiding erroneous generation of a preset model in non-building areas.
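The material-identifier check can be sketched as a small predicate; the ID values follow the wall = 1, floor = 2 example from the text, while the dict-based hit record and the function name are assumptions for illustration:

```python
# Example material IDs following the text: wall = 1, floor = 2.
WALL_ID, FLOOR_ID = 1, 2
BUILDING_IDS = {WALL_ID, FLOOR_ID}

def is_building_hit(hit):
    """hit is None when the ray missed; otherwise a dict carrying the
    physical material identifier read from the hit object."""
    return hit is not None and hit.get("material_id") in BUILDING_IDS
```

Only hits passing this predicate feed into the later ambient-light-shielding analysis, so furniture or decorations with other IDs are ignored.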
In the method for generating a model provided in an embodiment of the present application, determining a spatial range in the three-dimensional scene includes:
Step S41, creating a bounding box as the spatial range;
Step S42, the size of the bounding box is adjusted according to the volume size set by the user.
According to the method provided by the embodiment, the calculation range can be definitely limited in the model generation process, unlimited calculation resource consumption is effectively avoided, the generation efficiency is improved, the generation requirements of different scales can be met by adjusting the size of the bounding box, and the flexibility and the practicability of the method are enhanced.
The above scheme is specifically described below.
In step S41, a bounding box is created as the spatial range.
Wherein the bounding box is a three-dimensional geometry for defining a spatial computational boundary. A bounding box is understood to mean a three-dimensional cube or cuboid whose boundaries define a closed spatial region for determining the effective range of model generation.
In an alternative embodiment, the bounding box may define a closed three-dimensional space by specifying coordinate values of its six faces; points within this space are considered to lie in the effective detection area. For example, before performing model generation, the terminal device may first create a bounding box of a default size, centered on the origin of coordinates, with the initial side length set to a predefined value such as 500 units, to form the initial computation space.
In an alternative embodiment, the bounding box may be defined by determining the center point coordinates and the dimensions along the three axes. For example, the terminal device may set the center point of the bounding box to (0, 0, 0) at the beginning of the model generation process, and designate the half length in each of the X, Y and Z directions to be 300 units, thereby constructing a cubic space with a side length of 600 units as the working area for the subsequent detection point generation and ray detection.
In a specific application, when the terminal device executes the model generation method, it calls the SetBoxExtent function to create a bounding box component, which serves as the boundary definition of the spatial range. The bounding box forms a closed area with a clear boundary in the three-dimensional coordinate system, and the subsequent detection point generation and ray detection operations are performed only within this area, ensuring reasonable utilization of computing resources.
In step S42, the size of the bounding box is adjusted according to the volume size set by the user.
Wherein the volume dimension is a parameter value representing the size of the space. The volumetric size may be a number or set of numbers that determine the specific length of the bounding box in each dimension, thereby affecting the size of the overall computational range.
In an alternative embodiment, the volume size may be received through a parameterized interface and applied to the bounding box adjustment process, so that the bounding box size can be flexibly changed according to specific requirements. For example, the terminal device may provide a parameter control interface named VolumeSize, allowing a specific value such as 600 to be entered; this value is then taken as the side length parameter of the bounding box, which is thus adjusted to form a 600 × 600 × 600 cube space.
In an alternative embodiment, the adjustment of the volume size may be set for different dimensions of the bounding box, respectively, so that the bounding box forms a non-equilateral cuboid structure. For example, the terminal device may receive X, Y, Z independent dimension values of three dimensions, such as x=800, y=600, z=400, and then adjust the bounding box to a cuboid of a corresponding size to meet different dimension distribution requirements in a particular scene, such as a laterally-extended street scene or a longitudinally-extended canyon scene.
In a specific application, after creating the bounding box, the bounding box is adjusted by calling SetBoxExtent functions according to preset VolumeSize parameter values (such as an example value of 600), so that a cubic space range with a side length of 600 units is formed, and the range covers a main model generation area, so that enough calculation space is ensured, and unnecessary calculation resource waste is avoided.
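The bounding-box construction from a center point and a VolumeSize side length can be sketched as follows; the two helper names are illustrative (a real engine would use its box-component API, e.g. a SetBoxExtent-style call, rather than plain tuples):

```python
def make_bounds(center, volume_size):
    """Axis-aligned bounding box from a center point and a user-set
    VolumeSize (the side length, e.g. 600 -> a 600 x 600 x 600 cube)."""
    half = volume_size / 2.0
    mins = tuple(c - half for c in center)
    maxs = tuple(c + half for c in center)
    return mins, maxs

def contains(bounds, point):
    """True if the point lies inside (or on the boundary of) the box."""
    mins, maxs = bounds
    return all(lo <= v <= hi for lo, v, hi in zip(mins, point, maxs))

bounds = make_bounds((0.0, 0.0, 0.0), 600.0)
```

For a non-equilateral cuboid, `volume_size` would simply become three per-axis values instead of one.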
In the method for generating a model provided in an embodiment of the present application, the generating a plurality of detection points according to a preset density in the spatial range includes:
uniformly dividing the space range into a plurality of small blocks according to a preset density value;
the three-dimensional coordinates of each patch are recorded as the probe point coordinates.
By the method provided by the embodiment, the terminal equipment can systematically divide the designated area through the preset density value to form the detection grid with uniform distribution, so that reasonably distributed sampling points are provided for subsequent ray detection, the comprehensiveness and uniformity of detection are ensured, the problem of uneven coverage of the area possibly caused by random sampling is effectively avoided, and the spatial distribution accuracy and the calculation efficiency of model generation are improved.
Wherein, the uniform segmentation refers to a process of dividing a space into regular small blocks in an equidistant manner.
In an alternative embodiment, the uniform segmentation may employ a meshing algorithm to ensure that each tile is the same size and shape. For example, for a 600 × 600 × 600 cube space, if the preset density value is 10, each small block has a size of 60 × 60 × 60, forming a regular three-dimensional grid structure.
In an alternative embodiment, the uniform slicing process may be implemented by three nested loops, corresponding to X, Y, Z coordinate axes, respectively, where the number of loops is determined by a preset density value. For example, the terminal device may perform a three-layer loop, the outer layer loop controlling the X-coordinate to increment in steps from a minimum value to a maximum value, the middle layer loop controlling the Y-coordinate to increment, the inner layer loop controlling the Z-coordinate to increment, and the position of a tile in space being determined each time the innermost layer loop is completed.
In one specific application, the terminal device first receives a preset density value of 10; for a spatial range of 600 × 600 × 600 units, the slicing pitch in each dimension is then calculated as 60 units. Next, the terminal device starts from the minimum coordinates of the space and, through three layers of nested loops, records a point position every 60 units until the whole space has been traversed. The space is thus uniformly divided into 1000 cube small blocks of the same size, laying the foundation for the subsequent detection point generation.
Wherein, the small block refers to each independent area unit obtained by uniformly dividing the space range.
In an alternative embodiment, the tiles may be considered as one basic sampling unit in space, each tile having its unique spatial location and size attributes. For example, in a sliced space, each tile may be identified by an index triplet (i, j, k), where i, j, k represent the index position of the tile in the X, Y, Z direction, respectively.
In an alternative embodiment, the shape of the nubs is generally a cube or cuboid, with the sides being determined by the total size of the spatial range divided by the preset density value. For example, for a cubic space, if the total side length is 900 units and the preset density value is 15, the side length of each small block is 900/15=60 units, and a cubic small block with the side length of 60 units is formed.
Where three-dimensional coordinates refer to an accurate representation of the location of each tile in three-dimensional space.
In an alternative embodiment, the three-dimensional coordinates may be represented using the location of the center point of the tile, obtained by calculating the actual spatial coordinate value corresponding to each tile index. For example, for a tile with index (i, j, k), its center coordinates can be calculated by multiplying the index value by the tile size, plus the starting coordinates of the spatial range and the offset of half the tile.
In a specific application, the terminal device calculates the coordinates of the center point for each small block after the spatial uniform segmentation is completed. Taking a cubic space from (-300, -300, -300) to (300,300,300) as an example, when the preset density value is 10, the terminal device calculates the center coordinates of the first small block (0, 0) as (-270, -270, -270), the center coordinates of the second small block (0, 1) as (-270, -270, -210), and so on. The calculated center point coordinates are stored in a three-dimensional vector array as starting positions for subsequent ray detection, providing evenly distributed sampling points for identifying the ambient light shielding area.
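The triple nested loop and center-point calculation described above can be sketched directly; the function name is illustrative, but the numbers reproduce the worked example in the text (cube from (-300, -300, -300) to (300, 300, 300), density 10):

```python
def grid_points(min_corner, side, density):
    """Uniformly slice the cube [min_corner, min_corner + side]^3 into
    density^3 blocks and return each block's center as a detection point."""
    step = side / density
    points = []
    for i in range(density):             # outer loop: X axis
        for j in range(density):         # middle loop: Y axis
            for k in range(density):     # inner loop: Z axis
                points.append((min_corner[0] + (i + 0.5) * step,
                               min_corner[1] + (j + 0.5) * step,
                               min_corner[2] + (k + 0.5) * step))
    return points

pts = grid_points((-300.0, -300.0, -300.0), 600.0, 10)
```

The `(index + 0.5) * step` term is the "half a tile" offset from the text: it moves each coordinate from the block's minimum corner to its center.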
In the method for generating a model provided by an embodiment of the present application, screening the plurality of detection points, and removing the detection points located inside the building model to obtain effective detection points includes:
step S61, performing collision detection on each detection point, and detecting whether the detection point is positioned inside the building model;
And step S62, if the detection point is positioned in the building model, marking the detection point as an invalid detection point.
By the method provided by the embodiment, the detection points in the building model can be effectively screened out by the terminal equipment, subsequent processing at invalid positions is avoided, and the accuracy and efficiency of model generation are improved. And invalid detection points are identified and marked through a collision detection mechanism, so that the follow-up ray detection and model generation are only performed in an effective area, and the waste of calculation resources is reduced.
The above scheme is specifically described below.
In step S61, collision detection is performed for each detection point, and it is detected whether the detection point is located inside the building model.
The collision detection is a spatial position relation judging mechanism used for determining whether a specific point is positioned inside a certain closed model in a three-dimensional space. The detection points refer to points with three-dimensional coordinates generated according to a preset rule in a specific spatial range. The interior of the building model refers to the internal space area enclosed by the closed model.
In an alternative embodiment, collision detection may be implemented using a ray casting method, by emitting rays from a detection point in a particular direction, and counting the number of times the rays intersect the model surface to determine whether the point is inside the model. For example, the terminal device may emit a ray from each detection point position in each of six principal axis directions (positive X, negative X, positive Y, negative Y, positive Z, negative Z), and indicate that the point is located inside the model if the number of intersections of the ray with the surface of the model is an odd number.
In a specific application, when the terminal device performs collision detection, it may use a function interface provided by a physics engine, calling the SphereOverlapActors function with a sphere of very small radius (e.g. 0.1 units) centered on the detection point, to detect whether the sphere overlaps with the building model. If an overlap occurs, the detection point is judged to be located inside the building model and needs to be removed.
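The odd-crossing rule from the ray-casting variant can be demonstrated against a simplified building box; the text emits rays along six principal axes, but for a convex box a single +X ray already illustrates the parity test (function names and the box representation are assumptions for this sketch):

```python
def crossings_along_x(point, box):
    """Count intersections of a ray cast from `point` in the +X direction
    with the faces of an axis-aligned box ((mins), (maxs))."""
    (x0, y0, z0), (x1, y1, z1) = box
    px, py, pz = point
    if not (y0 <= py <= y1 and z0 <= pz <= z1):
        return 0                          # ray passes outside the box entirely
    return sum(1 for face_x in (x0, x1) if face_x > px)

def inside_by_parity(point, box):
    """An odd number of crossings means the point is inside the model."""
    return crossings_along_x(point, box) % 2 == 1
```

A point inside the box sees one face ahead of it (odd, inside); a point to the left sees two (even, outside); a point to the right sees none.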
In step S62, if the detection point is located inside the building model, the detection point is marked as an invalid detection point.
The marking operation refers to a process of recording the state of the detection point. Invalid probe points refer to probe points that do not meet subsequent processing conditions and need to be excluded.
In an alternative embodiment, the marking may be implemented by creating a boolean-type flag array corresponding to the validity status of each probe point. For example, the terminal device may create a boolean array of the same number as the number of probe points, set all initial values to true (valid), and set the value of the corresponding index position to false (invalid) when a point is detected inside the building model.
In an alternative embodiment, the marking may also be accomplished by removing invalid points directly from the array of probe points, thereby reducing the amount of data that is subsequently processed. For example, the terminal device may maintain a list of valid probe points, and when a probe point is confirmed to be invalid, it is not added to the list or it is deleted from the existing list.
In a specific application, after the terminal device completes collision detection, all the detection points can be traversed, the detection points located in the building model are removed from the original detection point array, and a new effective detection point array is constructed. In the subsequent multi-direction ray detection step, only effective detection points need to be processed, so that the processing efficiency is remarkably improved, and particularly, under the condition that the building model occupies a large area in a complex scene.
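The traversal that builds the effective detection point array can be sketched as a filter over the simplified building boxes; this is a minimal stand-in for the physics-engine overlap query, with names and the AABB representation chosen for the example:

```python
def filter_valid(points, building_boxes):
    """Remove detection points that fall inside any simplified building box.
    building_boxes: list of ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    def inside(p, box):
        mins, maxs = box
        return all(lo <= v <= hi for lo, v, hi in zip(mins, p, maxs))
    return [p for p in points if not any(inside(p, b) for b in building_boxes)]
```

This follows the second marking variant: invalid points are simply never added to the result list, so every later stage touches only effective detection points.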
In the method for generating a model provided in an embodiment of the present application, the method further includes:
Step S71, generating random seeds;
Step S72, randomly fine-tuning the positions of the effective detection points according to the random seeds to obtain fine-tuned effective detection points;
Step S73, performing the multi-directional ray detection on each effective detection point, which comprises performing multi-directional ray detection on the fine-tuned effective detection points.
By the method provided by the embodiment, the terminal equipment can finely adjust the effective detection points through random seeds, so that randomness and natural sense of the position of the generated model are increased, the model distribution is prevented from being too regular and mechanical, visual reality and naturalness of the final generated effect are improved, meanwhile, the difference of each generated result is ensured, and the flexibility of the system is improved while different scene demands are met.
The above scheme is specifically described below.
In step S71, a random seed is generated.
Wherein the random seed is an initial value for generating a random number sequence.
In an alternative embodiment, the random seed may be a value automatically generated by the system for providing a basis for randomness in a subsequent random fine tuning process. For example, the terminal device may generate an integer between 0 and 99999 as a random seed based on the current system time.
In step S72, the positions of the effective detection points are randomly fine-tuned according to the random seeds, so as to obtain the fine-tuned effective detection points.
Wherein, the random fine tuning is a processing procedure of carrying out small-range offset on the original detection point position.
In an alternative embodiment, the random fine tuning may shift the coordinates of the effective probe points in different directions, the amount of shift being controlled by a random seed, ensuring randomness and diversity of the fine tuning results. For example, the terminal device may generate three random offset values according to the random seed, and apply the three random offset values to three coordinate axes X, Y, Z of the probe point respectively, so as to implement random position adjustment in the three-dimensional space.
In an alternative embodiment, the range of random fine tuning may be limited to avoid distortion of the detection result due to excessive changes in the fine tuned probe point positions. For example, the terminal device may set the maximum offset range of the fine adjustment not to exceed half the distance between adjacent detection points, so as to ensure that the fine-adjusted points remain in the original spatial distribution area and avoid the deviation of the detection result caused by excessive offset.
In step S73, multi-directional ray detection is performed on the fine-tuned effective detection points.
Wherein, the multi-directional ray detection is a process of emitting rays from a detection point to a plurality of directions and recording collision results.
In an alternative embodiment, the multi-directional ray detection may emit multiple rays from each trimmed detection point position according to a preset direction set, and record interaction information of each ray with surrounding objects. For example, the terminal device may emit rays from six basic directions of up, down, left, right, front and rear of the detection point, and detect collision of the rays with a building or the ground.
In an alternative embodiment, the ray detection result includes information such as whether the ray hits the object, the position coordinate of the hit point, the normal direction of the hit point, and the physical material of the hit object, which is used to determine whether the detection point is located in the ambient light shielding area. For example, the terminal device may record the distance from the detection point to the collision point detected by each ray, the surface normal direction of the object where the collision point is located, and the material property of the object, and determine whether the point is located in an ambient light shielding area such as a corner by analyzing these data.
In a specific application, the terminal device performs multi-directional ray detection on a fine-tuned effective detection point located at coordinates (15.15, 28.78, 0.08), emitting rays in five directions: 45 degrees upward, directly above, 45 degrees forward, 45 degrees to the right and 45 degrees to the left. Three rays are recorded as hitting a wall, one ray hits the ground, and one ray does not hit any object; these detection results are stored as the ray detection data of the detection point.
In a specific application of this embodiment, the terminal device first generates a random seed with a value of 25641, then generates 1000 detection points uniformly distributed in the three-dimensional space, and screens out 782 effective detection points through collision detection. The terminal device then uses the random seed to calculate a fine-tuning offset for each effective detection point, fine-tuning the first effective detection point from its home position (10.0, 20.0, 0.0) to (10.13, 19.87, 0.05). Rays are emitted in five different directions from the fine-tuned detection point; collisions of two rays with a wall and of one ray with the ground are recorded. Based on these ray detection results, the system determines that the point lies in an ambient light shielding area, and finally a moss model is generated at this position.
In the model generating method provided by the embodiment of the application, the random fine adjustment of the positions of the effective detection points according to the random seeds comprises the step of randomly shifting the positions of each effective detection point within the range of not exceeding the distance between adjacent detection points.
According to the method provided by the embodiment, the problem that the generated model is too regular and uniform due to regular arrangement can be avoided, meanwhile, by limiting the random offset range, the fact that the finely-adjusted detection points are not excessively close to or far away from other detection points is ensured, proper spatial distribution is kept, and therefore the finally generated model distribution is more natural and reasonable, the sense of reality of a scene is improved, and stable and controllable detection accuracy is kept.
The adjacent detection point spacing is the distance between the detection points generated at the preset density within the spatial range. It is a physical quantity determined by the spatial range and the preset density, representing the distance between two adjacent detection points in space. The smaller the adjacent detection point spacing, the denser the distribution of detection points; the larger the spacing, the sparser the distribution.
In an alternative embodiment, the adjacent probe point spacing may be calculated by dividing the size of the spatial range by the preset density. For example, assuming that the spatial range is a cube with a side length of 600 units and a preset density of 10, the space is divided into 10×10×10=1000 small blocks, and the adjacent detection point pitch is 600/10=60 units.
In one specific application, the terminal device calculates the distance between adjacent detection points to be 60 units after generating the detection points at a preset density 10 in the space range. For each effective detection point obtained by screening, the terminal equipment uses a random seed generated before to ensure the randomness and the repeatability of the offset. For each effective detection point, the terminal equipment respectively generates random values (12, -18, 25) in the range of [ -30,30] on three coordinate axes of X, Y, Z, and adds the random values to the original coordinates of the detection point to obtain the position of the detection point after fine adjustment. This random offset ensures that the probe points do not form a distinct grid-like arrangement, making the model distribution that is subsequently generated based on these probe points more natural.
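The seeded fine-tuning step can be sketched as follows, using the spacing-60 example above so each axis is offset within [-30, 30]; the function name is an assumption, and a real implementation would use the engine's seeded random stream instead of Python's `random`:

```python
import random

def jitter_points(points, spacing, seed):
    """Offset each coordinate by a random amount in [-spacing/2, +spacing/2]
    (e.g. [-30, 30] for a spacing of 60). A fixed seed makes the offsets
    reproducible, as the text requires."""
    rng = random.Random(seed)
    half = spacing / 2.0
    return [tuple(c + rng.uniform(-half, half) for c in p) for p in points]

jittered = jitter_points([(-270.0, -270.0, -270.0)], 60.0, 25641)
```

Because the offset never exceeds half the spacing, no fine-tuned point can cross into a neighboring block, which preserves the original spatial distribution.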
In the model generation method provided by the embodiment of the application, the multi-directional ray detection of each effective detection point comprises the steps of emitting rays from each effective detection point position to a plurality of preset directions, and recording the hit point position and the hit point normal of each ray, wherein the number of the plurality of preset directions is at least three.
By the method provided by the embodiment, the environmental information can be acquired from multiple directions, the comprehensiveness and accuracy of detection are improved, the position suitable for generating the preset model can be accurately identified, the quality and rationality of model generation are improved, and the resource utilization efficiency is further optimized.
In a specific application, when the terminal device performs ray detection on each effective detection point, five preset direction vectors are first generated, each facing a different position on the surface of a hemisphere, ensuring a wide detection coverage. Rays are then emitted from each effective detection point one by one along these five directions, and the maximum detection distance of each ray is set to 300 units to limit the detection range and improve efficiency.
Wherein, the hit point is a three-dimensional coordinate point where the ray intersects the object surface, and the coordinate information precisely describes a certain point on the object surface. The hit point normal refers to a normal vector to the object surface at the hit point, which vector is perpendicular to the object surface, pointing outside the object.
In an alternative embodiment, the hit point is obtained from the collision result returned by the ray detection function, and represents the exact location at which the ray first intersects an object. For example, after a ray is emitted from an effective detection point, if it intersects an object, the world coordinates (X, Y, Z) of the intersection point are recorded as the hit point location.
In an alternative embodiment, the hit point normal is the vertical vector of the object surface at the hit point, which characterizes the orientation of the object surface at that point. For example, for a planar wall, the normal vector is perpendicular to the wall, for a ground, the normal vector is generally directed directly above, and by analyzing the normal vector, it can be determined whether the ray hits the wall or the ground.
In a specific application, after the terminal device performs ray detection, the detection result of each ray is stored in two arrays, wherein one stores the position coordinates of all the hit points, and the other stores the corresponding normal vector. These data are used in subsequent analysis, in particular in calculating the average normal vector, to determine whether the inspection point is located at the intersection between objects.
Wherein the number of the plurality of preset directions is at least three.
In an alternative embodiment, the number of preset directions is set to at least three to ensure the comprehensiveness and accuracy of detection, and more complete environmental information can be obtained through multi-directional ray detection. For example, it may be set to 5 directions, pointing up, down, left, right and front, respectively, covering the main area of the hemispherical surface.
In an alternative embodiment, the selection of the number of directions requires balancing detection accuracy against computational efficiency: the more directions, the more accurate the detection, but the greater the computational effort. For example, in practical applications, 3, 5, 8 or more directions may be selected for ray detection, depending on the accuracy required and the computational resources available.
In a specific application of this embodiment, when the terminal device performs multi-directional ray detection, 5 rays are emitted from each valid detection point, with directions (0, 0, 1), (1, 0, 0), (-1, 0, 0), (0, 1, 0) and (0, -1, 0), corresponding to the up, right, left, front and back directions respectively. When a ray hits an object, the position coordinates of the hit point and the normal information are recorded. By analysing this information, in particular changes in the normal vector, the terminal device can identify points at object junctions, which are usually areas where ambient light is strongly shielded and therefore suitable for placing the preset model.
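The per-point probe described above can be sketched in Python. This is a minimal illustration, not the patent's implementation: the scene is reduced to infinite axis-aligned planes standing in for a wall and the ground, and all function names are hypothetical.

```python
# The five probe directions from the example above: up, right, left, front, back.
DIRECTIONS = [(0, 0, 1), (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def intersect_plane(origin, direction, plane_point, plane_normal, max_dist=100.0):
    """Intersect a ray with an infinite plane; return (distance, hit_point, normal) or None."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = sum((p - o) * n for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    if t < 0 or t > max_dist:
        return None  # plane is behind the origin or out of range
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    return t, hit, plane_normal

def multi_ray_probe(origin, planes):
    """Cast one ray per preset direction; record the nearest hit point and normal per ray."""
    hits, normals = [], []
    for direction in DIRECTIONS:
        results = [r for r in (intersect_plane(origin, direction, p, n)
                               for p, n in planes) if r]
        if results:
            _, hit, normal = min(results)  # nearest intersection wins
            hits.append(hit)
            normals.append(normal)
    return hits, normals
```

As in the specific application, the two returned arrays (hit positions and normals) feed the later average-normal analysis.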
In an embodiment of the present application, the identifying, based on the ray detection result, the target detection point located in the ambient light shielding area includes:
Analyzing the ray detection result of each effective detection point;
Identifying a detection point that simultaneously satisfies the following conditions as a target detection point:
The ray hits an object;
The ray detection distance is larger than a preset threshold;
The detected object has a valid physical material; and
The ray hits the ground.
By the method provided by this embodiment, the terminal device can accurately identify the ambient light shielding area based on clearly defined multi-dimensional conditions, improving the accuracy of model placement and the realism of the scene; meanwhile, validity screening reduces the consumption of computing resources and improves generation efficiency.
The target detection points are effective detection points meeting specific conditions, and the points are considered to be positioned in an ambient light shielding area and are suitable for placing a preset model.
In an alternative embodiment, identifying detection points that satisfy multiple conditions simultaneously is performed by verifying, one by one, whether the ray detection data of each valid detection point meets all specified conditions; only a point for which all conditions are satisfied is marked as a target detection point. For example, the terminal device may check, for each valid detection point, whether the rays it emits have a record of hitting an object and whether the hit distance exceeds a minimum threshold preset by the system (e.g., 5 unit lengths), while verifying whether the hit object has a predefined valid physical material identifier and whether a ray hit the ground.
In an alternative embodiment, the ray detecting an object means that the ray emanating from the detection point intersects an object in the scene, resulting in collision data. For example, the terminal device may determine whether the ray successfully hits the object by determining whether a "collision success" flag bit in the ray collision result is true, which is a first step screening condition for determining whether the detection point is likely to be located in the ambient light shielding area.
In a specific application, the terminal device may evaluate a specific valid detection point, assuming this point is located near a corner of a building; as shown in the schematic diagram of a ray detection result in fig. 4, the whole area around the building is covered with detection blocks. The terminal device first checks whether any of the 5 rays emitted from the point successfully hits an object, and then confirms whether the hit distance is larger than the set threshold of 10 unit lengths. The terminal device then verifies whether the hit object has a physical material identifier marked "building", and finally confirms whether a ray hit an object marked "ground". Only when all four conditions are met is the detection point marked as a target detection point for subsequent generation of a preset model at that location.
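The four-condition filter can be sketched as a predicate over one detection point's ray results. The hit-record layout (dicts with `distance`, `material`, `tag`) is a hypothetical stand-in for engine collision data, and applying conditions 1, 2 and 4 as "at least one ray" is one reading of the text:

```python
def is_target_point(ray_hits, min_distance=10.0):
    """ray_hits: one dict per ray that hit something, with 'distance',
    'material' (None when the object has no physics material) and 'tag'."""
    if not ray_hits:                                             # 1: some ray hit an object
        return False
    if not any(h["distance"] > min_distance for h in ray_hits):  # 2: hit distance above threshold
        return False
    if any(h["material"] is None for h in ray_hits):             # 3: hits carry a valid physical material
        return False
    return any(h["tag"] == "ground" for h in ray_hits)           # 4: some ray reached the ground
```

A corner point whose rays hit both a "building"-material wall and the ground passes all four checks; a point seeing only walls fails condition 4 and is skipped.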
In an embodiment of the present application, in the method for generating a model, the identifying, based on the ray detection result, the target detection point located in the ambient light shielding area further includes:
Step 111, calculating an average value of the ray hit point normals of each detection point;
Step 112, comparing the difference between the normal average value and the normal of a single wall;
Step 113, determining whether the detection point is located at the junction of the wall and the ground based on the difference.
By the method provided by this embodiment, through more precise analysis of the ray detection results, the terminal device can accurately identify the ambient light shielding area at the junction of the wall and the ground. This improves the accuracy of the model generation position and makes the generated model better conform to the distribution of objects in a real environment, significantly improving the realism and naturalness of the scene, while reducing the generation of invalid models and lowering system resource consumption.
The above scheme is specifically described below.
In step S111, an average value of the ray hit point normals of each detection point is calculated.
The ray hit point normal is the normal vector of the object surface hit by a ray during the multi-directional ray detection of the detection point. A normal vector is a direction vector perpendicular to the surface at a point on the object surface, used to represent the orientation of that surface. The average value of the normals is the result of vector-summing all hit point normal vectors obtained from the plurality of rays emitted from the same detection point and then averaging.
In an alternative embodiment, the ray hit point normal is the normal vector of the object surface at the point where the ray intersects it, reflecting the orientation of the surface at that point. For example, the normal vector of a wall surface is generally horizontal, while the normal vector of the floor is generally vertical. The terminal device may record the hit point normals of all rays from each detection point and store these normals in an array for subsequent computation.
In an alternative embodiment, the process of calculating the normal average includes vector addition of the hit point normals of all rays from the same detection point, and dividing the result by the number of rays to obtain a normalized average normal vector. For example, if 5 rays are emitted from one detection point, 5 normal vectors of the hit point are obtained, respectively, the terminal device vector-adds the 5 normal vectors, and divides by 5 to obtain a normal average value of the detection point. The process can be realized through a three-dimensional vector calculation library, and the accuracy of a calculation result is ensured.
In a specific application, the terminal device performs multi-directional ray detection on a detection point, assuming 5 rays are emitted, wherein 3 rays hit the wall, with normal vectors (1, 0, 0), (0.98, 0.2, 0) and (0.95, 0.31, 0) respectively, and the other 2 rays hit the ground, with normal vectors (0, 0, 1) and (0, 0.1, 0.99) respectively. The terminal device sums these 5 normal vectors and divides by 5 to obtain the normal average value (0.586, 0.122, 0.398), which is used for the subsequent difference comparison analysis.
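Step S111 is plain vector averaging; a minimal sketch (the function name is illustrative, not from the patent) reproduces the worked numbers above:

```python
def average_normal(normals):
    """Vector-sum the hit-point normals of one detection point and divide by the count."""
    count = len(normals)
    return tuple(sum(v[axis] for v in normals) / count for axis in range(3))
```

Feeding in the five normals from the example, (1, 0, 0), (0.98, 0.2, 0), (0.95, 0.31, 0), (0, 0, 1) and (0, 0.1, 0.99), yields (0.586, 0.122, 0.398) as stated.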
In step S112, the difference between the normal average and the single wall normal is compared.
Where a single wall normal refers to a normal vector that considers only the wall surface. The normal difference refers to the degree of vector difference between the normal average value and the normal of a single wall, which can be measured by the angle or the distance between the vectors.
In an alternative embodiment, a single wall normal is generally a unit vector in a horizontal direction, representing the orientation of the wall surface. In three-dimensional space, walls with different orientations have different normal directions, e.g., (1, 0, 0) for an east-facing wall and (0, 1, 0) for a north-facing wall. The terminal device can determine the value of the single wall normal by analyzing the geometric information of the building model or from a preset configuration.
In an alternative embodiment, the comparison of normal differences may be achieved by calculating the dot product between the average normal and the normal of a single wall, or by calculating the angle between the two vectors. The closer the dot product is to 1 and the smaller the angle, the more similar the two vector directions are. The terminal device may set a threshold; when the normal difference exceeds the threshold, the detection point is considered likely to be located at the junction of the wall and the ground.
In one embodiment, the terminal device compares the calculated normal average (0.586, 0.122, 0.398) with the standard normal (1, 0, 0) of an east-facing wall. Calculating the dot product of the two vectors gives 0.586, significantly less than 1, indicating a large difference between the two vectors. The terminal device may also calculate the angle between the two vectors, about 54 degrees; this significant angular difference further indicates that the detection point may be located at the intersection of the wall and the ground.
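The comparison in step S112 can be sketched as follows. Note the worked example takes the angle directly from the raw dot product, i.e. it treats the averaged normal as approximately unit length; this sketch follows that convention, and the function name is illustrative:

```python
import math

def normal_difference(avg_normal, wall_normal):
    """Return (dot product, angle in degrees) between the averaged hit-point
    normal and a single wall normal, as in the worked example."""
    dot = sum(a * b for a, b in zip(avg_normal, wall_normal))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))  # clamp for safety
    return dot, angle
```

For the example vectors this gives a dot product of 0.586 and an angle of about 54 degrees, matching the text; a dot product near 1 (angle near 0) would instead indicate a point facing a single flat wall.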
In step S113, it is determined whether the detection point is located at the junction of the wall and the ground based on the difference.
The junction between the wall and the ground is the area where the wall and the ground intersect, and is usually a position where the shielding of ambient light is obvious. The process of determining the position of the detection point is to judge whether the detection point satisfies a specific condition by analyzing the normal difference, thereby identifying the position having a specific environmental characteristic.
In an alternative embodiment, the terminal device may set a normal difference threshold, and determine that the detection point is located at the intersection of the wall and the ground when the difference between the calculated normal average value and the normal of the single wall is greater than the threshold. This difference indicates that the rays at the detection point hit different surfaces (e.g., wall and floor) simultaneously, resulting in a significant change in the normal average.
In an alternative embodiment, the terminal device may incorporate other parameters in addition to the normal difference to improve the accuracy of the determination, such as ray hit distance and the spatial distribution of hit points. For example, if, while the normal difference reaches the threshold, the height distribution of the ray hit points includes both low points near the ground and high points on the wall, it is more certain that the detection point is located at the intersection.
In one specific application, the terminal device compares the difference between the normal average calculated above and the single wall normal (dot product value 0.586, angle about 54 degrees) against a preset threshold (e.g., 0.7 or 45 degrees). Because the difference exceeds the threshold, the terminal device judges that the detection point is located at the junction of the wall and the ground. Meanwhile, the terminal device also checks the height distribution of the ray hit points, confirming that some hit points are at ground height and some at wall height, further verifying the judgment. The detection point is thus marked as a target detection point, to be used for model generation in the subsequent steps.
In the method for generating a model provided in an embodiment of the present application, generating a preset model at the target detection point position includes:
Step 121, obtaining a model list provided by a user;
Step 122, creating a hierarchical instantiated static mesh instance for each model type;
Step 123, randomly selecting one model in the model list at each target detection point position for instantiated generation.
According to the method provided by this embodiment, the terminal device can efficiently organize and manage the model resources designated by the user and dynamically generate diversified model instances at the calculated target positions, greatly improving model generation efficiency and scene richness; meanwhile, the hierarchical instancing technique effectively reduces system resource consumption and ensures good runtime performance in large-scale model generation scenarios.
The above scheme is specifically described below.
In step S121, a list of models provided by the user is acquired.
Wherein the model list is a collective data structure containing one or more models.
In an alternative embodiment, the model list is a set of three-dimensional model resources available for generation that are selected and provided by the user via a graphical interface or configuration file. For example, the terminal device may provide a model selection interface in which the user may select a plurality of models, such as plants, sundries, decorations, etc., from a library of models that are desired to be generated in the scene, which selected models are to be added to the list of models for subsequent use.
In an alternative embodiment, the model list may contain models of different categories, and each category may be assigned a different generation weight or probability distribution to control the distribution ratio of the different models in the final scene. For example, a model list may assign a 60% generation weight to plant models, 30% to clutter models, and 10% to other decorations, and the terminal device performs probability sampling according to these weights when subsequently selecting models at random.
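The weighted sampling described here is a standard categorical draw; a minimal sketch (names and the 60/30/10 split come from the example above, the function itself is hypothetical):

```python
import random

def pick_model(model_list, weights, rng=random):
    """Sample one model according to per-category generation weights,
    e.g. 60/30/10 for plants, clutter and decorations."""
    return rng.choices(model_list, weights=weights, k=1)[0]
```

Over many target detection points this reproduces the configured ratio, so roughly six in ten generated instances would be plants.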
In step S122, a hierarchical instantiated static mesh instance is created for each model type.
The hierarchical instantiated static mesh instance is a technical structure for optimizing three-dimensional model rendering and management.
In an alternative embodiment, the hierarchical instantiated static mesh instance is a container object for efficiently managing and rendering a large number of identical models; by batching model instances of the same type, the number of draw calls can be significantly reduced. For example, the terminal device may create a dedicated instanced static mesh instance for each model type in the model list, so that all model instances of the same type are treated as one rendering unit, greatly reducing the burden on the graphics processor.
In an alternative embodiment, the hierarchical instantiated static mesh instances may be further grouped and managed according to the material characteristics, size ranges, or functional categories of the models, so as to control more finely the rendering and interaction behavior of model instances at different levels. For example, the terminal device may create one independent instanced static mesh instance for models with transparent materials and another for models with opaque materials, to optimize rendering order and depth sorting.
In step S123, one model in the model list is randomly selected at each target detection point position for instantiation generation.
Where instantiation generation refers to the process of creating model objects at determined three-dimensional coordinates.
In an alternative embodiment, the instantiation generation process includes randomly selecting an appropriate model from a list of models, then creating an instance of the model at the location of the target detection point, and applying the necessary spatial transformations. For example, at each target detection point determined to be an ambient light occluding region, the terminal device selects one model from the model list based on a random algorithm, and then creates an instance of the selected model at that coordinate point to be part of the scene.
In an alternative embodiment, the instantiation generation may also apply additional random changes, such as scaling, rotation, or subtle positional offsets, to increase the natural variation and realism of the scene. For example, when generating model instances, the terminal device may apply ±15% random size variation and 0-360 degree random rotation for each instance, so that even the same model may present different appearances at different positions, and avoid visual repetitive feeling.
In one specific application, the terminal device first receives a set of decoration models, including various plant, rock and clutter models, uploaded by a user through an interface. The terminal device then creates a separate, dedicated hierarchical instantiated static mesh instance for each class of models (e.g., all plant models, all rock models). After the target detection points are identified in the preceding steps, the terminal device traverses each target detection point and randomly selects a suitable model at each point. For example, a small plant model may be generated at target detection points near corners of the wall, and a pile of clutter models may be generated at target detection points at the edges of the building. This automated model generation method makes the scene decoration process efficient and natural.
In the method for generating a model provided in an embodiment of the present application, randomly selecting, for instantiation, one model in the model list at each target detection point position includes:
Step S131, applying random scaling and rotation transformations to the generated models;
Step S132, detecting whether interpenetration exists between the generated models;
Step S133, removing the interpenetrating models.
The method provided by this embodiment ensures that the generated models have a more natural visual effect and reasonable layout. Random scaling and rotation transformations increase the diversity and realism of the models, while detecting and removing interpenetrating models ensures the physical plausibility of the object layout in the scene, improving the quality and usability of the automatically generated models and reducing the workload of manual adjustment.
The above scheme is specifically described below.
In step S131, a random scaling and rotation transformation is applied for the generated model.
Wherein scaling is an operation that changes the size of the generated model. Scaling can enlarge or shrink the model by different proportions along the x-axis, y-axis and z-axis, changing the size and proportional relations of the model.
In an alternative embodiment, the scaling transformation may be calculated based on random values within a preset scaling range, which may be a section that is user-defined. For example, the terminal device may randomly generate three scaling factors in the range of 0.8 to 1.2, which are applied to the length, width and height dimensions of the model, respectively, so that the model has a certain size variation while maintaining the basic form.
In an alternative embodiment, the scaling transformation may set different scaling ranges and scaling strategies depending on the type of model. For example, the terminal device may set a larger height scale (e.g., 0.7 to 1.5) for the plant-based model while maintaining a smaller width scale (e.g., 0.9 to 1.1) to simulate the diversity of plant growth heights in the real world, while a similar scale (e.g., 0.8 to 1.3) may be used in three dimensions for the stone-based model to maintain the coordination of its shape.
Wherein the rotation transformation is an operation that changes the spatial orientation of the generated model. The rotation transformation can rotate the model through different angles about the x, y and z axes, changing the orientation and pose of the model.
In an alternative embodiment, the rotation transformation may be represented and calculated using euler angles or quaternions, with random angular rotation of the model in three dimensions. For example, the terminal device may generate three random angle values representing the rotation angles around the x-axis, the y-axis and the z-axis, respectively, and then apply these rotations sequentially to the model, randomizing their orientation.
In an alternative embodiment, the rotation transformation may be specifically tuned according to the characteristics of the model and environmental constraints. For example, the terminal device may perform a small angle rotation (e.g., within + -15 degrees) on the plant model primarily in the y-axis (axis perpendicular to the ground) to maintain its approximately vertical growth characteristics, while performing a completely random rotation (0 to 360 degrees) in the horizontal plane (x-axis and z-axis) to increase the change in viewing angle, whereas for a debris model with no apparent orientation requirements, a completely random rotation may be performed in all three axes.
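The per-type ranges in the examples above (plant height scale 0.7–1.5, width 0.9–1.1, tilt within ±15 degrees about the horizontal axes but free spin about the vertical axis; rocks near-uniform in all axes) can be sketched as a small rule table. The table values are taken from the examples; everything else is hypothetical:

```python
import random

# Hypothetical per-type transform rules built from the ranges in the text.
TRANSFORM_RULES = {
    "plant": {"scale_xy": (0.9, 1.1), "scale_z": (0.7, 1.5), "tilt": 15.0},
    "rock":  {"scale_xy": (0.8, 1.3), "scale_z": (0.8, 1.3), "tilt": 180.0},
}

def random_transform(model_type, rng=random):
    """Draw a random per-axis scale and Euler rotation for one model instance."""
    rule = TRANSFORM_RULES[model_type]
    sx = rng.uniform(*rule["scale_xy"])
    sy = rng.uniform(*rule["scale_xy"])
    sz = rng.uniform(*rule["scale_z"])
    yaw = rng.uniform(0.0, 360.0)                    # free spin about the vertical axis
    pitch = rng.uniform(-rule["tilt"], rule["tilt"])  # limited tilt keeps plants upright
    roll = rng.uniform(-rule["tilt"], rule["tilt"])
    return (sx, sy, sz), (pitch, yaw, roll)
```

Each generated instance then gets its own transform, so repeated copies of the same mesh do not read as duplicates.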
In step S132, it is detected whether or not there is an interpenetration between the generated models.
Wherein, interpenetration refers to the phenomenon that two or more three-dimensional models are unexpectedly overlapped or crossed in space position. When the models are mutually inserted, visual unnatural effects and violation of physical rules can be caused, and reality and rationality of the scene are affected.
In an alternative embodiment, detection of the interpenetration may be achieved by a collision volume detection mechanism that determines whether there is spatial overlap between different models by comparing their collision boundaries. For example, the terminal device may create a simplified crash box or ball for each generated model and then check whether there is an overlap region between the crash volumes, and consider the models to be interleaved if there is an overlap region.
In an alternative embodiment, a multi-level detection strategy may be used to detect interpenetration, improving detection efficiency and accuracy. For example, the terminal device may first use fast but rough bounding box detection to initially screen for model pairs that may be interpenetrating, and then perform more accurate but computationally expensive mesh collision detection on these candidate pairs to determine whether they are truly interpenetrating. This layered detection method ensures accuracy while significantly reducing the amount of computation.
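The broad-phase check in this layered strategy is the standard axis-aligned bounding box overlap test; a minimal sketch (the box representation is an assumption):

```python
def aabb_overlap(box_a, box_b):
    """Broad-phase interpenetration test between two axis-aligned bounding
    boxes, each given as a (min_corner, max_corner) pair of 3D points."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    # Boxes overlap only if their extents overlap on every axis.
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))
```

Only pairs that pass this cheap test would proceed to the expensive mesh-level collision check.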
In step S133, the interpenetrating model is removed.
The process of removing interpenetrating models refers to determining, after two or more models are found to overlap or intersect in space, which models to retain and which to delete according to certain rules. This step aims to ensure that all models in the finally generated scene conform to physical rules, without unnatural interpenetration.
In an alternative embodiment, the removal of interpenetrating models may be based on priority rules, removing the less important models. For example, the terminal device may assign a priority score to each model according to factors such as model size, type and generation order; when two models are found to interpenetrate, it retains the higher-priority model and removes the lower-priority one. This ensures that the most important elements in the scene are preserved.
In an alternative embodiment, removing the interpenetrating model may employ a strategy of attempting to adjust its position rather than deleting it directly. For example, after the terminal device finds that two models interpenetrate, it can try to fine-tune the position of one of them within a small range, for example finding a nearby position that does not cause interpenetration; only when interpenetration remains unavoidable after multiple adjustment attempts is removal of the model considered. This method preserves the generated models to the maximum extent and improves scene richness.
In a specific application, after the terminal device generates a set of simulated vegetation models in the ambient light shielding area, a random scaling factor is applied to each newly generated vegetation model; for example, a shrub may be scaled to 0.9 times its original height and 1.1 times its original width, with a 15-degree rotation about the y-axis, giving the vegetation a natural growth appearance. Subsequently, the terminal device detects a 25% volumetric overlap between this shrub and a previously generated stone model, an interpenetrating state. The terminal device tries to find an alternative position within 0.3 meters of the original position, but interpenetration remains unavoidable, and it finally decides to remove the shrub model according to the priority rule (assuming the stone has higher priority), ensuring that all retained models in the final scene conform to the physical distribution rules.
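The priority-based removal rule can be sketched as a greedy pass: place models from highest to lowest priority and drop any model whose bounding box overlaps one already kept. This is one reading of the rule (the reposition-attempt variant is omitted), and the data layout is hypothetical:

```python
def boxes_overlap(a, b):
    """AABB overlap test over (min_corner, max_corner) boxes."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def remove_interpenetrating(models):
    """models: dicts with 'box' and 'priority' (higher wins). Greedily keep
    the higher-priority member of every interpenetrating pair."""
    kept = []
    for model in sorted(models, key=lambda m: -m["priority"]):
        if not any(boxes_overlap(model["box"], k["box"]) for k in kept):
            kept.append(model)
    return kept
```

In the shrub-versus-stone example above, the stone's higher priority means the overlapping shrub is the one removed.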
In the method for generating a model provided in an embodiment of the present application, the method further includes:
Receiving user adjustment of one or more of the detection point density, the ray count and the ambient light shielding judgment threshold;
and re-executing the model generation process according to the adjusted parameters.
By the method provided by this embodiment, the terminal device can flexibly respond to personalized user demands; by adjusting the key parameters, the model generation result is updated in real time, improving the accuracy of model distribution, enhancing the interactivity and usability of the system, and achieving the optimal model generation effect in different application scenarios.
The parameter adjustment refers to modification operation of various key variables in the control model generation process. These parameters directly affect the number, distribution location, and decision accuracy of the final generated model.
In an alternative embodiment, the detection point density parameter is used to control how many detection points are generated within the specified spatial range. For example, the terminal device may provide a slider interface element allowing the user to adjust the density value from the initial 10 to 20, meaning the space is divided more finely, increasing from the original 10×10×10=1000 detection points to 20×20×20=8000 detection points and thereby obtaining a finer model distribution.
In an alternative embodiment, the detection point density parameter may be adjusted via a graphical user interface, with a value range of an integer between 1 and 100. For example, when the detection point density is set to a lower value such as 5, the terminal device generates a sparse distribution of detection points in space, suitable for fast preview or low-end devices; when it is set to a higher value such as 50, a very dense detection point network is generated, suitable for professional scene production requiring precise model placement.
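The density-to-point-count relationship above (density 10 → 1000 points, 20 → 8000) follows from a uniform density³ grid over the spatial range; a minimal sketch with illustrative names:

```python
def generate_detection_points(min_corner, max_corner, density):
    """Divide the spatial range into a density x density x density grid and
    return one detection point at the centre of each cell."""
    points = []
    for i in range(density):
        for j in range(density):
            for k in range(density):
                t = [(idx + 0.5) / density for idx in (i, j, k)]  # cell-centre fractions
                points.append(tuple(lo + f * (hi - lo)
                                    for lo, hi, f in zip(min_corner, max_corner, t)))
    return points
```

Raising the density slider simply regenerates this grid at the new resolution; the cubic growth explains why high densities are reserved for precise placement work.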
In an alternative embodiment, the ray count parameter determines the number of rays emitted from each detection point to detect the surrounding environment. For example, the terminal device may allow the user to adjust the default 5 rays to 8, so that the system emits rays from each valid detection point in 8 different directions, collecting more comprehensive environmental information and improving the accuracy of ambient light shielding area identification.
In an alternative embodiment, the ray count parameter may be set via a numerical input box, with an effective range of typically 3 to 20. For example, when the count is set to 3, the terminal device performs only the most basic environment detection, with high processing speed but low accuracy; when it is set to 12 or more, the terminal device performs all-round environment sampling, which significantly improves the accuracy and naturalness of model placement despite the increased computation.
In an alternative embodiment, the ambient light shielding decision threshold is a critical value for determining whether the detection point is located in an ambient light shielding region. For example, the terminal device may allow the user to adjust the default threshold of 0.5 to 0.7, which means that the system will use more stringent criteria to determine the ambient light blocking area, and only areas where light is blocked above 70% will be considered valid model placement positions.
In an alternative embodiment, the ambient light shielding determination threshold may be adjusted by a percentage slider, with a value in the range of a fraction between 0 and 1. For example, when the threshold is set to a lower value, such as 0.3, the terminal device can identify more potential ambient light shielding areas and generate more models, and when the threshold is set to a higher value, such as 0.8, the models can be generated only in obvious corners or depth shielding areas, and the number is reduced but the positions are more accurate.
The re-execution of the model generation process refers to clearing the previous calculation result according to the parameter value newly adjusted by the user, and re-performing the complete calculation and generation operation once according to the complete model generation flow.
In an alternative embodiment, re-executing the model generation process includes purging the existing model and executing the generation flow with the new parameters. For example, after receiving an instruction from the user to increase the density of detection points from 10 to 20, the terminal device first removes all generated model instances in the scene, then uses the new density value 20 to divide the space, generates a new set of detection points, and continues to perform subsequent screening, detection, and model generation steps based on the points.
In a specific application, the terminal device allows the user to adjust the parameters in a live preview mode. The user firstly adjusts the density of the detection points from a default value of 10 to 15, the system immediately regenerates the detection points and displays the distribution of the detection points, then the user increases the number of rays from 5 to 8, the system recalculates the environment detection result of each detection point, and finally the user adjusts the environment light shielding judgment threshold value from 0.5 to 0.6, and the system updates the distribution position of the model in real time. In the whole process, the terminal equipment keeps continuous rendering of the scene, so that a user can intuitively see the influence of each parameter adjustment on a final result, and the user is helped to quickly find the parameter combination which is most in line with the expected effect.
In the method for generating a model provided in an embodiment of the present application, the method further includes:
calculating the ambient light shielding intensity value of each target detection point;
dividing the target detection points into a plurality of levels according to the ambient light shielding intensity values;
Different model generation strategies are set according to different grades, including model distribution density and model types.
According to the method provided by this embodiment, the terminal device can intelligently adjust the model generation strategy based on the different levels of ambient light shielding intensity, realizing a model generation effect that better matches real natural distribution laws and improving the visual realism and diversity of the generated models. At the same time, the hierarchical treatment makes the generated result finer and gives it a sense of layering.
Wherein the ambient light shielding intensity value is a numerical indicator characterizing the degree to which the target detection point is shielded by surrounding buildings.
In an alternative embodiment, the ambient light shielding intensity value may be calculated by analyzing the ray detection results around the target detection point, indicating the degree to which ambient light at the point is blocked by surrounding objects. For example, the terminal device may count, based on the ray detection results, the proportion of rays around the target detection point that are blocked: the higher the proportion of blocked rays, the greater the ambient light shielding intensity value of the point. Alternatively, the terminal device may calculate the average distance between the hit points and the target detection point in the ray detection results: the closer the distance, the greater the ambient light shielding intensity value.
In an alternative embodiment, the ambient light shielding intensity value may be a value between 0 and 1 obtained by normalizing the multi-directional ray detection results, where 0 indicates no shielding at all and 1 indicates complete shielding. For example, the terminal device may calculate the proportion of rays blocked by buildings among the rays emitted in multiple directions from the target detection point and use that proportion as the ambient light shielding intensity value, or the terminal device may calculate the ambient light shielding intensity value by a specific formula based on the distance relationship between the ray-detected hit points and the target detection point.
In an alternative embodiment, the levels may be divided by setting a plurality of threshold intervals of ambient light shielding intensity values, with different intervals corresponding to different level categories. For example, the terminal device may divide the ambient light shielding intensity values into three levels: low (0-0.3), medium (0.3-0.7), and high (0.7-1). Alternatively, the terminal device may define more levels, such as five or ten, according to actual requirements, to achieve finer control over model generation.
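The blocked-ray ratio and the three-level bucketing described above can be sketched as follows. Function names are illustrative; the 0.3/0.7 thresholds follow the example values in the text.

```python
def ao_intensity(ray_results):
    """Fraction of rays blocked by geometry: 0 means fully open, 1 fully shielded.

    `ray_results` holds one entry per emitted ray; None means the ray hit nothing.
    """
    blocked = sum(1 for hit in ray_results if hit is not None)
    return blocked / len(ray_results)

def ao_level(intensity, thresholds=(0.3, 0.7)):
    """Bucket an intensity value into low / medium / high shielding levels."""
    if intensity < thresholds[0]:
        return "low"
    if intensity < thresholds[1]:
        return "medium"
    return "high"
```

A finer grading (five or ten levels, as mentioned above) only changes the `thresholds` tuple and the returned labels.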
Wherein the model generation strategy is a model arrangement rule and parameter setting adopted for different ambient light shielding level areas.
In an alternative embodiment, the model generation strategy includes adjusting the distribution density of the models: increasing the model generation density in areas where the ambient light shielding intensity is high, and decreasing it in areas where the shielding intensity is low. For example, the terminal device may set a higher model generation probability for high-shielding-level regions so that more models are generated in deeply shielded regions such as building corners, or set a lower model generation probability for low-shielding-level regions so that fewer models are generated in open areas, thereby conforming to the distribution of objects in nature.
In an alternative embodiment, the model generation strategy further comprises selecting different types of models according to the shielding level, so that particular model types are more likely to appear in regions of a particular shielding level. For example, the terminal device may generate more shade-tolerant plants or waste deposits such as moss and ferns in high-shielding-level areas, more adaptable plants such as shrubs in medium-shielding areas, and more sun-loving plants or artificially placed items in low-shielding areas.
In a specific application, the terminal device implements a differentiated model generation policy according to the four shielding levels previously divided. For heavily shielded areas (0.75-1), the terminal device sets the highest model density (e.g. 100%) and mainly generates models such as moss, garbage piles, and sundries; for moderately shielded areas (0.5-0.75), it uses a medium-high density (e.g. 70%) and generates shrubs and small vegetation; for lightly shielded areas (0.25-0.5), it uses a medium-low density (e.g. 40%) and generates more decorative elements; for slightly shielded areas (0-0.25), it uses the lowest density (e.g. 10%) and mainly generates models suitable for open areas. This hierarchical generation strategy makes the final scene conform to natural laws, with a richer and more varied visual effect.
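A minimal sketch of the four-level differentiated strategy, using the example densities and model types from the paragraph above. All names and figures are illustrative, not part of the disclosed implementation.

```python
import random

# Hypothetical per-level strategies mirroring the example figures in the text.
STRATEGIES = {
    "heavy":    {"probability": 1.0, "models": ["moss", "garbage_pile", "sundries"]},
    "moderate": {"probability": 0.7, "models": ["shrub", "small_vegetation"]},
    "light":    {"probability": 0.4, "models": ["decorative_element"]},
    "slight":   {"probability": 0.1, "models": ["open_area_prop"]},
}

def level_for(intensity):
    """Map a 0-1 shielding intensity to one of the four example levels."""
    if intensity >= 0.75:
        return "heavy"
    if intensity >= 0.5:
        return "moderate"
    if intensity >= 0.25:
        return "light"
    return "slight"

def maybe_generate(intensity, rng):
    """Return a model name with the level's probability, or None to skip the point."""
    strategy = STRATEGIES[level_for(intensity)]
    if rng.random() < strategy["probability"]:
        return rng.choice(strategy["models"])
    return None
```

Here the density is implemented as a per-point generation probability, which is one simple reading of "model distribution density".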
In the method for generating a model provided in an embodiment of the present application, the method further includes:
introducing environmental attributes including one or more of temperature, humidity, terrain type;
Establishing a mapping relation between the environmental attribute and the adaptability of different models;
and screening a model subset suitable for the environment attribute from a model library according to the environment attribute of each target detection point position for generation.
Wherein, the environment attribute is a parameter set describing the environment characteristics of a specific area or position and is used for representing the natural condition or physical state of the area.
In an alternative embodiment, the environmental attribute may be a numerical or categorical indicator characterizing the environmental conditions of a particular area, describing that area's climate, topographical features, and so on. For example, environmental attributes may include numerical attributes such as temperature (in degrees Celsius, e.g., 25 ℃) and humidity percentage (e.g., 65%), as well as categorical attributes such as terrain type (plain, mountain, desert, etc.).
The mapping relation is a set of correspondence rules between environmental attribute parameters and model suitability scores, used to determine how well different models fit various environmental conditions.
In an alternative embodiment, the mapping relation may be a set of rules or mathematical functions used to transform environmental attribute values into model suitability scores, so as to determine whether a model is suitable for generation in a particular environment. For example, for a "moss" model, a mapping rule may be established so that the suitability score is high when the humidity is greater than 70%, when the temperature is between 5 ℃ and 25 ℃, and when the terrain type is rock or tree trunk.
Wherein the subset of models is a set of models selected from a complete model library based on environmental suitability, the models being adapted to be generated under specific environmental conditions.
In an alternative embodiment, the screening process for the subset of models includes calculating a suitability score for each model under the current environmental conditions and selecting the models whose scores exceed a threshold as candidate generation objects, to ensure that the generated models are coordinated with the environment. For example, for an area with a temperature of 30 ℃ and a humidity of 20%, the system calculates a suitability score for each model in the model library and screens out the models with scores higher than 0.7 (out of 1) to form a subset of models suitable for the environment.
In a specific application, when processing a target point that is close to a water source, with a temperature of 22 ℃, a humidity of 75%, and a hill terrain type, the terminal device first acquires the environmental attribute data of the target point, and then calculates a suitability score for each model in the model library according to the pre-established mapping relation. The system finds that the "fern", "moss", and "mushroom" models score 0.92, 0.88, and 0.85 respectively, much higher than the other models, so these three models are screened out to form the model subset. Finally, the system randomly selects one of the three models and generates an instance of it at the target point location, ensuring that the generated model is consistent with the environmental conditions.
In a specific application of this embodiment, the terminal device first creates an environmental attribute profile for the scene area containing data for the three dimensions of temperature, humidity, and terrain type, as shown in the schematic diagram of a generative model in fig. 5. For each determined target point, the terminal device reads the environmental parameters of the location, for example a temperature of 18 ℃, a humidity of 82%, and a rocky terrain type detected at a certain corner. The terminal device then queries the mapping between environmental attributes and model suitability and finds that moss, lichen, and small ferns have the highest suitability scores under such conditions. Finally, the terminal device randomly selects one of the high-suitability models and generates the corresponding model at the target point, so that the generated model distribution better conforms to natural laws, enhancing the realism and immersion of the scene.
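The fitness-scoring and subset-filtering step can be sketched as follows. The rules for the hypothetical "moss" model follow the example thresholds given earlier; the rule shape (one callable per attribute) and the 0.2 fallback score are illustrative assumptions.

```python
def fitness(model_rules, env):
    """Average the per-attribute scores a model's mapping rules give for an environment."""
    scores = [rule(env) for rule in model_rules]
    return sum(scores) / len(scores)

# Hypothetical mapping rules for a "moss" model, following the example in the text.
moss_rules = [
    lambda env: 1.0 if env["humidity"] > 70 else 0.2,
    lambda env: 1.0 if 5 <= env["temperature"] <= 25 else 0.2,
    lambda env: 1.0 if env["terrain"] in ("rock", "trunk") else 0.2,
]

def suitable_subset(library, env, threshold=0.7):
    """Keep only the models whose suitability score clears the threshold."""
    return [name for name, rules in library.items() if fitness(rules, env) >= threshold]
```

At generation time one model would then be chosen at random from the returned subset for each target point.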
Also disclosed in the present exemplary embodiment is a model generating apparatus, and fig. 6 is a composition diagram of a model generating apparatus in an exemplary embodiment of the present disclosure. As shown in fig. 6, the apparatus includes:
The acquisition module is used for acquiring the building model in the three-dimensional scene;
the range determining module is used for determining a space range in the three-dimensional scene;
The detection point generation module is used for generating a plurality of detection points according to preset density in the space range;
The detection point screening module is used for screening the detection points and eliminating the detection points positioned in the building model to obtain effective detection points;
the ray detection module is used for carrying out multi-directional ray detection on each effective detection point and obtaining a ray detection result;
A region identification module for identifying target detection points located in the ambient light shielding region based on the ray detection results, and
And the model generation module is used for generating a preset model at the target detection point position.
Optionally, acquiring the building model in the three-dimensional scene includes:
Receiving a user-provided building model, and/or
A simplified version of the building model is generated.
Optionally, the method further comprises:
The building model is given specific physical material identification for distinguishing different objects during ray detection.
Optionally, determining a spatial range in the three-dimensional scene includes:
Creating a bounding box as a spatial range;
and adjusting the size of the bounding box according to the volume size set by the user.
Optionally, generating the plurality of detection points at the preset density in the spatial range includes:
Uniformly dividing the space range into a plurality of small blocks according to a preset density value;
the three-dimensional coordinates of each patch are recorded as the probe point coordinates.
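The uniform subdivision above can be sketched as follows. The helper name is hypothetical; here the cell centers are recorded as the probe coordinates.

```python
def probe_points(bounds_min, bounds_max, density):
    """Uniformly subdivide a bounding box into density^3 cells and record
    one probe coordinate (the cell center) per cell."""
    (x0, y0, z0), (x1, y1, z1) = bounds_min, bounds_max
    sx, sy, sz = (x1 - x0) / density, (y1 - y0) / density, (z1 - z0) / density
    return [(x0 + (i + 0.5) * sx, y0 + (j + 0.5) * sy, z0 + (k + 0.5) * sz)
            for i in range(density) for j in range(density) for k in range(density)]
```

Increasing the density value therefore grows the point count cubically, which is why the density slider directly controls generation cost.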
Optionally, screening the plurality of probe points includes:
performing collision detection on each detection point, and detecting whether the detection point is positioned in the building model;
If the probe point is located inside the building model, the probe point is marked as an invalid probe point.
Optionally, the method further comprises:
Generating random seeds;
randomly fine-tuning the positions of the effective detection points according to the random seeds to obtain fine-tuned effective detection points;
wherein performing a multi-directional ray detection for each active detection point comprises:
And performing multi-directional ray detection on the trimmed effective detection points.
Optionally, randomly fine-tuning the position of the valid probe point according to the random seed includes:
The position of each effective probe point is randomly shifted within a range not exceeding the distance between adjacent probe points.
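The seeded random fine-tuning can be sketched as follows. Here each offset is kept within half the grid spacing, a conservative reading of "not exceeding the distance between adjacent probe points"; the function name is illustrative.

```python
import random

def jitter_points(points, spacing, seed):
    """Offset each probe point by less than the grid spacing so the generated
    models do not line up in obviously man-made rows."""
    rng = random.Random(seed)  # deterministic: the same seed reproduces the layout
    half = spacing / 2
    return [(x + rng.uniform(-half, half),
             y + rng.uniform(-half, half),
             z + rng.uniform(-half, half)) for (x, y, z) in points]
```

Storing the seed with the scene lets the same "random" placement be regenerated exactly on a later run.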
Optionally, performing multi-directional ray detection on each active detection point includes:
emitting rays from each effective detection point in a plurality of preset directions;
Recording the hit point position and the hit point normal of each ray;
Wherein the number of the plurality of preset directions is at least three.
Optionally, identifying the target detection points located in the ambient light shielding area based on the ray detection results includes:
Analyzing the ray detection result of each effective detection point;
Identifying a detection point that simultaneously satisfies the following conditions as a target detection point:
a ray detects an object;
the ray detection distance is greater than a preset threshold;
the detected object has a valid physical material; and
a ray detects the ground.
Optionally, identifying the target detection points located in the ambient light shielding area based on the ray detection results further includes:
Calculating the average value of the ray hit point normals of each detection point;
Comparing the difference between the normal average value and the normal of the single wall body;
And determining whether the detection point is positioned at the junction of the wall body and the ground based on the difference.
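The normal-averaging test above can be sketched as follows: if the averaged hit normal differs markedly from a single wall normal, the point is seeing more than one surface (for example wall plus ground) and so sits at their junction. The tolerance value is an illustrative assumption.

```python
def average_normal(normals):
    """Component-wise mean of a list of (x, y, z) hit normals."""
    n = len(normals)
    return tuple(sum(v[i] for v in normals) / n for i in range(3))

def at_wall_ground_junction(hit_normals, wall_normal=(1.0, 0.0, 0.0), tol=0.25):
    """True when the averaged hit normal deviates from the single wall normal
    by more than `tol`, i.e. the rays hit differently oriented surfaces."""
    avg = average_normal(hit_normals)
    diff = sum((a - w) ** 2 for a, w in zip(avg, wall_normal)) ** 0.5
    return diff > tol
```

A point whose rays all hit the same flat wall averages back to that wall's normal and is rejected; a corner point mixing wall and ground normals passes the test.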
Optionally, generating the preset model at the target detection point position includes:
Obtaining a model list provided by a user;
creating a hierarchical instanced static mesh instance for each model type;
And randomly selecting one model in the model list at each target detection point position for instantiation generation.
Optionally, randomly selecting one of the models in the model list at each target detection point location for instantiation generation includes:
Applying a random scaling and rotation transformation to the generated model;
detecting whether interpenetration exists between the generated models;
The interpenetrating model is removed.
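The random transform and interpenetration-removal steps can be sketched as follows. The minimum-separation check is a cheap stand-in for a true mesh-overlap test, and the scale and yaw ranges are illustrative assumptions.

```python
import random

def place_models(points, model_names, min_separation, seed=0):
    """Pick a random model per point with a random scale and yaw, then drop any
    instance closer than min_separation to an already placed one (a simple
    proxy for detecting interpenetrating models)."""
    rng = random.Random(seed)
    placed = []
    for p in points:
        too_close = any(
            sum((a - b) ** 2 for a, b in zip(p, q["pos"])) < min_separation ** 2
            for q in placed)
        if too_close:
            continue  # remove the would-be interpenetrating instance
        placed.append({"pos": p,
                       "model": rng.choice(model_names),
                       "scale": rng.uniform(0.8, 1.2),
                       "yaw": rng.uniform(0.0, 360.0)})
    return placed
```

An engine implementation would instead compare the instances' bounding volumes after applying the scale and rotation, but the keep-first, drop-later structure is the same.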
Optionally, the method further comprises:
receiving user adjustment of one or more of detection point density, ray quantity and ambient light shielding judgment threshold;
and re-executing the model generating process according to the adjusted parameters.
Optionally, the method further comprises:
calculating the ambient light shielding intensity value of each target detection point;
dividing the target detection points into a plurality of levels according to the ambient light shielding intensity values;
Different model generation strategies are set according to different grades, including model distribution density and model types.
Optionally, the method further comprises:
introducing environmental attributes including one or more of temperature, humidity, terrain type;
establishing a mapping relation between the environmental attribute and the adaptability of different models;
And screening a model subset suitable for the environmental attribute from the model library according to the environmental attribute of each target detection point position for generation.
According to the method provided by the embodiment, the environment light shielding area can be automatically identified according to the distribution characteristics of the building model in the three-dimensional scene and the model can be generated at a proper position, so that an art engineer is not required to manually put each small model, the scene creation efficiency is greatly improved, the naturalness and the sense of reality of the generated model are ensured, and the distribution characteristics of objects at corner positions in a natural law are met.
The specific details of the unit modules in the foregoing embodiments have been described in detail in the corresponding model generating method, and in addition, the model generating device further includes other unit modules corresponding to the model generating method, so that details are not repeated herein.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Fig. 7 is a schematic diagram of a structure of a computer-readable storage medium in an exemplary embodiment of the present disclosure. As shown in fig. 7, a program product 1100 according to an embodiment of the present disclosure is described, on which a computer program is stored which, when being executed by a processor, implements the method steps of the model generation method described above. According to the method provided by the embodiment, the environment light shielding area can be automatically identified according to the distribution characteristics of the building model in the three-dimensional scene and the model can be generated at a proper position, so that an art engineer is not required to manually put each small model, the scene creation efficiency is greatly improved, the naturalness and the sense of reality of the generated model are ensured, and the distribution characteristics of objects at corner positions in a natural law are met.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code embodied in a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency, etc., or any suitable combination of the foregoing.
An electronic device 1000 in the present exemplary embodiment is described below with reference to fig. 8. The electronic device 1000 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
Referring to fig. 8, the electronic device 1000 is embodied in the form of a general purpose computing device. Components of electronic device 1000 may include, but are not limited to, at least one processor 1010, at least one memory 1020, a bus 1030 connecting the different system components (including processor 1010 and memory 1020), and a display unit 1040.
The memory 1020 stores program code that can be executed by the processor 1010 such that the processor 1010 performs the specific method steps of the model generation method described above via execution of the executable instructions. According to the method provided by the embodiment, the environment light shielding area can be automatically identified according to the distribution characteristics of the building model in the three-dimensional scene and the model can be generated at a proper position, so that an art engineer is not required to manually put each small model, the scene creation efficiency is greatly improved, the naturalness and the sense of reality of the generated model are ensured, and the distribution characteristics of objects at corner positions in a natural law are met.
The electronic device may also include a power supply assembly configured to perform power management of the electronic device, a wired or wireless network interface configured to connect the electronic device to a network, and an input output (I/O) interface. The electronic device may operate based on an operating system stored in memory, such as Android, iOS, windows, mac OS X, unix, linux, freeBSD, or the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, an electronic device, or a network device, etc.) to perform the method according to the embodiments of the present invention.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

1. A method of generating a model, the method comprising:
acquiring a building model in a three-dimensional scene;
Determining a spatial range in the three-dimensional scene;
Generating a plurality of detection points in the space range according to preset density;
Screening the plurality of detection points, and removing the detection points positioned in the building model to obtain effective detection points;
Performing multi-directional ray detection on each effective detection point to obtain a ray detection result;
identifying target detection points located in the ambient light shielding area based on the ray detection results, and
And generating a preset model at the target detection point position.
2. The method of generating a model according to claim 1, wherein the acquiring the building model in the three-dimensional scene includes:
Receiving a user-provided building model, and/or
A simplified version of the building model is generated.
3. The model generation method according to claim 1, characterized in that the method further comprises:
and assigning specific physical material identifiers to the building model for distinguishing different objects during ray detection.
4. The method of generating a model of claim 1, wherein said determining a spatial extent in said three-dimensional scene comprises:
creating a bounding box as the spatial range;
And adjusting the size of the bounding box according to the volume size set by the user.
5. The model generation method according to claim 1, wherein the generating a plurality of detection points at a preset density in the spatial range includes:
uniformly dividing the space range into a plurality of small blocks according to a preset density value;
the three-dimensional coordinates of each patch are recorded as the probe point coordinates.
6. The model generation method of claim 1, wherein the screening the plurality of probe points comprises:
Performing collision detection on each detection point, and detecting whether the detection point is positioned inside the building model;
and if the detection point is positioned in the building model, marking the detection point as an invalid detection point.
7. The model generation method according to claim 1, characterized in that the method further comprises:
Generating random seeds;
randomly fine-tuning the positions of the effective detection points according to the random seeds to obtain fine-tuned effective detection points;
wherein said performing a multi-directional ray detection on each of said active detection points comprises:
And performing multi-directional ray detection on the trimmed effective detection points.
8. The model generation method of claim 7, wherein said randomly fine-tuning the location of the active probe points based on the random seed comprises:
The position of each effective probe point is randomly shifted within a range not exceeding the distance between adjacent probe points.
9. The model generation method of claim 1, wherein said multi-directional ray detection of each of said active detection points comprises:
emitting rays from each effective detection point in a plurality of preset directions;
Recording the hit point position and the hit point normal of each ray;
Wherein the number of the plurality of preset directions is at least three.
10. The model generation method according to claim 1, wherein the identifying the target detection point located in the ambient light shielding region based on the ray detection results includes:
Analyzing the ray detection result of each effective detection point;
Identifying a detection point that simultaneously satisfies the following conditions as a target detection point:
a ray detects an object;
the ray detection distance is greater than a preset threshold;
the detected object has a valid physical material; and
a ray detects the ground.
11. The model generation method according to claim 10, wherein the identifying the target detection point located in the ambient light shielding region based on the ray detection results further comprises:
Calculating the average value of the ray hit point normals of each detection point;
comparing the difference between the normal average value and the normal of the single wall body;
And determining whether the detection point is positioned at the junction of the wall body and the ground based on the difference.
12. The method of generating a model according to claim 1, wherein generating a predetermined model at the target detection point position includes:
Obtaining a model list provided by a user;
creating a hierarchical instanced static mesh instance for each model type;
And randomly selecting one model in the model list at each target detection point position for instantiation generation.
13. The model generation method of claim 12, wherein randomly selecting one model in the model list for instantiation at each target detection point location comprises:
Applying a random scaling and rotation transformation to the generated model;
detecting whether interpenetration exists between the generated models;
The interpenetrating model is removed.
14. The model generation method according to claim 1, characterized in that the method further comprises:
receiving user adjustment of one or more of detection point density, ray quantity and ambient light shielding judgment threshold;
and re-executing the model generating process according to the adjusted parameters.
15. The model generation method according to claim 1, characterized in that the method further comprises:
calculating the ambient light shielding intensity value of each target detection point;
dividing the target detection points into a plurality of levels according to the ambient light shielding intensity values;
Different model generation strategies are set according to different grades, including model distribution density and model types.
16. The model generation method according to claim 1, characterized in that the method further comprises:
introducing environmental attributes including one or more of temperature, humidity, terrain type;
Establishing a mapping relation between the environmental attribute and the adaptability of different models;
and screening a model subset suitable for the environment attribute from a model library according to the environment attribute of each target detection point position for generation.
17. A model generation apparatus, characterized in that the apparatus comprises:
The acquisition module is used for acquiring the building model in the three-dimensional scene;
the range determining module is used for determining a space range in the three-dimensional scene;
The detection point generation module is used for generating a plurality of detection points according to preset density in the space range;
The detection point screening module is used for screening the detection points and eliminating the detection points positioned in the building model to obtain effective detection points;
the ray detection module is used for carrying out multi-directional ray detection on each effective detection point and obtaining a ray detection result;
A region identification module for identifying target detection points located in the ambient light shielding region based on the ray detection results, and
And the model generation module is used for generating a preset model at the target detection point position.
18. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the model generation method of any one of claims 1 to 16.
19. An electronic device comprising a processor and a memory, characterized in that the memory stores a computer program, which when executed by the processor implements the steps of the model generation method of any of claims 1 to 16.
CN202510668710.XA 2025-05-21 2025-05-21 Model generation method, device, storage medium and electronic device Pending CN120689494A (en)

Publications (1)

Publication Number: CN120689494A, Publication Date: 2025-09-23
