
CN109242963B - Three-dimensional scene simulation device and equipment - Google Patents

Three-dimensional scene simulation device and equipment

Info

Publication number
CN109242963B
Authority
CN
China
Prior art keywords
target object
scene
three-dimensional model
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811150189.7A
Other languages
Chinese (zh)
Other versions
CN109242963A (en)
Inventor
田浦延
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fushi Technology Co.,Ltd.
Original Assignee
Shenzhen Fushi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Fushi Technology Co Ltd
Priority to CN201811150189.7A
Publication of CN109242963A
Application granted
Publication of CN109242963B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract


The application discloses a three-dimensional scene simulation device and equipment. The three-dimensional scene simulation device is used to simulate the configuration of a target object in a scene and includes an information acquisition module, a modeling module, a coupling module, and an application module. The information acquisition module acquires the three-dimensional data of the target object and the three-dimensional data of the scene. The modeling module builds a three-dimensional model of the target object and a three-dimensional model of the scene from their respective three-dimensional data. The coupling module couples the coordinate system of the three-dimensional model of the target object into the coordinate system of the three-dimensional model of the scene. The application module configures the three-dimensional model of the target object within the three-dimensional model of the scene to simulate the configuration of the target object in the scene.

Description

Three-dimensional scene simulation device and equipment
Technical Field
The application relates to the field of intelligent home, in particular to a three-dimensional scene simulation device and equipment.
Background
With rising standards of living, people often need to add new things at home, such as furniture or household robots. However, consumers often cannot tell the actual size of a product available on the market and must either measure on site in person or compare options repeatedly on the internet. Even when a product of an apparently suitable size is found and purchased, it often turns out not to fit the existing home environment, or earlier measurement errors make it unsuitable and force a replacement, which is very troublesome.
Disclosure of Invention
The embodiment of the application provides a three-dimensional scene simulation device and equipment.
The three-dimensional scene simulation device is used for simulating the configuration condition of a target object in a scene and comprises an information acquisition module, a modeling module, a coupling module and an application module. The information acquisition module is used for acquiring three-dimensional data of the target object and three-dimensional data of the scene. The modeling module is used for respectively establishing a three-dimensional model of the target object and a three-dimensional model of the scene according to the three-dimensional data of the target object and the three-dimensional data of the scene. The coupling module is configured to couple a coordinate system of a three-dimensional model of the target object into a coordinate system of a three-dimensional model of the scene. The application module is used for configuring the three-dimensional model of the target object in the three-dimensional model of the scene so as to simulate the configuration condition of the target object in the scene.
The equipment of the embodiment of the application comprises the three-dimensional scene simulation device of the embodiment described above and executes corresponding functions according to the simulation results of the three-dimensional scene simulation device.
According to the three-dimensional scene simulation device and equipment of the present application, the coordinate system of the three-dimensional model of the target object is coupled into the coordinate system of the three-dimensional model of the scene, so that the configuration of the target object in the scene can be simulated by arranging the three-dimensional model of the target object within the three-dimensional model of the scene. This approach is simple, convenient, clear, and intuitive, and helps improve user experience.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic block diagram of a three-dimensional scene simulation apparatus according to an embodiment of the present application;
FIG. 2 is a flow chart of a three-dimensional scene simulation method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a device according to an embodiment of the present application;
FIG. 4 is a schematic block diagram of a three-dimensional scene simulation apparatus according to a first embodiment of the present application;
FIG. 5 is a flow chart of a three-dimensional scene simulation method according to the first embodiment of the present application;
FIG. 6 is a flow chart of a three-dimensional scene simulation method according to the first embodiment of the present application;
FIG. 7 is a flow chart of a three-dimensional scene simulation method according to the first embodiment of the present application;
FIG. 8 is a flow chart of a three-dimensional scene simulation method according to the first embodiment of the present application;
FIG. 9 is a flow chart of a three-dimensional scene simulation method according to the first embodiment of the present application;
FIG. 10 is a schematic block diagram of a three-dimensional scene simulation apparatus according to a second embodiment of the present application;
FIG. 11 is a flow chart of a three-dimensional scene simulation method according to a second embodiment of the present application;
FIG. 12 is a flow chart of a three-dimensional scene simulation method according to a second embodiment of the present application;
FIG. 13 is a flow chart of a three-dimensional scene simulation method according to the second embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present application and are not to be construed as limiting the present application.
In the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly. For example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical, or the elements may communicate with each other; it may be direct, or indirect through an intermediate medium, and it may be an internal communication between two elements or an interaction between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
The following disclosure provides many different embodiments, or examples, for implementing different features of the application. In order to simplify the present disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the application. Furthermore, the present application may repeat reference numerals and/or letters in the various examples, which are for the purpose of brevity and clarity, and which do not themselves indicate the relationship between the various embodiments and/or arrangements discussed. In addition, the present application provides examples of various specific processes and materials, but one of ordinary skill in the art will recognize the application of other processes and/or the use of other materials.
It should be understood that the embodiments and/or methods described herein are exemplary in nature and should not be construed as limiting the scope of the application. The embodiments and methods described herein represent only some of the technical solutions covered by the technical ideas of the present application; accordingly, the steps of the described methods may be performed in the order indicated, in other orders, or simultaneously, or may be omitted in some cases, and such variations should be regarded as falling within the scope covered by the claims of the present application.
Embodiments of the present application provide a three-dimensional scene simulation apparatus 10 and a three-dimensional scene simulation method.
Referring to fig. 1, a three-dimensional scene simulation apparatus 10 according to an embodiment of the present application is used for simulating a configuration of a target object in a scene. The three-dimensional scene simulation apparatus 10 includes an information acquisition module 12, a modeling module 14, a coupling module 16, and an application module 18.
The information acquisition module 12 is configured to acquire three-dimensional data of a scene to be simulated and three-dimensional data of a target object configured in the scene. The modeling module 14 is configured to establish a three-dimensional model of the scene and a three-dimensional model of the target object according to the three-dimensional data of the scene and the three-dimensional data of the target object, respectively. The coupling module 16 is for coupling a coordinate system of a three-dimensional model of the target object into a coordinate system of a three-dimensional model of the scene. The application module 18 is configured to configure a three-dimensional model of the target object in the three-dimensional model of the scene to simulate the configuration of the target object in the scene. Such configurations include, but are not limited to, moving and/or flipping a target object in a three-dimensional model of a scene, or interaction and avoidance of a target object with existing structures in a scene.
Referring to fig. 2, the three-dimensional scene simulation method according to the embodiment of the present application is used for simulating the configuration situation of a target object in a scene, and includes the following steps:
step S12: acquiring three-dimensional data of a target object and three-dimensional data of a scene;
step S14: respectively and correspondingly establishing a three-dimensional model of the target object and a three-dimensional model of the scene according to the three-dimensional data of the target object and the three-dimensional data of the scene;
step S16: coupling a coordinate system of the three-dimensional model of the target object into a coordinate system of the three-dimensional model of the scene; and
step S18: a three-dimensional model of the target object is configured in the three-dimensional model of the scene to simulate the configuration of the target object in the scene.
Referring to fig. 3, the present application further provides an apparatus 100, such as a mobile phone, a notebook computer, a tablet computer, a touch-control interactive screen, a door, a vehicle, a robot, an automatic numerical control machine, etc. The apparatus 100 includes at least one three-dimensional scene simulation device 10 of any of the embodiments described above and executes corresponding functions according to the simulation results of the three-dimensional scene simulation device 10. Applications include, but are not limited to, home design, robot control, and simulated interactive games.
According to the three-dimensional scene simulation device 10, the apparatus 100, and the three-dimensional scene simulation method described above, the coordinate system of the three-dimensional model of the target object is coupled into the coordinate system of the three-dimensional model of the scene, so that the real configuration of the target object in the scene can be simulated by arranging the three-dimensional model of the target object in the three-dimensional model of the scene as required. This is simple, convenient, clear, and intuitive, and helps improve user experience.
The three-dimensional scene simulation apparatus 10 and the three-dimensional scene simulation method of the present application can be embodied in two specific embodiments, which are described below in turn. It should be noted that modifications or alternatives made by those skilled in the art in light of these two embodiments also fall within the scope of the application.
Embodiment one:
referring to fig. 4, a three-dimensional scene simulation apparatus 10 according to a first embodiment of the present application is provided for simulating a configuration of a target object in a three-dimensional model of a current scene in real time. In this embodiment, the current scene is a home scene, and the target object is furniture to be placed in the home.
The three-dimensional scene simulation device 10 comprises an information acquisition module 12, a sensing module 13, a modeling module 14, a memory 15, a coupling module 16, an interaction module 17, and an application module 18, which are connected through a bus 11. The modules of the three-dimensional scene simulation device 10 exchange signals and data through the bus 11. The information acquisition module 12 is configured to acquire three-dimensional data of the scene to be simulated and three-dimensional data of the target object to be configured in the scene. In the first embodiment of the present application, the information acquisition module 12 is configured to acquire three-dimensional data of the furniture and three-dimensional data of the home.
The information acquisition module 12 may acquire three-dimensional data of the target object from a merchant's server over a network. For example, the information acquisition module 12 may further comprise a search unit 122 and a download unit 124. The search unit 122 searches the merchant's website for a desired target object according to preset size data of the target object. The download unit 124 downloads the three-dimensional data of the selected target object and saves it in the local memory 15.
The information acquisition module 12 may also sense the three-dimensional data of the scene or of the target object directly through the sensing module 13. For example, the information acquisition module 12 may acquire the corresponding three-dimensional data by capturing the scene or the target object in front of it with a three-dimensional camera provided on a portable terminal such as a mobile phone.
The sensing module 13 comprises a three-dimensional camera. The sensing module 13 is configured to process the output data of the three-dimensional camera to obtain three-dimensional data of the target object and/or three-dimensional data of the scene, and transmit the three-dimensional data of the target object and/or the three-dimensional data of the scene to the information obtaining module 12. In this manner, the information acquisition module 12 is enabled to acquire three-dimensional data of a target object and/or three-dimensional data of a scene.
The three-dimensional camera is used for sensing three-dimensional data of a photographed object. The principle of the three-dimensional camera may be based on structured light, binocular vision, or Time of Flight (TOF), without limitation. It can be understood that, since the three-dimensional camera senses the three-dimensional data of objects in space, three-dimensional models of the target object and of the scene can be reconstructed at the same scale from the sensed three-dimensional data, so that the real configuration of the target object in the scene can be simulated.
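For illustration, the following minimal sketch shows one standard way the depth output of such a camera can be back-projected into a three-dimensional point cloud using a pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) and the image size are hypothetical values chosen for the example, not values from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an N x 3 point cloud
    using the pinhole camera model; invalid (zero) depths are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Hypothetical intrinsics for a 640x480 depth sensor.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```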
Specifically, the user, a robot, or other equipment designed for the purpose can hold the three-dimensional camera and sense the complete three-dimensional data of the scene or of the target object from a plurality of preset angles. Alternatively, several three-dimensional cameras can be mounted at a plurality of preset positions of the scene to be simulated, such as the corners of an indoor ceiling, to sense the complete three-dimensional data of the scene; the number of three-dimensional cameras is not particularly limited. The information acquisition module 12 performs denoising, stitching, matching, optimization, and other processing on the three-dimensional data sensed by the three-dimensional camera, and stores the processed three-dimensional data in the memory 15 for later use.
It will be appreciated that the sensing module 13 may also include a color camera for obtaining color information of the scene and the target object. A colored three-dimensional model of the scene and the target object, constructed by combining the color information acquired by the color camera with the three-dimensional data acquired by the three-dimensional camera, is more realistic.
The memory 15 may store three-dimensional data of the home and three-dimensional data of the furniture for the user, so that the modeling module 14 may read the required three-dimensional data from the memory 15 when building the three-dimensional model. The memory 15 may be a storage medium provided on the local terminal device, or may be a cloud memory on a network.
The modeling module 14 is configured to establish a three-dimensional model of the target object and a three-dimensional model of the scene according to their respective three-dimensional data. The three-dimensional data includes, but is not limited to, point cloud data carrying three-dimensional coordinates, laser reflection intensity, and color information. The modeling module 14 may perform preprocessing, segmentation, triangulation, and mesh rendering on the point cloud data to complete the three-dimensional model. For example, the point cloud data can be preprocessed by filtering and denoising, data reduction, data interpolation, and the like. The point cloud is then segmented, clustering the whole point cloud into several sub-clouds, each corresponding to an independent object. The point cloud can then be triangulated using a convex hull or concave hull algorithm to facilitate subsequent mesh rendering. After the spatial topology of the point cloud is obtained, textures are mapped onto the mesh to make the objects more lifelike.
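As a concrete illustration of the data-reduction and meshing steps just described, the sketch below performs voxel-grid downsampling and then a 2.5D Delaunay triangulation. It assumes the patch being meshed is roughly height-field-like (for example a floor or tabletop); this is a simplifying assumption for illustration, not the patent's method, and general surfaces would need a full 3D meshing algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def voxel_downsample(points, voxel=0.02):
    """Data reduction: replace all points in each 2 cm voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    centroids = np.zeros((len(counts), 3))
    for axis in range(3):
        centroids[:, axis] = np.bincount(inverse, weights=points[:, axis]) / counts
    return centroids

# Triangulate a roughly planar patch by projecting onto the xy plane;
# mesh.simplices then indexes the triangles of the surface mesh.
cloud = voxel_downsample(np.random.rand(10000, 3))
mesh = Delaunay(cloud[:, :2])
```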
The coupling module 16 is configured to couple the coordinate system of the three-dimensional model of the target object into the coordinate system of the three-dimensional model of the scene through a coordinate transformation operation, so that the configuration of the target object can be simulated in the three-dimensional model of the scene. Specifically, the coupling module 16 applies a preset coordinate transformation algorithm, such as a coordinate translation matrix and/or a coordinate rotation matrix, to convert the coordinate data of the three-dimensional model of the target object in its own reference coordinate system into coordinate data in the reference coordinate system of the scene.
It will be appreciated that, before the three-dimensional model of the target object is coupled into the three-dimensional model of the scene, its coordinate system is established with the target object itself as the reference. The coordinate system of the three-dimensional model of the scene is established with preset points in the scene as the reference. Coupling the three-dimensional model of the target object into the three-dimensional model of the scene means converting its coordinate data into the coordinate system of the scene's three-dimensional model, that is, describing and measuring the three-dimensional model of the target object in the coordinate system of the three-dimensional model of the scene.
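The coupling described here amounts to a rigid-body transform in homogeneous coordinates. A minimal sketch follows; the rotation and translation values are hypothetical placement choices for the example.

```python
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def couple_to_scene(points_object, T_scene_from_object):
    """Convert object-frame coordinates into scene-frame coordinates."""
    homogeneous = np.hstack([points_object, np.ones((len(points_object), 1))])
    return (homogeneous @ T_scene_from_object.T)[:, :3]

# Hypothetical placement: rotate the furniture model 90 degrees about the
# vertical axis and put it 2 m along x, 1 m along y in the scene frame.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
scene_points = couple_to_scene(np.zeros((1, 3)), make_pose(Rz, [2.0, 1.0, 0.0]))
```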
The user may issue instructions via the interaction module 17 to couple the coordinate system of the three-dimensional model of the target object into the coordinate system of the three-dimensional model of the scene to control the coupling module 16 to couple the three-dimensional model of the target object into the three-dimensional model of the scene. The interaction module 17 includes, but is not limited to, a mouse, a keyboard, and a touch screen.
Specifically, in this embodiment, the user may first open the three-dimensional model of the home, and then drag the three-dimensional model of the furniture into the three-dimensional model of the home; or, the user can open the three-dimensional model of the furniture first and then drag the three-dimensional model of the home into the three-dimensional model of the furniture; alternatively, the user may select and open the three-dimensional model of furniture and the three-dimensional model of home at one time.
Of course, a user may couple multiple furniture models to one home model at a time, couple one furniture model to multiple home models at a time, or couple multiple furniture models to multiple home models at a time. The numbers of furniture models and home models coupled to one another are not limited here.
The application module 18 is configured to configure a three-dimensional model of the target object in the three-dimensional model of the scene to simulate the configuration of the target object in the scene. In the first embodiment of the present application, the application module 18 is configured to configure a three-dimensional model of furniture in a three-dimensional model of home, so as to simulate the actual placement effect of the home, and provide a reference for the home design.
In the first embodiment of the present application, the application module 18 includes an interference unit 182, an interaction unit 184, and a measurement unit 186.
The interference unit 182 is configured to judge, according to preset spatial attributes of the three-dimensional data of the scene and of the target object, the interference that occurs when the three-dimensional model of the target object contacts an existing structure in the three-dimensional model of the scene. The spatial attributes include a traversable attribute and a non-traversable attribute. When the interference unit 182 senses that two spatial coordinate groups having the non-traversable attribute are moved into contact, it restricts them from entering each other's spatial regions, allowing movement only along the contact surface between them.
It can be understood that the spatial regions corresponding to solid objects and to existing structures in the scene have the non-traversable attribute, while all other spatial regions have the traversable attribute. In this way, the configuration actions of the three-dimensional model of the target object in the three-dimensional model of the scene are kept consistent with reality: actions that are impossible for the target object in the real scene cannot be realized in the three-dimensional model of the scene either.
The spatial attributes of the three-dimensional data of the target object and of the scene may be preset by the user according to the actual situation, or may be assigned automatically after the modeling module 14 determines the spatial region occupied by each object from the sensed three-dimensional data, after which the user can adjust the attributes according to the actual situation.
In one application scenario, when the three-dimensional model of a table is moved in a direction perpendicular to a wall of the three-dimensional model of the home until the two come into contact, the table model cannot continue moving in that direction, because in the actual scene the table cannot pass through the wall. Here, the two spatial coordinate groups with the non-traversable attribute are the three-dimensional model of the table and the wall of the three-dimensional model of the home, respectively.
In another application scenario, the spatial coordinates of the table are preset as the non-penetrable attribute, the spatial coordinates above the upper surface of the table are preset as the penetrable attribute, the spatial coordinates of the barrel of the pen container are preset as the non-penetrable attribute, and the spatial coordinates of the barrel cavity of the pen container are preset as the penetrable attribute, so that a pen can be placed in the barrel cavity, but the pen cannot penetrate the barrel, and the pen container can be moved along the upper surface of the table, but the pen container cannot penetrate the table. Note that "table" and "pen container" in the examples refer to three-dimensional models of furniture in a three-dimensional model of home, rather than table and pen container in a real-world scenario.
In addition to the traversable and non-traversable attributes, the spatial attributes may also include an elastic attribute. It will be appreciated that, for objects that can deform in a real scene, the traversable and non-traversable attributes alone are not sufficient to describe their behavior. A spatial region with the elastic attribute deforms, up to a preset degree determined by its own elasticity, when it contacts an object with the non-traversable attribute, so that the non-traversable object can enter the originally elastic space within a preset range.
In one scenario, the spatial coordinates of the three-dimensional model of a window covering may be preset with the elastic attribute, with the deformation amount set according to the deformability of the window covering; when the three-dimensional model of the window covering is pulled, it stretches within that deformation amount.
In another scenario, the spatial coordinates of the three-dimensional model of a sponge wall may be preset with the elastic attribute, with the deformation amount set according to the deformability of the sponge wall. When the three-dimensional model of the table is moved in a direction perpendicular to the surface of the sponge wall model until the two come into contact, the sponge wall model is recessed within the deformation range; once the deformation limit is reached, the table model cannot be moved any further in the original direction, and the sponge wall model maintains its deformation in that limit state. In this way, the deformation of the sponge wall in the real scene under the pressure of the table is simulated.
Setting the elastic attribute allows interactions between pieces of furniture, and between furniture and existing structures in the home, to be simulated more realistically in the three-dimensional model of the scene.
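To make the attribute mechanism concrete, the sketch below models each object as an axis-aligned bounding box tagged with a traversability flag and an elastic deformation budget. The attribute names and the simple penetration test are illustrative assumptions; real models would use per-mesh collision geometry.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Volume:
    lo: np.ndarray               # axis-aligned box corner (min)
    hi: np.ndarray               # axis-aligned box corner (max)
    traversable: bool            # True: other objects may pass through
    elastic_budget: float = 0.0  # max allowed penetration depth (m)

def interferes(a, b):
    """Two non-traversable volumes interfere when their boxes overlap
    deeper than the combined elastic deformation budget."""
    if a.traversable or b.traversable:
        return False
    overlap = np.minimum(a.hi, b.hi) - np.maximum(a.lo, b.lo)
    if np.any(overlap <= 0):
        return False              # no contact at all
    depth = overlap.min()         # shallowest penetration axis
    return depth > a.elastic_budget + b.elastic_budget

wall  = Volume(lo=np.array([0.0, 0.0, 0.0]), hi=np.array([0.1, 4.0, 2.5]),
               traversable=False)
table = Volume(lo=np.array([0.05, 1.0, 0.0]), hi=np.array([0.9, 2.0, 0.8]),
               traversable=False)
print(interferes(table, wall))   # True: the table model penetrates the wall
```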
The interference unit 182 prompts the user when it judges that interference has occurred, for example by vibration, by voice broadcast, or by highlighting or flashing the place where the interference occurs.
The interaction unit 184 is configured to receive a signal configuring a three-dimensional model of the target object, and change the position and morphology of the three-dimensional model of the target object in the three-dimensional model of the scene according to the signal.
In this manner, the user can change the position and morphology of the three-dimensional model of the target object in the three-dimensional model of the scene through the interaction unit 184. Specifically, a user may input configuration signals to the three-dimensional scene simulation apparatus 10 through the interaction module 17, the interaction module 17 including, but not limited to, a mouse, a keyboard, and a touch screen. For example, the user may drag or rotate the furniture three-dimensional model with a finger on the touch screen in the three-dimensional model of the home, and the interaction unit 184 changes the position and morphology of the furniture three-dimensional model in the three-dimensional model of the home according to the sensed finger movement.
The measurement unit 186 is configured to calculate relative position information, such as distance, angle, or curvature, between a plurality of markers in the three-dimensional model of the scene and/or the three-dimensional model of the target object. The markers include, but are not limited to, coordinate points, lines, or patterns.
In this way, the user can configure the target object into an ideal state in the three-dimensional model of the scene and then measure what positional changes the target object or the existing structures in the scene would need to reach that state, which makes it convenient to modify and adjust the furniture or the home. The user may select measurement coordinate points through the interaction module 17 and, once selected, click a completion button to cause the measurement unit 186 to start measuring.
For example, the user may select two measurement coordinate points at a time; after acquiring them, the interaction unit 184 sends their coordinate values to the measurement unit 186, which calculates the distance between the two points. Alternatively, the user may select several measurement coordinate points at a time, and the measurement unit 186 calculates the distance between each pair of points or the angle between their connecting lines according to the coordinate values. The user can also set up lines in the three-dimensional model of the target object and/or the scene through the interaction module 17 to measure position information such as included angles.
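The distance and angle measurements described here reduce to standard vector arithmetic on the marker coordinates, as the following minimal sketch shows; the marker coordinates are made-up scene-frame values.

```python
import numpy as np

def distance(p, q):
    """Euclidean distance between two marker points in scene coordinates."""
    return float(np.linalg.norm(np.asarray(q) - np.asarray(p)))

def angle_deg(a, vertex, b):
    """Angle at `vertex` between the connecting lines vertex->a and vertex->b."""
    u = np.asarray(a) - np.asarray(vertex)
    v = np.asarray(b) - np.asarray(vertex)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# e.g. the gap between a sofa corner and a wall corner, both in scene units
print(distance([0.4, 1.2, 0.0], [0.4, 2.0, 0.0]))        # 0.8
print(angle_deg([1, 0, 0], [0, 0, 0], [0, 1, 0]))         # 90.0
```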
The measurement unit 186 may display the measured position information in the associated three-dimensional model for the user's reference.
Note that, since the three-dimensional data is acquired from the real object and the real scene, the position information of the markers in the three-dimensional model of the object and/or the scene measured by the measurement unit 186 reflects the real dimensions of the object and/or the scene.
After seeing the measured distance value, the user can input adjustment data to the three-dimensional scene simulation device 10 to observe the placement condition of the three-dimensional model of the adjusted furniture in the three-dimensional model of the home. The interaction unit 184 is configured to obtain adjustment data of the three-dimensional model of the target object and adjust the three-dimensional model of the target object in the three-dimensional model of the scene according to the adjustment data.
Therefore, the user can decide whether to readjust or to save the adjustment data according to the placement of the adjusted three-dimensional model of the furniture in the three-dimensional model of the home. If the user is not satisfied with the placement, the three-dimensional model of the furniture can be adjusted again in a similar manner until the placement is satisfactory. When the user is satisfied, the adjustment data can be saved to the memory 15 via a save button and sent to the furniture seller with one tap, so that the seller can adjust the dimensions of the furniture according to the adjustment data. Of course, if the user makes several adjustments to the three-dimensional model of the furniture, the interaction unit 184 may send the adjustment data of each adjustment to the memory 15, so that the user can reproduce any earlier adjustment afterwards.
The three-dimensional scene simulation apparatus 10 of the first embodiment of the present application may apply a three-dimensional scene simulation method. The above explanation of the embodiments and advantageous effects of the three-dimensional scene simulation apparatus 10 also applies to the three-dimensional scene simulation method of the present embodiment and is not repeated here to avoid redundancy.
Referring to fig. 1, a three-dimensional scene simulation method according to a first embodiment of the present application is used for simulating a configuration of a target object in a three-dimensional model of a current environment in real time. In this embodiment, the current environment is a home scene, and the target object is furniture that needs to be placed in the home. The three-dimensional scene simulation method comprises the following steps:
step S12: acquiring three-dimensional data of a target object and three-dimensional data of a scene;
step S14: establishing a three-dimensional model of the target object and a three-dimensional model of the scene according to the three-dimensional data of the target object and the three-dimensional data of the scene;
step S16: coupling a coordinate system of the three-dimensional model of the target object into a coordinate system of the three-dimensional model of the scene; and
step S18: a three-dimensional model of the target object is configured in the three-dimensional model of the scene to simulate the configuration of the target object in the scene.
According to the three-dimensional scene simulation method, the coordinate system of the three-dimensional model of the target object is coupled into the coordinate system of the three-dimensional model of the scene, so that the configuration situation of the target object in the scene is simulated according to the configuration situation of the three-dimensional model of the target object in the three-dimensional model of the scene, and the three-dimensional scene simulation method is simple, convenient, clear and visual, and is beneficial to improving user experience.
Referring to fig. 5, step S18 further includes:
step S181: and judging the mutual interference condition of the three-dimensional model of the target object when the three-dimensional model of the target object is configured in the three-dimensional model of the scene according to the preset spatial attribute of the three-dimensional data of the scene and the target object.
The spatial attributes include a traversable attribute and a non-traversable attribute. When two spatial coordinate groups with the non-traversable attribute are sensed to be moved into contact with each other, they are restricted from entering each other's spatial regions.
Referring to fig. 6, step S18 further includes:
step S182: and when the three-dimensional model of the target object is contacted with the three-dimensional model of the scene in the space coordinates which are the same as the non-penetrable attribute, prompting that the three-dimensional model of the target object interferes with the three-dimensional model of the scene.
The prompt includes, but is not limited to, a prompt in the form of a vibration, a voice broadcast, or a highlight or flashing display where interference occurs.
Referring to fig. 7, step S18 further includes:
step S183: and receiving a signal for configuring the three-dimensional model of the target object, and changing the position and the shape of the three-dimensional model of the target object in the three-dimensional model of the scene according to the signal.
Referring to fig. 8, step S18 further includes:
step S184: a plurality of markers for use in the evaluation in a three-dimensional model of the target object and/or scene are acquired. The marks include, but are not limited to, coordinate points, lines, or patterns; and
step S185: and measuring and calculating the relevant position information between the marks in the three-dimensional model of the target object and/or the scene. The location information includes, but is not limited to, for example, between markers: distance, angle, curvature, etc.
Referring to fig. 9, step S18 further includes:
step S186: obtaining adjustment data for the three-dimensional model of the target object; and
step S187: and adjusting the three-dimensional model of the target object in the three-dimensional model of the scene according to the adjustment data.
Embodiment two:
referring to fig. 10, a three-dimensional scene simulation apparatus 10 according to a second embodiment of the present application is provided for simulating a configuration of a target object in a three-dimensional model of a current environment in real time. In this embodiment, the current environment is a home scene, and the target object is a robot that needs to move in the home.
Including, but not limited to, a floor sweeping robot, a floor mopping robot, and a window cleaning robot.
Similar to the first embodiment of the present application, in the second embodiment of the present application, the three-dimensional scene simulation apparatus 10 includes an information acquisition module 12, a sensing module 13, a modeling module 14, a memory 15, a coupling module 16, an interaction module 17, and an application module 18, which are connected through a bus 11. The information acquisition module 12 is configured to acquire three-dimensional data of a target object and three-dimensional data of a scene. The modeling module 14 is configured to establish a three-dimensional model of the target object and a three-dimensional model of the scene according to the three-dimensional data of the target object and the three-dimensional data of the scene, respectively. The coupling module 16 is for coupling a coordinate system of a three-dimensional model of the target object into a coordinate system of a three-dimensional model of the scene. The application module 18 is configured to configure a three-dimensional model of the target object in the three-dimensional model of the scene to simulate the configuration of the target object in the scene.
Similar to the first embodiment of the present application, in the second embodiment of the present application, the application module 18 includes an interference unit 182, an interaction unit 184, and a measurement unit 186. However, in the second embodiment of the present application, the application module 18 further includes a control unit 188.
In order to avoid redundancy, the same or similar parts as those of the first embodiment in the second embodiment will not be described again.
The three-dimensional data of the target object, that is, the three-dimensional data of the robot in the second embodiment of the present application, can be acquired in a manner similar to the acquisition of the three-dimensional data of the home described above. Alternatively, the user may download it from the official website of the robot's seller; or, since the three-dimensional data of the robot may be stored in the robot's own memory, the three-dimensional scene simulation device 10 may send a request instruction to the robot so that the robot transmits its three-dimensional data to the three-dimensional scene simulation device 10. Of course, the user may also hold a mobile device including a three-dimensional camera and photograph the household robot from a plurality of angles to obtain its three-dimensional data.
The control unit 188 is configured to plan a travel route of the target object according to the three-dimensional model of the scene.
In this way, the user can plan the travel route of the three-dimensional model of the robot in the three-dimensional model of the home by means of the control unit 188, so as to achieve planning of the travel route of the robot in a realistic home scene.
Note that the control unit 188 may be configured to plan the travel route automatically or based on user instructions. When planning automatically, the control unit 188 determines feasible travel routes of the robot's three-dimensional model within the home's three-dimensional model according to the three-dimensional model of the robot and the spatial attributes of the three-dimensional model of the home, and then, in combination with the tasks the robot is to complete, provides several travel routes for the user to choose from.
When the control unit 188 plans the travel route based on user instructions, similar to the first embodiment of the present application, the user can move the three-dimensional model of the household robot within the three-dimensional model of the home in the same way the three-dimensional model of furniture is moved. After the movement is completed, the control unit 188 may send the travel route to the memory 15 for saving.
In addition, whether the control unit 188 automatically plans the travel route or plans based on the user's instructions, the user can make individual adjustments to the travel route after planning the travel route. The user may input the adjustment data through the interaction module 17, and then the control unit 188 re-plans the travel route according to the adjustment data.
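One way a control unit such as 188 could derive a route from the home's three-dimensional model is to project the non-traversable volumes onto a floor-plan occupancy grid and search it. The breadth-first sketch below is an illustrative assumption, not the algorithm claimed in the patent.

```python
from collections import deque
import numpy as np

def plan_route(occupancy, start, goal):
    """Breadth-first search over a 2D occupancy grid derived from the
    scene's non-traversable regions; returns a cell path or None."""
    h, w = occupancy.shape
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and not occupancy[nr, nc] \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None

grid = np.zeros((5, 8), dtype=bool)
grid[1:4, 3] = True                       # a wall segment in the floor plan
print(plan_route(grid, (0, 0), (4, 7)))   # cell path around the wall
```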
The target object may also include a motion device 22 and a sensing device 24 thereon.
The movement device 22 is used for driving the target object to move.
The control unit 188 is configured to control the movement device 22 to move the target object in the scene according to the pre-planned travel route of the target object.
In this manner, the user can control the movement of the target object through the control unit 188. Specifically, the exercise device 22 may communicate with the control unit 188 via bluetooth, WI-FI (Wireless-Fidelity), or the like. The control unit 188 may send the designed travel route to the exercise device 22 in the manner described above. The movement device 22 can control the robot to move in a real home scene according to the travel route when receiving the movement instruction sent by the control unit 188.
The sensing device 24 includes, but is not limited to, a three-dimensional camera, an ultrasonic ranging sensor, a gyroscope, an infrared detector, and the like. The three-dimensional camera may be disposed on top of the robot and may be rotated to capture an image of the environment surrounding the robot. The gyroscope is used for sensing the motion state of the target object. The ultrasonic ranging sensor and the infrared detector are used for sensing spatial parameters such as the position, the distance, the size and the like of the obstacle in the advancing process so as to provide references for obstacle avoidance of a target object in the advancing process.
The actual travel situation often varies from the planned route, so the control unit 188 is also configured to control the target object to avoid an obstacle according to the motion state of the target object and the obstacle situation in front of the travel route, which are sensed by the sensing device 24.
After the target object avoids the obstacle, the control unit 188 determines, according to the sensed current position of the target object and with reference to factors such as remaining path length and task completion status, whether to guide the target object back to the originally planned route or to select another, better planned route.
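The rejoin-or-replan decision can be framed as comparing remaining path costs from the robot's current position. In the minimal sketch below, remaining distance stands in for the patent's combination of path length and task completion status, which is a simplifying assumption.

```python
import numpy as np

def remaining_length(route, position):
    """Path length from the waypoint nearest `position` to the route's end."""
    waypoints = np.asarray(route, dtype=float)
    nearest = int(np.argmin(np.linalg.norm(waypoints - np.asarray(position),
                                           axis=1)))
    tail = waypoints[nearest:]
    return float(np.sum(np.linalg.norm(np.diff(tail, axis=0), axis=1)))

def choose_route(position, original_route, alternative_routes):
    """Rejoin the original route unless an alternative is strictly shorter
    from the current position; ties favour the original plan."""
    best, best_cost = original_route, remaining_length(original_route, position)
    for candidate in alternative_routes:
        cost = remaining_length(candidate, position)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

original = [(0, 0), (5, 0), (5, 5)]
detour = [(0, 0), (0, 5), (5, 5)]
print(choose_route((4.0, 0.5), original, [detour]))   # keeps the original route
```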
The three-dimensional scene simulation apparatus 10 of the second embodiment of the present application may apply a three-dimensional scene simulation method. The above explanation of the embodiments and advantageous effects of the three-dimensional scene simulation apparatus 10 also applies to the three-dimensional scene simulation method of the present embodiment and is not repeated here to avoid redundancy.
Referring to fig. 1, a three-dimensional scene simulation method according to the second embodiment of the present application is used for simulating the configuration of a target object in a three-dimensional model of the current environment in real time. In this embodiment, the current environment is a home scene, and the target object is a robot that needs to move in the home. The three-dimensional scene simulation method comprises the following steps:
step S12: acquiring three-dimensional data of a target object and three-dimensional data of a scene;
step S14: establishing a three-dimensional model of the target object and a three-dimensional model of the scene according to the three-dimensional data of the target object and the three-dimensional data of the scene;
step S16: coupling a coordinate system of the three-dimensional model of the target object into a coordinate system of the three-dimensional model of the scene; and
step S18: a three-dimensional model of the target object is configured in the three-dimensional model of the scene to simulate the configuration of the target object in the scene.
According to the three-dimensional scene simulation method, the coordinate system of the three-dimensional model of the target object is coupled into the coordinate system of the three-dimensional model of the scene, so that the configuration situation of the target object in the scene is simulated according to the configuration situation of the three-dimensional model of the target object in the three-dimensional model of the scene, and the three-dimensional scene simulation method is simple, convenient, clear and visual, and is beneficial to improving user experience.
Referring to fig. 11, step S18 includes:
step S188: and planning a travel route of the target object in the scene according to the three-dimensional model of the target object, the three-dimensional model of the scene and the task to be completed by the target object.
Referring to fig. 12, the target object includes a moving device 22, and step S18 includes:
step S189: the movement device 22 is controlled to drive the target object to move in the scene according to the pre-planned travel route of the target object.
Referring to fig. 13, the target object further includes a sensing device 24, and step S18 includes:
step S18a: the spatial parameters of the obstacle object encountered in the travel path and the target object motion state sensed by the sensing device 24 are acquired.
The spatial parameters include, but are not limited to, the position of the obstacle in the scene, the distance to the target object, the three-dimensional data of itself, etc.;
step S18b: controlling the target object to avoid the obstacle according to the sensed motion state of the target object and the spatial parameter of the obstacle; and
Step S18c: and judging whether to guide the target object back to the original planning route or select other better planning routes according to the sensed current position of the target object and referring to the path length and the task completion condition.
It should be noted that an implementation of the present application may adopt any one of the above embodiments alone or combine them; embodiments combining one or more of the above embodiments also fall within the protection scope of the embodiments of the present application.
In the description of the present specification, reference to the terms "certain embodiments," "one embodiment," "some embodiments," "an exemplary embodiment," "an example," "a particular example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processing module, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be captured electronically, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of embodiments of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that changes, modifications, substitutions and variations may be made therein by those of ordinary skill in the art without departing from the scope of the application as defined by the claims and their equivalents.

Claims (9)

1. A three-dimensional scene simulation device for simulating in real time the actual configuration of a target object in a scene, wherein the three-dimensional scene simulation device comprises:
a sensing module, the sensing module comprising a three-dimensional camera;
an information acquisition module, which directly senses, through the three-dimensional camera of the sensing module, three-dimensional data of the scene in which the device is located and three-dimensional data of the current target object;
a modeling module, configured to construct, at the same scale, a three-dimensional model of the target object and a three-dimensional model of the scene according to the three-dimensional data of the target object and the three-dimensional data of the scene sensed by the three-dimensional camera;
a coupling module, configured to convert, according to a preset coordinate transformation algorithm, the coordinate data of the three-dimensional model of the target object in its own reference coordinate system into coordinate data in the reference coordinate system of the scene, so as to couple the coordinate system of the three-dimensional model of the target object into the coordinate system of the three-dimensional model of the scene; and
an application module, configured to arrange the three-dimensional model of the target object in the three-dimensional model of the scene so as to simulate the configuration of the target object in the scene;
wherein the application module comprises an interaction unit and a measurement unit; the interaction unit is configured to receive a signal for configuring the three-dimensional model of the target object and, according to the signal, change the position and form of the three-dimensional model of the target object in the three-dimensional model of the scene; the interaction unit is further configured to set up a plurality of markers for measurement in the three-dimensional model of the target object and/or the scene; the measurement unit is configured to calculate relative position information between the markers in the three-dimensional model of the target object and/or the scene as a reference for adjusting the target object; and the relative position information between the markers comprises a distance, an angle, or a curvature between marker points.

2. The three-dimensional scene simulation device according to claim 1, wherein the application module comprises:
an interference unit, configured to determine, according to preset spatial attributes of the three-dimensional data of the scene and of the target object, the interference that occurs when the three-dimensional model of the target object comes into contact with an existing structure in the three-dimensional model of the scene, wherein the spatial attributes comprise a passable attribute and an impassable attribute, and when the interference unit senses that two groups of spatial coordinates having the impassable attribute are moved into contact with each other, it restricts them from entering the interior of each other's spatial regions.

3. The three-dimensional scene simulation device according to claim 2, wherein the interference unit is configured to indicate that the three-dimensional model of the target object interferes with the three-dimensional model of the scene when spatial coordinates in the two models that both have the impassable attribute come into contact with each other.

4. The three-dimensional scene simulation device according to claim 1, wherein the interaction unit is configured to obtain the calculated relative position information of the markers and to present the effect of adjusting the three-dimensional model of the target object according to the relative position information of the markers.

5. The three-dimensional scene simulation device according to claim 1, wherein the application module comprises a control unit, and the control unit is configured to plan a travel route of the target object in the scene according to the three-dimensional model of the target object, the three-dimensional model of the scene, and a preset task of the target object.

6. The three-dimensional scene simulation device according to claim 5, wherein the target object comprises a motion device, and the control unit is configured to control the motion device, according to the pre-planned travel route of the target object, to drive the target object to move in the scene.

7. The three-dimensional scene simulation device according to claim 5, wherein the target object comprises a sensing device configured to sense the motion state of the target object and the obstacle conditions ahead on the travel route, and the control unit is configured to:
obtain the motion state of the target object and the obstacle conditions ahead on the travel route as sensed by the sensing device;
control the target object to avoid an obstacle according to the motion state of the target object and the obstacle conditions ahead on the travel route; and
after the target object has avoided the obstacle, determine, according to the sensed current position of the target object and with reference to the path length and the task completion status, whether to guide the target object back to the originally planned route or to select another, better planned route.

8. A device, comprising the three-dimensional scene simulation device according to any one of claims 1 to 7, wherein the device performs a corresponding function according to the simulation result of the three-dimensional scene simulation device.

9. The device according to claim 8, wherein the corresponding function comprises any one or more of home design, robot control, and simulated interactive games.
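A note on the computations these claims recite: claims 1 and 4 rely on relative position information between markers (distance, angle, or curvature between marker points), and claims 2 and 3 rely on an interference test between coordinate groups carrying the impassable attribute. The minimal Python sketch below shows one plausible form of those operations, assuming markers are 3D points and approximating each coordinate group by its axis-aligned bounding box; it is an editorial illustration, not the patented algorithm, and all function names are invented.

```python
import numpy as np

def marker_distance(a, b):
    """Euclidean distance between two marker points."""
    return float(np.linalg.norm(np.asarray(b, float) - np.asarray(a, float)))

def marker_angle(vertex, p1, p2):
    """Angle in degrees at `vertex` between the rays toward p1 and p2."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def marker_curvature(p0, p1, p2):
    """Curvature (1/radius) of the circle through three marker points,
    via curvature = 4 * triangle_area / (|a| * |b| * |c|)."""
    a = marker_distance(p0, p1)
    b = marker_distance(p1, p2)
    c = marker_distance(p0, p2)
    area = 0.5 * np.linalg.norm(np.cross(np.subtract(p1, p0),
                                         np.subtract(p2, p0)))
    return float(4.0 * area / (a * b * c)) if a * b * c > 0 else 0.0

def interferes(points_a, points_b):
    """Coarse interference test between two 'impassable' coordinate groups
    (N x 3 arrays): do their axis-aligned bounding boxes overlap?"""
    min_a, max_a = points_a.min(axis=0), points_a.max(axis=0)
    min_b, max_b = points_b.min(axis=0), points_b.max(axis=0)
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))
```

A bounding-box overlap is only a coarse first pass; an implementation faithful to claim 3 would test contact at the level of the stored impassable coordinates themselves before signaling interference.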
CN201811150189.7A 2018-09-29 2018-09-29 Three-dimensional scene simulation device and equipment Active CN109242963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811150189.7A CN109242963B (en) 2018-09-29 2018-09-29 Three-dimensional scene simulation device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811150189.7A CN109242963B (en) 2018-09-29 2018-09-29 Three-dimensional scene simulation device and equipment

Publications (2)

Publication Number Publication Date
CN109242963A (en) 2019-01-18
CN109242963B (en) 2023-08-18

Family

ID=65054081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811150189.7A Active CN109242963B (en) 2018-09-29 2018-09-29 Three-dimensional scene simulation device and equipment

Country Status (1)

Country Link
CN (1) CN109242963B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934908B (en) * 2019-02-28 2023-06-27 东华大学 Actual scene modeling method based on unmanned aerial vehicle
CN110503040B (en) * 2019-08-23 2022-05-27 斯坦德机器人(深圳)有限公司 Obstacle detection method and device
CN111145326B (en) * 2019-12-26 2023-12-19 网易(杭州)网络有限公司 Processing method of three-dimensional virtual cloud model, storage medium, processor and electronic device
CN111832104B (en) * 2020-06-24 2023-07-28 深圳市万翼数字技术有限公司 Method for establishing three-dimensional equipment model and related equipment
CN113838209A (en) * 2021-09-09 2021-12-24 深圳市慧鲤科技有限公司 Information management method of target environment and display method of related augmented reality
CN114152241A (en) * 2021-12-07 2022-03-08 中国南方电网有限责任公司超高压输电公司广州局 Operating state monitoring system of high-voltage line emergency repair tower

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277572B2 (en) * 2003-10-10 2007-10-02 Macpearl Design Llc Three-dimensional interior design system
US7728833B2 (en) * 2004-08-18 2010-06-01 Sarnoff Corporation Method for generating a three-dimensional model of a roof structure
CN103778538A (en) * 2012-10-17 2014-05-07 李兴斌 Furniture simulation layout method and furniture simulation layout system
CN108460840A (en) * 2018-01-17 2018-08-28 链家网(北京)科技有限公司 The methods of exhibiting and displaying device of virtual house decoration

Also Published As

Publication number Publication date
CN109242963A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109242963B (en) Three-dimensional scene simulation device and equipment
US10872467B2 (en) Method for data collection and model generation of house
AU2019281667B2 (en) Data collection and model generation method for house
KR102197732B1 (en) Method and apparatus for generating 3d map of indoor space
CN112034830B (en) Map information processing method and device and mobile equipment
US20180204387A1 (en) Image generation device, image generation system, and image generation method
WO2016065063A1 (en) Photogrammetric methods and devices related thereto
EP4268197A1 (en) Computer-implemented recommendation system and visualization for interior design
Huang et al. Network algorithm real-time depth image 3D human recognition for augmented reality
JP5332061B2 (en) Indoor renovation cost estimation system
CN116012564B (en) Equipment and method for intelligent fusion of three-dimensional model and live-action photo
EP4275173B1 (en) Computer-implemented reconstruction of interior rooms
CN109064562A (en) A kind of three-dimensional scenic analogy method
EP4473500A1 (en) Computer-implemented modeling of interior rooms
CN119399368A (en) A three-dimensional modeling method and device based on panoramic vision laser scanner
CN112413827A (en) Intelligent air conditioner and information display method and device thereof
Angladon et al. Room floor plan generation on a project tango device
Nóbrega et al. Design your room: adding virtual objects to a real indoor scenario
US20230351706A1 (en) Scanning interface systems and methods for building a virtual representation of a location
CN115908627B (en) House source data processing method and device, electronic equipment and storage medium
CN118115653A (en) Three-dimensional scene reconstruction method, device, equipment and medium
WO2023174561A1 (en) Generating synthetic interior room scene data for training ai-based modules
EP4275178B1 (en) Computer-implemented augmentation of interior room models
CN115830162B (en) House type diagram display method and device, electronic equipment and storage medium
CN117717295A (en) Scene map construction method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518055 room 2101, Kim Chi Chi house, 1 Tong Ling Road, Taoyuan street, Shenzhen, Guangdong, Nanshan District

Patentee after: Fushi Technology Co.,Ltd.

Country or region after: China

Address before: 518055 room 2101, Kim Chi Chi house, 1 Tong Ling Road, Taoyuan street, Shenzhen, Guangdong, Nanshan District

Patentee before: SHENZHEN FUSHI TECHNOLOGY Co.,Ltd.

Country or region before: China