
CN114820968B - Three-dimensional visualization method and device, robot, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114820968B
CN114820968B (Application CN202210461197.3A)
Authority
CN
China
Prior art keywords
dimensional
information
position information
map
dimensional map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210461197.3A
Other languages
Chinese (zh)
Other versions
CN114820968A
Inventor
许铭淏
程冉
孙涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Robozone Technology Co Ltd
Original Assignee
Midea Robozone Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Robozone Technology Co Ltd filed Critical Midea Robozone Technology Co Ltd
Priority to CN202210461197.3A
Publication of CN114820968A
Application granted
Publication of CN114820968B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract


This application discloses a three-dimensional visualization method and apparatus, a robot, an electronic device, and a storage medium, all belonging to the field of robot map processing and perception technology. The three-dimensional visualization method includes obtaining spatial information and positional information of objects in a two-dimensional map, generating three-dimensional representation information based on the spatial information and positional information, constructing a three-dimensional model of the object based on the three-dimensional representation information, rendering the three-dimensional model, obtaining a three-dimensional space corresponding to the two-dimensional map, and visually displaying it.

Description

Three-dimensional visualization method and device, robot, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of robot map processing and perception, and particularly relates to a three-dimensional visualization method and device, a robot, electronic equipment and a storage medium.
Background
In the field of robotics, processing and expressing map information is a crucial link. Because of the structural characteristics of commercial lidar, mainstream robots simplify their map representation into a two-dimensional grid map. In the related art, robot maps are therefore represented as two-dimensional grid maps, and at the client the user can only see a two-dimensional planar image resembling a top view, which makes for a poor user experience. In addition, the two-dimensional planar image contains a large number of noise points, so it cannot be rendered in three dimensions directly and three-dimensional visualization of the map cannot be achieved.
Disclosure of Invention
The embodiments of the present application aim to provide a three-dimensional visualization method and apparatus, a robot, an electronic device, and a storage medium that solve the following problems of the related art: because the robot's map is represented as a two-dimensional grid map, the user can only see a two-dimensional planar image, giving a poor user experience; and the two-dimensional planar image cannot be rendered in three dimensions directly, so three-dimensional visualization of the map cannot be achieved.
In a first aspect, an embodiment of the present application provides a three-dimensional visualization method for a two-dimensional map, including acquiring object space information and object position information in the two-dimensional map, and generating three-dimensional characterization information based on the object space information and the object position information. Based on the three-dimensional characterization information, an object three-dimensional model is constructed, the three-dimensional model is rendered, a three-dimensional space corresponding to the two-dimensional map is obtained, and visual display is performed.
In a second aspect, an embodiment of the present application provides a three-dimensional visualization device for a two-dimensional map, including a generating module and a display module. The generation module is used for acquiring object space information and object position information in the two-dimensional map, generating three-dimensional representation information based on the object space information and the object position information, and the display module is used for constructing an object three-dimensional model, rendering the three-dimensional model to obtain a three-dimensional space corresponding to the two-dimensional map and performing visual display.
In a third aspect, an embodiment of the present application provides a robot, which performs three-dimensional visualization on a two-dimensional map by using the three-dimensional visualization method of the two-dimensional map according to the first aspect.
In a fourth aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor which, when executed by the processor, implement the steps of the three-dimensional visualization method for a two-dimensional map according to the first aspect.
In a fifth aspect, embodiments of the present application provide a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the three-dimensional visualization method for a two-dimensional map according to the first aspect.
According to the method, three-dimensional characterization information of the two-dimensional map is constructed, three-dimensional modeling and rendering are performed according to that information, and the resulting three-dimensional space is obtained and visualized. A more realistic three-dimensional visual rendering is thus applied to the two-dimensional map, giving the user a more lifelike experience. This embodiment places no restriction on how the two-dimensional map is acquired: any two-dimensional map can be visually displayed in this way, so the approach is simple, effective, highly customizable, broadly versatile, and widely applicable.
Drawings
FIG. 1 shows the first schematic flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 2 shows the second schematic flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 3 shows the third schematic flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 4 shows the fourth schematic flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 5 shows the fifth schematic flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 6 shows the sixth schematic flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 7 shows the seventh schematic flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 8 shows the first block diagram of the three-dimensional visualization apparatus provided by an embodiment of the present application;
FIG. 9 shows a schematic diagram of the overall scheme of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 10 shows a schematic diagram of secondary editing of the three-dimensional space provided by an embodiment of the present application;
FIG. 11 shows a schematic diagram of three-dimensional theme editing and switching provided by an embodiment of the present application;
FIG. 12 shows a flowchart of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 13 shows a schematic diagram of a two-dimensional grid map provided by an embodiment of the present application;
FIG. 14 shows a schematic diagram of external building contour information provided by an embodiment of the present application;
FIG. 15 shows a schematic diagram of internal building contour information provided by an embodiment of the present application;
FIG. 16 shows a wireframe schematic of the three-dimensional representation provided by an embodiment of the present application;
FIG. 17 shows the first three-dimensional map schematic of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 18 shows the second three-dimensional map schematic of the three-dimensional visualization method provided by an embodiment of the present application;
FIG. 19 shows a schematic diagram of object position adjustment provided by an embodiment of the present application;
FIG. 20 shows a schematic diagram of object rotation adjustment provided by an embodiment of the present application;
FIG. 21 shows a schematic diagram of object stretch adjustment provided by an embodiment of the present application;
FIG. 22 shows the second block diagram of the three-dimensional visualization apparatus provided by an embodiment of the present application;
FIG. 23 shows a block diagram of an electronic device according to an embodiment of the present application;
FIG. 24 shows a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
The correspondence between the reference numerals and the component names in fig. 8 to 24 is:
100: three-dimensional visualization apparatus; 110: three-dimensional representation generation module; 120: web-side rendering module; 130: secondary editing module; 140: theme editing and switching module; 200: two-dimensional grid map; 202: structured text data; 204: three-dimensional space rendering; 206: secondary editing of the three-dimensional space; 208: three-dimensional theme editing and switching; 210: wooden box; 300: three-dimensional visualization apparatus; 310: generation module; 320: display module; 400: electronic device; 402: processor; 404: memory; 1100: electronic device; 1101: radio frequency unit; 1102: network module; 1103: audio output unit; 1104: input unit; 11041: graphics processor; 11042: microphone; 1105: sensor; 1106: display unit; 11061: display panel; 1107: user input unit; 11071: touch panel; 11072: other input devices; 1108: interface unit; 1109: memory; 1110: processor.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The three-dimensional visualization method and apparatus, the robot, the electronic device and the storage medium provided by the embodiments of the present application are described in detail below with reference to fig. 1 to 24 by means of specific embodiments and application scenarios thereof.
In the embodiment of the present application, a three-dimensional visualization method of a two-dimensional map is provided, and fig. 1 shows one of flow diagrams of the three-dimensional visualization method provided in the embodiment of the present application, where, as shown in fig. 1, the three-dimensional visualization method of the two-dimensional map includes:
Step 102, acquiring object space information and object position information in a two-dimensional map, and generating three-dimensional characterization information based on the object space information and the object position information.
And 104, constructing a three-dimensional object model based on the three-dimensional characterization information, rendering the three-dimensional object model to obtain a three-dimensional space corresponding to the two-dimensional map, and performing visual display.
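The two steps above can be sketched as a minimal pipeline. This is an illustrative sketch only, with toy stand-ins for the real map processing and WebGL rendering; the function names and data shapes are assumptions, not from the patent.

```python
def generate_characterization(space_info, position_info):
    # Step 102: combine each object's space info and position info
    # into one structured characterization record.
    return {"objects": [{"space": s, "position": p}
                        for s, p in zip(space_info, position_info)]}

def build_and_render(characterization):
    # Step 104: construct one model entry per object; a real implementation
    # would hand these to a rendering engine for visual display.
    return [f"mesh<{o['space']}@{o['position']}>"
            for o in characterization["objects"]]

charact = generate_characterization(["wall"], [(1.0, 2.0)])
models = build_and_render(charact)
```

The point of the split is that rendering consumes only the characterization record, never the raw (noisy) two-dimensional map.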
In the related art, the robot's map representation is mostly simplified to a two-dimensional grid map, so the user can only see a two-dimensional planar image resembling a top view at the client. Moreover, because of shortcomings in the two-dimensional grid-map construction algorithm and the limited precision of the lidar, the two-dimensional planar map contains a large number of noise points and cannot be rendered in three dimensions directly. How to visualize the map better to improve user experience, and how to find a general and efficient map-rendering scheme, have therefore become pressing pain points.
It can be understood that the two-dimensional map in this embodiment is obtained by preprocessing an original two-dimensional laser map, that is, a two-dimensional map with a well-defined metric scale.
It can be understood that in the two-dimensional map, there are different types of objects, and according to the conditions of the objects to be displayed, the map division corresponding to the different types of objects to be displayed is obtained.
In this embodiment, the two-dimensional map may be a two-dimensional grid map: the position information of the differently partitioned regions of the grid map serves as the object space information, the orientation information of those regions serves as the object position information, and the three-dimensional characterization information is a structured text in which the object space information and object position information are stored.
According to this embodiment, three-dimensional characterization information of the two-dimensional map is constructed, three-dimensional modeling and rendering are performed according to that information, and the resulting three-dimensional space is obtained and visualized. A more realistic three-dimensional visual rendering is thus applied to the two-dimensional map, giving the user a more lifelike experience. This embodiment places no restriction on how the two-dimensional map is acquired: any two-dimensional map can be visually displayed in this way, so the approach is simple, effective, highly customizable, broadly versatile, and widely applicable.
In some embodiments of the present application, FIG. 2 shows the second schematic flowchart of the three-dimensional visualization method provided by the embodiment of the present application. As shown in FIG. 2, acquiring object space information and object position information in the two-dimensional map and generating three-dimensional characterization information based on them specifically includes:
step 202, acquiring object space information in a two-dimensional map.
And 204, obtaining the type corresponding to the object.
Step 206, obtaining first position information of the object center point under the absolute scale.
Step 208, obtaining second position information of the object based on the object space information and the type corresponding to the object, wherein the object position information comprises the first position information and the second position information.
Step 210, performing height alignment on the first position information.
Step 212, performing horizontal center alignment on the second position information.
Step 214, generating three-dimensional characterization information based on the aligned first position information and second position information.
In this embodiment, for example, when the two-dimensional map is a two-dimensional grid map, the grid map carries measurement information; the map is partitioned by category to obtain the different type partitions, and their numbers are stored in a category field and integrated into the three-dimensional characterization information.
In this embodiment, orientation information of objects in the two-dimensional grid map of different divisions is obtained, and is converted into rotation information, that is, first position information of a center point of the object in an absolute scale.
After determining the object space information and the object corresponding type, second position information of the object can be obtained.
The first position information needs to be aligned in height, mapping the two-dimensional center points of different objects to three-dimensional center points at their respective heights. The second position information requires horizontal center alignment: the outer bounding box is computed from the maximum and minimum x-y coordinates over all current points, and the coordinates of all points are then horizontally aligned to its center.
The first position information after the height alignment and the second position information after the horizontal center alignment are stored in the three-dimensional characterization information.
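The two alignment steps can be sketched as follows. This is an illustrative sketch: placing the three-dimensional centre at half the object's height is an assumption of this sketch, not a detail stated by the patent.

```python
def horizontal_center_align(points):
    """Shift 2-D contour points so the bounding-box centre sits at the origin.

    Mirrors the described step: take the min/max x and y over all current
    points and translate every point by minus the box centre.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx = (min(xs) + max(xs)) / 2.0
    cy = (min(ys) + max(ys)) / 2.0
    return [(x - cx, y - cy) for x, y in points]

def height_align(center_2d, object_height):
    """Map a 2-D centre point to a 3-D centre point.

    Assumption: the 3-D centre is placed at half the object's height.
    """
    x, y = center_2d
    return (x, y, object_height / 2.0)
```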
In this embodiment, the three-dimensional characterization information is a structured text used to store this content. For example, it may be assembled into a JSON-format three-dimensional characterization file and transmitted over the network to the rendering end for subsequent rendering.
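A JSON characterization file of this kind might look as follows. The field names and values are illustrative assumptions; the patent only specifies that the structured text stores the category, the aligned position information, and the object dimensions.

```json
{
  "objects": [
    {
      "category": 1,
      "rotation": 0.0,
      "center": [1.2, 3.4, 1.25],
      "height": 2.5,
      "thickness": 0.2,
      "length": 4.0,
      "contour": [[0.0, 0.0], [4.0, 0.0]]
    }
  ]
}
```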
In this embodiment, a three-dimensional characterization file is generated from the information acquired from the two-dimensional map, and three-dimensional visual display is realized through that file rather than by generating the three-dimensional map directly from the two-dimensional map. This avoids the influence of the excessive noise points that lidar accuracy introduces into the two-dimensional map, so the subsequently displayed three-dimensional map is more accurate and the rendering effect is better.
In some embodiments of the present application, obtaining first position information of a center point of an object in an absolute scale specifically includes:
And acquiring the rotation angle and the position coordinate of the center point of the object under the absolute scale.
For example, when the two-dimensional map is a two-dimensional grid map, the rotation angle and the position coordinate of the center point of the object under the absolute scale are calculated according to the orientation information of the grid map divided differently, wherein the first position information includes the rotation angle and the position coordinate of the center point of the object under the absolute scale.
In this embodiment, the two-dimensional map has explicit measurement information, and can simply obtain the rotation angle and the position coordinate of the center point of the object under the absolute scale.
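Under a simple grid-to-metric calibration, obtaining the centre pose can be sketched as below. The function name, the use of a facing vector for orientation, and the `resolution`/`origin` parameters are assumptions of this sketch.

```python
import math

def absolute_pose(cell_center, direction, resolution, origin=(0.0, 0.0)):
    """Rotation angle and position coordinates of an object centre at absolute scale.

    `resolution` is metres per grid cell; `direction` is the object's 2-D facing
    vector taken from the partition's orientation information.
    """
    x = origin[0] + cell_center[0] * resolution
    y = origin[1] + cell_center[1] * resolution
    angle = math.atan2(direction[1], direction[0])  # rotation about the vertical axis
    return angle, (x, y)
```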
In the embodiment, by acquiring the rotation angle and position coordinate information of the center point of the object under the absolute scale, the three-dimensional model of the object is convenient to build later and render, so that three-dimensional visual display is realized.
In some embodiments of the present application, fig. 3 shows a third flowchart of the three-dimensional visualization method provided by the embodiment of the present application, and as shown in fig. 3, the method for obtaining second position information of an object based on object space information and a type corresponding to the object specifically includes:
step 302, based on the type, the height and thickness of the object are obtained.
Step 304, based on the object space information, the object length is obtained.
For example, when the two-dimensional map is a two-dimensional grid map, the height and thickness of the object are calculated according to the types of the objects of the grid map divided differently. And calculating the length of the object according to the object space information, wherein the second position information comprises the height, the thickness and the length of the object.
In this embodiment, the height and thickness of the object are determined from the object's type, the length of the object is determined from the object space information, and the second position information is therefore easy to obtain.
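A sketch of this step: height and thickness come from a per-type table, and the length is accumulated along the contour points in the object space information. The type names and dimension values are illustrative assumptions.

```python
import math

# Assumed per-type dimensions in metres; the concrete values are illustrative.
TYPE_DIMENSIONS = {
    "exterior_wall": {"height": 2.8, "thickness": 0.24},
    "interior_wall": {"height": 2.8, "thickness": 0.12},
}

def second_position_info(object_type, contour):
    """Height/thickness from the object's type; length from its contour points."""
    dims = TYPE_DIMENSIONS[object_type]
    # Total polyline length of the contour.
    length = sum(math.dist(contour[i], contour[i + 1])
                 for i in range(len(contour) - 1))
    return {"height": dims["height"],
            "thickness": dims["thickness"],
            "length": length}
```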
In the embodiment, the three-dimensional model of the object is conveniently built later and rendered through the height, the thickness and the length of the object, so that three-dimensional visual display is realized.
In some embodiments of the present application, fig. 4 shows a fourth flowchart of a three-dimensional visualization method provided by the embodiment of the present application, where, as shown in fig. 4, before obtaining a type corresponding to an object, the method further includes:
Step 402, building a correspondence between objects and types.
Step 404, building the correspondence between the type and the height and thickness.
It will be appreciated that in a two-dimensional grid map, some different kinds of objects share the same height and thickness, so they can be grouped into a unified type.
It can be understood that a plurality of objects can be arranged on the two-dimensional map to be displayed, and the corresponding relation between the objects and the types can be established first, and then the corresponding relation between the types and the heights and the thicknesses can be established. In this embodiment, by classifying different objects, the correspondence between the objects and the height and thickness thereof may be simplified, the calculation may be simplified, and the usability of the method may be increased.
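The two correspondences (Step 402: object to unified type; Step 404: type to height and thickness) can be sketched as two lookup tables. All names and dimension values here are illustrative assumptions, not patent values.

```python
# Step 402: map concrete objects to a unified type.
OBJECT_TO_TYPE = {
    "outer_wall": "wall",
    "partition_wall": "wall",
    "doorway": "opening",
}

# Step 404: map each unified type to (height m, thickness m).
TYPE_TO_DIMENSIONS = {
    "wall": (2.8, 0.2),
    "opening": (2.1, 0.1),
}

def dimensions_for(obj_name):
    """Look up an object's height and thickness via its unified type."""
    return TYPE_TO_DIMENSIONS[OBJECT_TO_TYPE[obj_name]]
```

Grouping objects into unified types this way keeps the dimension table small even as the number of object kinds grows.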
In some embodiments of the application, the object space information includes an exterior building contour and/or an interior building contour.
It will be appreciated that the two-dimensional map of the present embodiment is a preprocessed two-dimensional laser map. The preprocessing mainly consists of removing noise from the original two-dimensional map through morphological image processing, yielding a two-dimensional map with a well-defined metric scale.
It will be appreciated that the spatial information in this embodiment may include, for example, exterior building contours and/or interior building contours, which may refer to walls. The object space information is mapped onto a two-dimensional map having absolute dimensions.
It should be noted that the object types applicable to this embodiment include, but are not limited to, these two display types; only these two types to be displayed exist in current common grid maps. This embodiment is not limited by the number of original types and can be adaptively extended according to the number of semantic object types contained in the map.
In this embodiment, the external building contour and/or internal building contour commonly used in related-art two-dimensional grid maps serve as the object space information, which effectively simplifies the method steps while retaining extensibility, so that the object space information can conveniently be extended in the future.
In some embodiments of the present application, fig. 5 shows a fifth flowchart of a three-dimensional visualization method provided by the embodiment of the present application, as shown in fig. 5, a three-dimensional model of an object is constructed based on three-dimensional characterization information, the three-dimensional model is rendered, a three-dimensional space corresponding to a two-dimensional map is obtained, and visualization is performed, and specifically includes:
Step 502, constructing a polygon mesh model at the corresponding position and orientation based on the three-dimensional characterization information.
Step 504, rendering the polygon mesh model with a preset theme style to obtain the three-dimensional space corresponding to the two-dimensional map, and visually displaying it.
In this embodiment, the three-dimensional characterization information is received and its format is parsed; the three-dimensional characterization information is a structured text, which may be in JSON format. This embodiment builds a polygon mesh (Mesh) model, which can be modeled and rendered on the web side, and finally performs the visual display through the web page.
Specifically, from the content of the three-dimensional characterization information, a Mesh model can be constructed at the corresponding position and orientation and then rendered with a preset theme style. The preset theme style may be one set in advance by the user, a default theme style, or the theme style used in the most recent rendering.
In the embodiment, based on the three-dimensional characterization information, a three-dimensional model is constructed and rendered, namely, a more realistic three-dimensional visual rendering is performed on the two-dimensional planar grid map, so that a more realistic experience is provided for a user.
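Mesh construction from the characterization can be illustrated at its simplest by extruding one two-dimensional contour segment into a vertical quad. This is a toy stand-in for the patent's polygon-mesh construction, not its actual implementation.

```python
def extrude_segment(p0, p1, height):
    """Extrude one 2-D wall segment into a vertical quad.

    Corners are listed bottom edge first, then top edge reversed so the
    quad winds consistently; a full mesh would also add caps and sides.
    """
    (x0, y0), (x1, y1) = p0, p1
    return [
        (x0, y0, 0.0), (x1, y1, 0.0),        # bottom edge at z = 0
        (x1, y1, height), (x0, y0, height),  # top edge at z = height
    ]
```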
In some embodiments of the present application, rendering the polygon mesh model with a preset theme style specifically includes:
rendering the polygon mesh model on the web side through a rendering engine using the preset theme style.
Related-art WebGL-based image rendering directly imports an existing model, then parses and renders it. However, the two-dimensional visual map produced by the robot in this embodiment has no corresponding three-dimensional model, and the excessive noise points in the two-dimensional map prevent it from being rendered directly.
Related-art three-dimensional modeling and rendering methods construct three-dimensional space from voxels or depth information and determine the position of objects in space from the camera's pose information. However, an existing two-dimensional grid map lacks three-dimensional height information and cannot accurately determine the spatial structure or the positions of objects in space, so the map cannot be rendered in three dimensions that way.
In this embodiment, for example, the rendering engine may use WebGL, render the Mesh object through WebGL according to a preset theme style, and complete the visual display of the two-dimensional map at the web page end.
In this embodiment, if the user has not set a preset theme style, the display uses the default style.
Because this embodiment adopts WebGL as the rendering engine and renders the three-dimensional characterization information on the web side, the method has strong versatility and a wide application range.
In some embodiments of the present application, before rendering the polygon mesh model with the preset theme style, the three-dimensional visualization method of the two-dimensional map further includes:
At least one theme style is constructed.
In this embodiment, before rendering, a theme style needs to be built, which can be edited by the user, or multiple theme styles can be preset for the user to select.
For example, when editing a theme style, the user may edit the styles of different object types and save the custom theme, covering attributes including but not limited to color, texture map, thickness, whether shadows are cast, whether shadows are received, reflectivity, transparency, and illumination model.
For example, when constructing a theme style for the grid map, the user may edit it by clicking the attribute information corresponding to the different category partitions mentioned in the "rendering flow of the grid map". By generating the relevant Mesh-model attributes for the different partitions, such as surface roughness, brightness, illumination model, geometric information, body color, contour color, emissive color, and whether shadows are received, the user can freely combine the desired color scheme and style. All classification-related attributes are stored in the user-defined theme style characterization and in the three-dimensional characterization; in the subsequent model rendering process, the method of this embodiment preferentially selects the corresponding style for rendering. In implementation, during WebGL rendering this information is loaded and parsed, combined with the three-dimensional space characterization information, and the final rendering effect is produced from the geometric shapes and illumination models of the attributes of each category partition.
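A theme style of this kind can be modelled as a small record of render attributes, with the preferential selection of a user style over the default. The field set and default values are illustrative assumptions drawn from the attribute list above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThemeStyle:
    """A subset of the theme attributes listed above; the field set is illustrative."""
    color: str = "#cccccc"
    texture_map: Optional[str] = None
    cast_shadow: bool = True
    receive_shadow: bool = True
    reflectivity: float = 0.1
    transparency: float = 0.0

def pick_style(user_theme, default=ThemeStyle()):
    """Prefer the user's custom theme; fall back to the default style."""
    return user_theme if user_theme is not None else default
```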
This embodiment supports user-constructed theme styles, which increases the practicability of the method: the user can set theme styles according to personal habits and requirements, which simplifies user operation and improves user experience.
In some embodiments of the present application, fig. 6 shows a sixth flowchart of a three-dimensional visualization method provided by the embodiment of the present application, where, as shown in fig. 6, the three-dimensional visualization method of a two-dimensional map further includes:
Step 602, detecting that a theme style to be switched has been selected.
Step 604, switching to the theme style to be switched and displaying it.
In this embodiment, the user may select an existing theme style to switch the display style.
This embodiment can adopt web page end WebGL to render the three-dimensional model, and supports selecting a preset theme style during rendering. After rendering, the rendered three-dimensional map can be edited a second time; that is, the user performs a theme style switching operation, and when it is detected that the user has selected a theme style to be switched, the currently displayed theme style is changed, in response, to the theme style selected by the user.
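The switch-and-display flow above can be sketched as a small state holder that reacts to a detected selection. This is a minimal illustration, not the embodiment's actual implementation; names such as `ThemeSwitcher` and the fallback behavior for unknown theme names are assumptions:

```python
class ThemeSwitcher:
    """Minimal sketch of steps 602/604: when a theme style to be switched is
    selected, replace the currently displayed theme and hand the new styles
    to the renderer for re-display."""
    def __init__(self, themes, default="default"):
        self.themes = themes    # theme name -> per-category style dict
        self.current = default  # name of the theme currently displayed

    def on_theme_selected(self, name):
        # respond to the detected selection of a theme style to be switched
        if name in self.themes:
            self.current = name
            return self.themes[name]      # styles the renderer would apply
        return self.themes[self.current]  # unknown name: keep current theme
```

In a web page end, `on_theme_selected` would be wired to the UI event that detects the user's selection, and its return value would trigger re-rendering with the new styles.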
According to this embodiment, the three-dimensional model is rendered at the web page end with WebGL, and both model rendering in different theme styles and later secondary editing of the three-dimensional map are supported. More realistic three-dimensional visual rendering is performed on the robot's planar grid map, providing a more real experience for the user, and the ability to switch theme style settings simplifies user operation and improves user experience.
In some embodiments of the present application, fig. 7 shows a seventh flowchart of the three-dimensional visualization method provided by the embodiment of the present application, as shown in fig. 7, after obtaining a three-dimensional space corresponding to a two-dimensional map and performing visualization display, the three-dimensional visualization method of the two-dimensional map further includes:
In step 702, an object to be added is obtained.
Step 704, adjusting third position information of the object to be added.
And step 706, adding the object to be added to the three-dimensional space based on the third position information, and performing visual display.
In this embodiment, a user may add a custom object, that is, an object to be added, to the rendered three-dimensional space.
In this embodiment, the user may select a built-in object as the object to be added and adjust the third position information of the object to be added; for example, the third position information may include the position, rotation angle, scaling size, and the like of the object to be added. After the third position information is determined, the object to be added is displayed.
In this embodiment, the object information to be added, i.e. the newly added three-dimensional object information, may be saved in the three-dimensional characterization information, i.e. the structured text file. When rendering is performed next time, the corresponding three-dimensional characterization information edited before can be recovered.
In this embodiment, a locking restriction may be applied to the basic information of the object to be added; for example, the y-axis height may be locked so that the object stays level with the map reference plane in the three-dimensional space.
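The third position information (position, rotation angle, scaling size) and the y-axis locking restriction can be sketched as follows. This is a hedged illustration: the function name, the dictionary layout, and the `GROUND_Y` constant are assumptions, not the embodiment's actual data model:

```python
GROUND_Y = 0.0  # height of the map reference plane; value assumed for illustration

def place_object(obj, position, rotation, scale, lock_y=True):
    """Apply the 'third position information' (position, rotation angle,
    scaling size) to an object to be added. With lock_y enabled, the y-axis
    height is locked level with the map reference plane."""
    x, y, z = position
    if lock_y:
        y = GROUND_Y  # locking restriction on the basic information
    obj.update({"position": [x, y, z],
                "rotation": list(rotation),
                "scale": list(scale)})
    return obj
```

After placement, the updated object record would be saved into the three-dimensional characterization information (the structured text file), consistent with the saving behavior described above.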
This embodiment supports adding objects to the three-dimensional visual map, which simplifies user operation and improves user experience.
Specific example 1:
The main purpose of this embodiment is to realize three-dimensional visual rendering of a two-dimensional grid map at the web page end, while supporting editability of the generated three-dimensional map and configurability of its display style. The related art does not provide a technical scheme that directly performs three-dimensional visualization on a robot's two-dimensional grid map; there exist only visualization of three-dimensional grid maps and three-dimensional space construction based on depth maps. Nor does the related art provide, for a three-dimensional map generated from a two-dimensional grid map, a later editing function or multi-theme configuration and switching functions. This embodiment can act directly on an existing two-dimensional grid map and generate a three-dimensional space with good user experience.
Fig. 8 shows one of the structural block diagrams of the three-dimensional visualization device provided by the embodiment of the application, and as shown in fig. 8, the three-dimensional visualization device 100 includes a three-dimensional representation generating module 110, a web page end rendering module 120, a secondary editing module 130 and a theme editing switching module 140.
The three-dimensional representation generating module 110 is configured to embed key information of the two-dimensional grid map into the three-dimensional representation as an intermediary for information interaction. The web page end rendering module 120 is configured to parse the three-dimensional representation generated by the three-dimensional representation generating module 110 and render it with a WebGL engine. The secondary editing module 130 is configured so that, in the three-dimensional space rendered by WebGL, the user can insert three-dimensional objects with user-defined rotation angles and positions. The theme editing and switching module 140 is configured to provide configurable theme styles and a function of dynamically switching themes.
The precondition of this embodiment is an original two-dimensional laser map that has been preprocessed. The preprocessing may adopt an indoor frame extraction method for two-dimensional solid-state laser mapping; the preprocessing process mainly removes noise from the original two-dimensional grid map through a series of algorithms such as image-processing morphology, obtains the outer contour of the building and the contours of the internal walls, and maps them to a two-dimensional map with absolute scale. The preprocessing method described above is merely an example that satisfies the precondition of this embodiment and is not to be construed as limiting it; this embodiment is applicable to the visualization of any output that is a two-dimensional map with an explicit metric.
The three-dimensional representation generating module 110 obtains, from the precondition, the object space information of two types: the outer building contour and the inner building contour. It should be noted that the object types applicable to this embodiment include, but are not limited to, these two display types; only these two types to be displayed exist in the current common grid map, and this embodiment is not limited by the number of original types, which can be adaptively expanded according to the number of semantic object types contained in the map.
The three-dimensional representation generating module 110 sets corresponding type labels for different types of objects according to the object space information, calculates the rotation angle and position coordinates of the center point of the object under the absolute scale, sets the corresponding height and thickness of the object according to the corresponding types, and calculates the length of the object according to the object space information.
The position coordinates need to be height-aligned: the three-dimensional representation generating module 110 maps the two-dimensional center points of different objects to corresponding three-dimensional center points at different heights. The above position information (together with the corresponding height, thickness, and length of the object) also requires horizontal center alignment. The three-dimensional representation generating module 110 counts the maximum and minimum xy two-dimensional boundary coordinates of all current points (the outermost boundary information) and aligns the coordinates of all points to the horizontal center.
The three-dimensional representation generating module 110 combines all the output results into a JSON-format three-dimensional representation file (three-dimensional representation of the structured text) which is transmitted to the rendering end through a network.
After the webpage end rendering module 120 obtains the three-dimensional representation of the structured text generated by the three-dimensional representation generating module 110, format analysis is performed through the analysis module, and a Mesh object corresponding to the position and the direction is constructed. The web page end rendering module 120 performs WebGL rendering of the corresponding style of the corresponding type on the Mesh object according to the display theme style (if the user does not have the custom theme style, the display is performed according to the default style).
As shown in fig. 9, the above procedure of this embodiment is to generate structured text data 202 (three-dimensional representation) according to a two-dimensional grid map 200, and render 204 in three-dimensional space.
As shown in fig. 10, the user implements a function of adding a custom object to the already rendered three-dimensional space (i.e., the secondary editing 206 of the three-dimensional space) through the secondary editing module 130. The user can select the built-in object, adjust the position, rotation angle, zoom size, etc. of the object, place the object in the three-dimensional space, and the newly added three-dimensional object information is also saved in the structured text file (three-dimensional characterization file).
The user may edit different types of object styles and save the custom theme (including, but not limited to, color, texture map, thickness, whether shadows are generated, whether shadows are accepted, reflectivity, transparency, illumination model, etc.) through the theme edit switch module 140. The user may also switch the display style by selecting an existing theme style through the theme editing switching module 140. As shown in fig. 11, the user may edit and switch the theme style (i.e., edit and switch 208 the three-dimensional theme).
In the embodiment, starting from basic grid map information processing, a three-dimensional representation capable of being efficiently transmitted is constructed, and analysis and visual rendering of the three-dimensional representation are performed on a webpage end through a WebGL rendering engine. The embodiment also provides a set of three-dimensional map editors capable of being edited secondarily and a set of configurable multi-style display theme switching device.
According to the embodiment, the three-dimensional representation is constructed on the two-dimensional grid map, the three-dimensional representation is rendered on the webpage end WebGL to form the three-dimensional model, the functions of model rendering of different theme styles and later secondary three-dimensional map editing are supported, the planar grid map of the robot is subjected to more realistic three-dimensional visual rendering, more real experience is provided for a user, and the method has very positive promotion effects on secondary verification of map effects and rapid three-dimensional space modeling. Meanwhile, the embodiment has good universality, the front input condition of the embodiment is not limited to the two-dimensional grid map generated by the laser radar, and any two-dimensional map can be visually displayed by the method of the embodiment. The method of the embodiment is simple and effective, high in customization degree, strong in universality and wide in application range.
Specific example 2:
the embodiment is a method for generating a three-dimensional representation of a corresponding three-dimensional space by utilizing contour information provided by a two-dimensional grid map, and performing three-dimensional rendering on the representation through a WebGL rendering engine to obtain an interactable three-dimensional map. As shown in fig. 12, the three-dimensional visualization method of the two-dimensional map includes:
step 802, rendering of a three-dimensional map.
In the rendering process of the three-dimensional map, the map divisions of the different object types to be displayed are first obtained from the two-dimensional map. Fig. 13 is an exemplary grid map with two division categories: the inner contour of the building and the outer contour of the building; the outer contour of the building is shown in fig. 14, and the inner contour of the building is shown in fig. 15. It should be noted that this embodiment is applicable to the visualization of multi-category maps with an unlimited number of categories; the number and types of categories in the exemplary map should not be understood as a limitation on this embodiment, and only two categories are shown to illustrate the process and method of this embodiment. After a category division similar to that shown in fig. 14 and fig. 15 is obtained, the corresponding three-dimensional representation is generated according to the measurement information of the grid map, which specifically includes:
(1) acquiring the grid map position information of the different partitions and storing it in a structured text (i.e., the three-dimensional representation);
(2) acquiring the orientation information of the different partitions in the grid map, converting it into rotation information, and storing it in the structured text (i.e., the three-dimensional representation);
(3) acquiring the length information of the different partitions in the grid map, recording it as the corresponding length information, and storing it in the structured text (i.e., the three-dimensional representation);
(4) acquiring the types of the different partitions, storing the numbers of the different partitions in a category field, and integrating them into the structured text (i.e., the three-dimensional representation);
(5) centering the position information of steps (1) and (2) according to the maximum and minimum positions of all divisions of the grid map, i.e., aligning all positions to the horizontal center;
(6) storing all the structured text information;
(7) parsing the format of the structured-text three-dimensional representation and constructing Mesh objects with the corresponding positions and directions, then performing WebGL rendering on the Mesh objects in the style corresponding to each type according to the display theme style. The interface diagram of the three-dimensional representation wireframe model generated on a page at the computer page end is shown in fig. 16. A three-dimensional map obtained by rendering the default theme style through WebGL, as generated on a page at the computer page end, is shown in fig. 17. An interface diagram of a three-dimensional map rendered by WebGL in another theme style is shown in fig. 18. Figs. 17 and 18 are rendered with different theme styles, so the three-dimensional map may differ in color; for example, fig. 17 may be gray and fig. 18 may be blue.
In the above flow, steps (1) and (2) obtain the position and rotation information of all three-dimensional representations, while steps (3) and (4) store the information of the Mesh used for rendering. Step (5) performs the centering operation of the entire three-dimensional map, because the metric and position of the grid map may not be suitable for direct rendering display.
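The parsing side of step (7), turning the structured text back into per-object mesh descriptors of the kind a WebGL renderer would construct, can be sketched as follows. The JSON field names and the box-geometry mapping are illustrative assumptions, not the embodiment's actual Mesh construction:

```python
import json

def parse_representation(text):
    """Parse the structured-text three-dimensional representation and build
    per-object mesh descriptors (a stand-in for the Mesh objects the WebGL
    renderer would construct at the web page end)."""
    meshes = []
    for e in json.loads(text)["objects"]:
        meshes.append({
            "geometry": "box",
            # x: length along the wall, y: height, z: thickness
            "size": [e["length"], e["height"], e["thickness"]],
            # lift the box so it sits on the map reference plane (half height)
            "position": [e["position"][0], e["height"] / 2, e["position"][1]],
            "rotation_y": e["rotation"],   # rotation about the vertical axis
            "category": e["category"],     # used to look up the theme style
        })
    return meshes
```

Each descriptor carries exactly what the renderer needs: geometry dimensions, placement, rotation, and the category under which the display theme style is looked up.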
Step 804, performing secondary editing processing on the three-dimensional map.
In the secondary editing processing module of the grid map, the user can select an object to be added and edit its position, size, and angle in the three-dimensional map. Assuming the user chooses to add a wooden box 210, the user can adjust the position of the wooden box 210 along the x-axis, y-axis, and z-axis; the interface diagram for adjusting the position of the wooden box 210 along the three axes on a page at the computer page end is shown in fig. 19. The user can also adjust the rotation angles of the wooden box 210 in the three directions through circular rings; the corresponding interface diagram is shown in fig. 20. The user may also stretch the wooden box 210 in the three directions of the x-axis, y-axis, and z-axis, as shown in fig. 21. In summary, secondary editing can be performed by dragging the x-axis, y-axis, and z-axis handles, or the circular rings, each shown in one of three colors. The three axes of the object (x, y, and z) may correspond to three different colors, for example red, blue, and green, and the rings may likewise be given three colors corresponding to rotation about the three axes. In this embodiment, the corresponding editing result is likewise stored in the three-dimensional representation, and the three-dimensional representation information from the last editing is restored at the next rendering.
In practical applications of this embodiment, a certain locking restriction can be applied to the basic information of the added object; that is, the y-axis (green) height of the object added by the user can be preset as fixed, keeping it locked at a height flush with the map reference plane in the three-dimensional space, and so on.
Step 806, editing and switching the multi-theme style of the three-dimensional map.
In the multi-theme editing and switching process of the grid map, the user can edit the attribute information corresponding to the different category divisions mentioned in the "rendering flow of the grid map" by clicking on it. By setting the relevant attributes of the Mesh generated for each division, such as surface roughness, brightness, illumination model, geometric information, body color, contour color, emissive color, and whether shadows are received, the user can freely combine the desired color scheme and style. All the relevant attributes of each category are stored in the custom theme style representation and the three-dimensional representation; in the subsequent model rendering process, this embodiment preferentially selects the corresponding style for rendering. In terms of implementation, during WebGL rendering this information is loaded and parsed, and the final rendering effect is computed for each category division from the geometric shape and illumination model of the corresponding attributes, combined with the three-dimensional space characterization information.
For the three-dimensional visualization method provided by the embodiment of the application, the execution subject may be a three-dimensional visualization device. In the embodiment of the application, the method is described by taking a three-dimensional visualization device executing the three-dimensional visualization method as an example.
In some embodiments of the present application, a three-dimensional visualization device is provided, fig. 22 shows a block diagram of a three-dimensional visualization device provided in an embodiment of the present application, and as shown in fig. 22, a three-dimensional visualization device 300 for a two-dimensional map includes a generating module 310 and a display module 320. The generating module 310 is configured to obtain object space information and object position information in the two-dimensional map, and generate three-dimensional characterization information based on the object space information and the object position information. The display module 320 is configured to construct a three-dimensional model of the object, render the three-dimensional model, obtain a three-dimensional space corresponding to the two-dimensional map, and visually display the three-dimensional space.
According to this embodiment, three-dimensional representation information of the two-dimensional map is constructed, three-dimensional modeling and rendering are performed according to the three-dimensional representation information, and the resulting three-dimensional space is visualized. More realistic three-dimensional visual rendering is performed on the two-dimensional map, providing a more real experience for the user. In this embodiment, there is no limitation on the acquisition mode of the two-dimensional map; any two-dimensional map can be visually displayed through this embodiment, so this embodiment is simple and effective, highly customizable, strongly universal, and widely applicable.
The three-dimensional visualization device 300 provided in the embodiment of the present application can implement each process of the above-mentioned three-dimensional visualization method embodiment, and can achieve the same technical effects, and for avoiding repetition, the description is omitted here.
The three-dimensional visualization device in the embodiment of the application can be an electronic device, and can also be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. The electronic device may be a Mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a Mobile internet appliance (Mobile INTERNET DEVICE, MID), an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., which are not particularly limited in the embodiments of the present application.
The three-dimensional visualization device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The three-dimensional visualization device provided by the embodiment of the application can realize each process realized by the embodiment of the method, and in order to avoid repetition, the description is omitted.
The embodiment of the application also provides a robot, which adopts the above three-dimensional visualization method of a two-dimensional map to perform three-dimensional visual display of the two-dimensional map. The three-dimensional visualization method can realize each process of the above embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 23, the embodiment of the present application further provides an electronic device 400, where the electronic device 400 includes a processor 402 and a memory 404, and a program or an instruction that can be executed on the processor 402 is stored in the memory 404, and when the program or the instruction is executed by the processor 402, the steps of the embodiment of the method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 24 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to, a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, and a processor 1110.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1110 by a power management system, such as to perform functions such as managing charging, discharging, and power consumption by the power management system. The electronic device structure shown in fig. 24 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
The processor 1110 is configured to obtain object space information and object position information in the two-dimensional map, and generate three-dimensional characterization information based on the object space information and the object position information.
And the processor 1110 is used for constructing a three-dimensional object model based on the three-dimensional characterization information, rendering the three-dimensional object model to obtain a three-dimensional space corresponding to the two-dimensional map, and performing visual display.
The processor 1110 provided in the embodiment of the present application may implement each process of the above three-dimensional visualization method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be appreciated that in embodiments of the present application, the input unit 1104 may include a graphics processor (Graphics Processing Unit, GPU) 11041 and a microphone 11042, the graphics processor 11041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. The touch panel 11071 is also referred to as a touch screen. The touch panel 11071 may include two parts, a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 1109 may include volatile memory or nonvolatile memory, or the memory 1109 may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable EPROM (EEPROM), or a flash Memory. The volatile memory may be random access memory (Random Access Memory, RAM), static random access memory (STATIC RAM, SRAM), dynamic random access memory (DYNAMIC RAM, DRAM), synchronous Dynamic Random Access Memory (SDRAM), double data rate Synchronous dynamic random access memory (Double DATA RATE SDRAM, DDRSDRAM), enhanced Synchronous dynamic random access memory (ENHANCED SDRAM, ESDRAM), synchronous link dynamic random access memory (SYNCH LINK DRAM, SLDRAM), and Direct random access memory (DRRAM). Memory 1109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1110 may include one or more processing units and, optionally, processor 1110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1110.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, realizes each process of the three-dimensional visualization method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
The processor is a processor in the electronic device in the above embodiment. Readable storage media include computer readable storage media such as computer readable memory ROM, random access memory RAM, magnetic or optical disks, and the like.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the three-dimensional visualization method embodiment can be realized, the same technical effects can be achieved, and the repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the three-dimensional visualization method embodiments described above, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus the necessary general-purpose hardware platform, or of course by hardware alone, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application may be embodied, in essence or in the part that contributes to the prior art, in the form of a computer software product stored on a storage medium (e.g. ROM/RAM, magnetic disk, or optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may derive many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (13)

1. A three-dimensional visualization method of a two-dimensional map, comprising:
acquiring object space information and object position information in a two-dimensional map, and generating three-dimensional characterization information based on the object space information and the object position information; and
constructing an object three-dimensional model based on the three-dimensional characterization information, rendering the object three-dimensional model to obtain a three-dimensional space corresponding to the two-dimensional map, and performing visual display;
wherein the acquiring object space information and object position information in the two-dimensional map and generating three-dimensional characterization information based on the object space information and the object position information specifically comprises:
acquiring the object space information in the two-dimensional map;
acquiring a type corresponding to the object;
acquiring first position information of a center point of the object at an absolute scale;
acquiring second position information of the object based on the object space information and the type corresponding to the object, wherein the object position information comprises the first position information and the second position information;
performing height alignment on the first position information;
performing horizontal center alignment on the second position information; and
generating the three-dimensional characterization information based on the aligned first position information and second position information.
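Outside the claim language itself, the generation of three-dimensional characterization information recited in claim 1 can be sketched in code. This is an illustrative assumption throughout: the `TYPE_DIMENSIONS` table, the class and function names, and the alignment conventions are inventions of this sketch, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical type-to-dimension correspondence (claim 4 constructs such a table);
# the concrete heights and thicknesses here are illustrative assumptions.
TYPE_DIMENSIONS = {
    "wall": {"height": 2.8, "thickness": 0.24},
    "door": {"height": 2.0, "thickness": 0.05},
}

@dataclass
class Object2D:
    obj_type: str
    center: tuple    # (x, y) of the center point at absolute scale
    rotation: float  # rotation angle of the center point (claim 2)
    outline: list    # 2D contour points (the object space information)

def characterize(obj: Object2D, floor_z: float = 0.0) -> dict:
    """Generate three-dimensional characterization information from 2D data."""
    dims = TYPE_DIMENSIONS[obj.obj_type]   # height/thickness looked up by type
    xs = [p[0] for p in obj.outline]
    length = max(xs) - min(xs)             # object length from space information
    first = {                              # first position info, height-aligned to the floor
        "x": obj.center[0], "y": obj.center[1],
        "z": floor_z + dims["height"] / 2.0,
        "rotation": obj.rotation,
    }
    second = {                             # second position info, horizontally center-aligned
        "length": length,
        "height": dims["height"],
        "thickness": dims["thickness"],
    }
    return {"first": first, "second": second}
```

For example, a 0.9 m door outline centered at (1, 2) yields a characterization whose first position information sits at half the door height above the floor.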
2. The three-dimensional visualization method of a two-dimensional map according to claim 1, wherein the acquiring first position information of the center point of the object at the absolute scale specifically comprises:
acquiring a rotation angle and position coordinates of the center point of the object at the absolute scale.
3. The three-dimensional visualization method of a two-dimensional map according to claim 1, wherein the acquiring second position information of the object based on the object space information and the type corresponding to the object specifically comprises:
acquiring a height and a thickness of the object based on the type; and
acquiring a length of the object based on the object space information.
4. The three-dimensional visualization method of a two-dimensional map according to claim 3, further comprising, before the acquiring the type corresponding to the object:
constructing a correspondence between objects and types; and
constructing a correspondence between types and heights and thicknesses.
5. The three-dimensional visualization method of a two-dimensional map according to claim 1, wherein the object space information comprises an exterior building contour and/or an interior building contour.
6. The three-dimensional visualization method of a two-dimensional map according to claim 1, wherein the constructing the object three-dimensional model based on the three-dimensional characterization information, rendering the object three-dimensional model to obtain a three-dimensional space corresponding to the two-dimensional map, and performing visual display specifically comprises:
constructing a polygonal mesh model with the corresponding position and orientation based on the three-dimensional characterization information; and
rendering the polygonal mesh model with a preset theme style to obtain the three-dimensional space corresponding to the two-dimensional map, and performing visual display.
7. The three-dimensional visualization method of a two-dimensional map according to claim 6, wherein the rendering the polygonal mesh model with a preset theme style specifically comprises:
rendering the polygonal mesh model at the web page end via a rendering engine, using the preset theme style.
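As a hedged sketch of the mesh-construction step of claim 6 (the box geometry, face ordering, and function name are assumptions of this sketch), each characterized object can be turned into a polygonal mesh at its position and orientation; a web-page-end implementation as in claim 7 would then hand such vertex and face buffers to a rendering engine:

```python
import math

def box_mesh(length, height, thickness, x, y, z, rotation_deg):
    """Build a box mesh (vertices, quad faces) for one object, rotated
    about its center point and placed at the given position."""
    hl, ht, hh = length / 2, thickness / 2, height / 2
    # 8 local corners: sign of x (sx), y (sy), z (sz)
    corners = [(sx * hl, sy * ht, sz * hh)
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    c = math.cos(math.radians(rotation_deg))
    s = math.sin(math.radians(rotation_deg))
    # rotate in the horizontal plane, then translate to (x, y, z)
    vertices = [(x + cx * c - cy * s, y + cx * s + cy * c, z + cz)
                for cx, cy, cz in corners]
    # six quad faces, by corner index: -x, +x, -y, +y, -z, +z
    faces = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]
    return vertices, faces
```

A preset theme style would then amount to a mapping from object type to the material or color applied to each such mesh at render time.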
8. The three-dimensional visualization method of a two-dimensional map according to claim 6, wherein, before the rendering of the polygonal mesh model with the preset theme style, the three-dimensional visualization method of a two-dimensional map further comprises:
constructing at least one theme style.
9. The three-dimensional visualization method of a two-dimensional map according to claim 6, further comprising:
in response to detecting selection of a theme style to be switched to,
switching to the theme style to be switched to, and displaying it.
10. The three-dimensional visualization method of a two-dimensional map according to any one of claims 1 to 9, wherein, after the obtaining of the three-dimensional space corresponding to the two-dimensional map and the visual display, the three-dimensional visualization method of the two-dimensional map further comprises:
acquiring an object to be added;
adjusting third position information of the object to be added; and
adding the object to be added to the three-dimensional space based on the third position information, and performing visual display.
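The placement step of claim 10 amounts to translating the added object's mesh into the existing scene at the adjusted third position information. A minimal sketch, with the scene represented as a plain list and all names assumed:

```python
def place_in_scene(scene, mesh_vertices, third_position):
    """Translate a mesh by the adjusted third position information
    and append it to the three-dimensional scene."""
    dx, dy, dz = third_position
    placed = [(x + dx, y + dy, z + dz) for (x, y, z) in mesh_vertices]
    scene.append(placed)  # the updated scene would then be re-rendered for display
    return scene
```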
11. A three-dimensional visualization apparatus for a two-dimensional map, comprising:
a generating module, configured to acquire object space information and object position information in a two-dimensional map and to generate three-dimensional characterization information based on the object space information and the object position information; and
a display module, configured to construct an object three-dimensional model, render the object three-dimensional model to obtain a three-dimensional space corresponding to the two-dimensional map, and perform visual display;
wherein the generating module is further configured to:
acquire the object space information in the two-dimensional map;
acquire a type corresponding to the object;
acquire first position information of a center point of the object at an absolute scale;
acquire second position information of the object based on the object space information and the type corresponding to the object, wherein the object position information comprises the first position information and the second position information;
perform height alignment on the first position information;
perform horizontal center alignment on the second position information; and
generate the three-dimensional characterization information based on the aligned first position information and second position information.
12. An electronic device, comprising:
a memory having programs or instructions stored thereon; and
a processor configured, when executing the programs or instructions, to implement the steps of the three-dimensional visualization method of a two-dimensional map according to any one of claims 1 to 10.
13. A readable storage medium having stored thereon programs or instructions which, when executed by a processor, implement the steps of the three-dimensional visualization method of a two-dimensional map according to any one of claims 1 to 10.
CN202210461197.3A 2022-04-28 2022-04-28 Three-dimensional visualization method and device, robot, electronic equipment and storage medium Active CN114820968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210461197.3A CN114820968B (en) 2022-04-28 2022-04-28 Three-dimensional visualization method and device, robot, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210461197.3A CN114820968B (en) 2022-04-28 2022-04-28 Three-dimensional visualization method and device, robot, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114820968A CN114820968A (en) 2022-07-29
CN114820968B true CN114820968B (en) 2025-08-01

Family

ID=82508800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210461197.3A Active CN114820968B (en) 2022-04-28 2022-04-28 Three-dimensional visualization method and device, robot, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114820968B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761082A (en) * 2022-10-21 2023-03-07 圣名科技(广州)有限责任公司 Method and apparatus for rendering three-dimensional graphics, electronic device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780736A (en) * 2017-01-09 2017-05-31 网易(杭州)网络有限公司 Map data processing method and device, three-dimensional map generation method and device
CN107622519A (en) * 2017-09-15 2018-01-23 东南大学 3D model hybrid rendering system and method based on mobile device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109260708B (en) * 2018-08-24 2020-01-10 腾讯科技(深圳)有限公司 Map rendering method and device and computer equipment
CN110095786B (en) * 2019-04-30 2021-02-02 北京云迹科技有限公司 Three-dimensional point cloud map generation method and system based on one-line laser radar
CN112927278B (en) * 2021-02-02 2024-10-22 深圳市杉川机器人有限公司 Control method, control device, robot and computer readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780736A (en) * 2017-01-09 2017-05-31 网易(杭州)网络有限公司 Map data processing method and device, three-dimensional map generation method and device
CN107622519A (en) * 2017-09-15 2018-01-23 东南大学 3D model hybrid rendering system and method based on mobile device

Also Published As

Publication number Publication date
CN114820968A (en) 2022-07-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant