
CN120689528A - Method, device, storage medium and equipment for 3D real-scene CAD drawing - Google Patents

Method, device, storage medium and equipment for 3D real-scene CAD drawing

Info

Publication number
CN120689528A
CN120689528A (Application CN202511074827.1A)
Authority
CN
China
Prior art keywords
cad
dimensional
scene
real
background display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511074827.1A
Other languages
Chinese (zh)
Inventor
吴泽锦
陈建成
谢毅
林朔
许浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Survey And Design Institute Fujian Co ltd
PowerChina Huadong Engineering Corp Ltd
Original Assignee
East China Survey And Design Institute Fujian Co ltd
PowerChina Huadong Engineering Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Survey And Design Institute Fujian Co ltd, PowerChina Huadong Engineering Corp Ltd filed Critical East China Survey And Design Institute Fujian Co ltd
Publication of CN120689528A publication Critical patent/CN120689528A/en
Pending legal-status Critical Current

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a CAD drawing method, device, storage medium and equipment that fuse three-dimensional real scenes, suitable for the CAD software and engineering industries. The method comprises: generating a background display window in a background layer of a CAD three-dimensional drawing area, wherein the background display window faces the virtual camera of the CAD three-dimensional drawing environment and is used for displaying a three-dimensional real-scene point cloud model of a target scene; acquiring the position and direction information of the virtual camera in real time and generating view content based on that information; acquiring a screen coordinate point selected by a user on the three-dimensional real-scene point cloud model in the background display window; generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangular mesh model in the background display window; and generating a corresponding CAD graphic in the CAD three-dimensional drawing environment from the three-dimensional real-scene pick-up point in combination with a preselected CAD drawing tool.

Description

Method, device, storage medium and equipment for three-dimensional live-action CAD drawing
Technical Field
The invention relates to a CAD drawing method, device, storage medium and equipment that fuse three-dimensional real scenes, applicable to the CAD software and engineering industries.
Background
In engineering-related industries such as construction, geology, mining and municipal transportation, operations take place over wide field areas and depend heavily on three-dimensional spatial data. For a long time, however, site information has mainly been obtained by traditional surveying and mapping, with spatial information conveyed through carriers such as text, drawings (topographic maps) and image records. These carriers fall short in authenticity, intuitiveness, accuracy, timeliness and comprehensiveness, making it difficult to fully restore the real three-dimensional conditions of a site. This seriously affects the efficiency and quality of site-based work and constrains the soundness and execution efficiency of engineering decisions.
Disclosure of Invention
The invention aims to solve the above technical problem and provides a method, device, storage medium and equipment for three-dimensional real-scene CAD drawing.
The technical scheme adopted by the invention is a three-dimensional real-scene CAD drawing method comprising the following steps:
S100, generating a background display window in a background layer of the CAD three-dimensional drawing area, wherein the background display window faces the virtual camera of the CAD three-dimensional drawing environment and is used for displaying a three-dimensional real-scene point cloud model of a target scene;
the coordinate system of the three-dimensional real-scene point cloud model displayed in the background display window is consistent with the coordinate system of the CAD three-dimensional drawing environment, and the two share a virtual camera;
the three-dimensional real-scene point cloud model of the target scene comprises a real-scene model and a point cloud model of the target scene, the coordinate systems of the two are consistent, and a real-scene surface triangular mesh model is generated based on the point cloud model of the target scene;
S200, acquiring the position and direction information of the virtual camera in real time, and generating view content based on that information, wherein the view content comprises the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment;
S300, acquiring a screen coordinate point selected by a user on the three-dimensional real-scene point cloud model in the background display window, and generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangular mesh model in the background display window;
S400, generating a corresponding CAD graphic in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up point in combination with a preselected CAD drawing tool.
Generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangular mesh model in the background display window comprises the following steps:
determining a starting point P0 in the CAD three-dimensional drawing environment based on the position and direction information of the virtual camera;
calculating a corresponding spatial coordinate point P1 in the CAD three-dimensional drawing environment based on the screen coordinate point and the position and direction information of the virtual camera;
determining a ray with P0 as its base point and the vector P0P1 as its direction; the intersection point of this ray and the real-scene surface triangular mesh model of the target scene in the background display window is the three-dimensional real-scene pick-up point corresponding to the screen coordinate point selected by the user.
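The patent gives no code; as a non-authoritative sketch, the construction of P0 and P1 from a screen coordinate can be illustrated with a simple pinhole-camera description. The function name and the camera parameterization (position, forward, up, vertical field of view) are assumptions made for this example, not part of the patent:

```python
import numpy as np

def screen_point_to_ray(cam_pos, cam_forward, cam_up, fov_y_deg,
                        width, height, sx, sy):
    """Convert a screen pixel (sx, sy) into a picking ray (P0, direction).

    P0 is the camera position; P1 is the corresponding point on an image
    plane one unit in front of the camera; the ray direction is the
    normalized P0->P1 vector, matching the ray described in the text.
    """
    forward = cam_forward / np.linalg.norm(cam_forward)
    right = np.cross(forward, cam_up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    aspect = width / height
    half_h = np.tan(np.radians(fov_y_deg) / 2.0)  # image-plane half-height at distance 1
    half_w = half_h * aspect

    # Normalized device coordinates in [-1, 1]; screen y grows downward.
    ndc_x = (2.0 * sx / width) - 1.0
    ndc_y = 1.0 - (2.0 * sy / height)

    p1 = cam_pos + forward + right * (ndc_x * half_w) + up * (ndc_y * half_h)
    direction = p1 - cam_pos
    return cam_pos, direction / np.linalg.norm(direction)
```

Picking the center pixel yields a ray straight along the camera's viewing direction, which is a quick sanity check for any such unprojection routine.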
Generating view content based on the camera position and direction information, wherein the view content comprises the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment, comprises the following steps:
based on the position and direction information of the virtual camera, downloading the three-dimensional real-scene point cloud model data corresponding to the CAD three-dimensional drawing area from the cloud server in real time, rendering it into an image, and displaying it in the background display window.
Generating view content based on the camera position and direction information further comprises the following steps:
based on the spatial position relation between the real-scene surface triangular mesh model corresponding to the three-dimensional real-scene point cloud model displayed in the background display window and the drawn CAD graphics in the CAD three-dimensional drawing environment, and in combination with the position and direction information of the virtual camera, judging whether the real-scene surface triangular mesh model covers the CAD graphics, and displaying only the uncovered CAD graphics in the view content.
Generating a corresponding CAD graphic in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up point in combination with a preselected CAD drawing tool comprises the following steps:
generating the corresponding CAD graphic based on the three-dimensional real-scene pick-up points in combination with the graphic generation rule corresponding to the preselected CAD drawing tool.
The shape, size, and on-view position of the background display window can be adjusted based on a user's window operation request.
The background display window can be opened or closed based on a user's window operation request.
A three-dimensional real-scene CAD drawing apparatus comprising:
a drawing environment construction module for generating a background display window in a background layer of the CAD three-dimensional drawing area, wherein the background display window faces the virtual camera of the CAD three-dimensional drawing environment and is used for displaying a three-dimensional real-scene point cloud model of a target scene;
the coordinate system of the three-dimensional real-scene point cloud model displayed in the background display window is consistent with the coordinate system of the CAD three-dimensional drawing environment, and the two share a virtual camera;
the three-dimensional real-scene point cloud model of the target scene comprises a real-scene model and a point cloud model of the target scene, the coordinate systems of the two are consistent, and a real-scene surface triangular mesh model is generated based on the point cloud model of the target scene;
a camera imaging module for acquiring the position and direction information of the virtual camera in real time and generating view content based on that information, wherein the view content comprises the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment;
a pick-up point generation module for acquiring a screen coordinate point selected by a user on the three-dimensional real-scene point cloud model in the background display window, and generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangular mesh model in the background display window;
and a CAD drawing module for generating a corresponding CAD graphic in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up point in combination with a preselected CAD drawing tool.
A storage medium having stored thereon a computer program executable by a processor, the computer program, when executed, implementing the method of three-dimensional real-scene CAD drawing.
A CAD drawing apparatus having a memory and a processor, the memory having stored thereon a computer program executable by the processor, the computer program, when executed, implementing the method of three-dimensional real-scene CAD drawing.
By integrating the three-dimensional real-scene point cloud model into the CAD three-dimensional drawing environment through the background display window, unifying their coordinate systems and sharing a single virtual camera, the invention allows the content of the point cloud model and of the CAD drawing environment to be captured synchronously by the virtual camera, composed into view content, and displayed on screen. A user can select a required point while observing the three-dimensional real-scene point cloud model on screen; the three-dimensional real-scene pick-up point in the CAD drawing environment is then determined from the screen coordinates of the selected point in combination with the real-scene surface triangular mesh model, and drawing is completed based on that pick-up point.
With the invention, only three-dimensional real-scene images and point cloud data need to be acquired at the target engineering site: the images preserve the visual characteristics of the target and the point cloud preserves its spatial characteristics, so the CAD drawing work can be completed quickly and accurately in software, improving working efficiency.
Drawings
Fig. 1 is a flow chart of an embodiment.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present invention, "a plurality" means two or more, and where "first" and "second" are used to distinguish technical features, they should not be construed as indicating or implying relative importance, the number of the indicated features, or their precedence. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art.
Three-dimensional reality modeling technology (3D Reality Modeling, hereinafter "three-dimensional real scene") constructs a high-precision, measurable three-dimensional real-scene point cloud model by means of oblique photography, laser scanning (LiDAR), multi-view image fusion and the like, achieving an accurate reproduction of the site. It is a new virtual reality (VR) technology that has matured only in recent years, and it resolves the long-standing difficulty in the surveying and mapping field of fusing realistic visuals with accurate spatial information. The technology uses a computer to organically fuse omnidirectional site photographs with laser-scanned spatial point clouds, constructing a three-dimensional real-scene point cloud model that integrates three-dimensional space and real-scene visuals and accurately reproducing the site as a digital twin space accessible over the network. This benefits engineering efficiency and quality, with a visual realism and accuracy that oblique photography alone can hardly achieve.
CAD (Computer-Aided Design) has been widely used in fields such as construction, electronics, electrical engineering, scientific research, machinery, clothing and geology, and is an indispensable tool for engineers.
Embodiment 1. As shown in fig. 1, this embodiment is a method of three-dimensional real-scene CAD drawing, comprising the following steps:
S100, generating a background display window in a background layer of the CAD three-dimensional drawing area, wherein the background display window faces the virtual camera of the CAD three-dimensional drawing environment and is used for displaying a three-dimensional real-scene point cloud model of the target scene.
In this embodiment, the shape, size, and on-view position of the background display window can be adjusted based on a user's window operation request; for example, the window can be enlarged to cover the entire imaging area of the virtual camera, or reduced to cover only part of it.
The background display window in this embodiment can be opened or closed based on a user's window operation request.
In this embodiment, the coordinate system of the three-dimensional real-scene point cloud model displayed through the background display window is consistent with the coordinate system of the CAD three-dimensional drawing environment, and the two share a virtual camera, ensuring consistent view display and operation.
In this embodiment, a general-purpose laser real-scene scanning device collects the three-dimensional real-scene point cloud model of the target scene. The model comprises high-definition three-dimensional real-scene images of the target scene and point cloud data, and the coordinate systems of the real-scene model and the point cloud model are consistent.
In this embodiment, the three-dimensional real-scene point cloud model is stored on a cloud server and provided for CAD drawing as a network service. Only the view content corresponding to the position and direction of the virtual camera is downloaded from the cloud server, and imaging is performed by the high-performance cloud server, which improves imaging efficiency, reduces local software and hardware requirements, and facilitates adoption.
In this embodiment, a real-scene surface triangular mesh model of the target scene is generated based on the point cloud model of the target scene; the triangular mesh model is not displayed, but remains registered with the real-scene model.
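The patent does not name a meshing algorithm for building the surface mesh from the point cloud. As a hedged illustration only, one common choice for terrain-like scans is a 2.5D Delaunay triangulation of the points' horizontal projection (fully three-dimensional scenes would need a volumetric method such as Poisson reconstruction instead; the function name is invented for this sketch):

```python
import numpy as np
from scipy.spatial import Delaunay

def point_cloud_to_surface_mesh(points):
    """Triangulate a terrain-like point cloud into a surface mesh.

    points: (N, 3) array of XYZ samples. Triangulating the XY projection
    yields a 2.5D triangular mesh whose vertices are the original points,
    which can then serve as the (hidden) real-scene surface mesh.
    """
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points[:, :2])    # triangulate in the horizontal plane
    return points, tri.simplices     # vertices and (M, 3) triangle index array
```

Because the mesh keeps the original point coordinates as its vertices, it stays in the same coordinate system as the point cloud and the real-scene model, matching the consistency requirement stated above.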
In this embodiment, the CAD three-dimensional drawing area is generated based on the virtual camera in the CAD three-dimensional drawing environment. The drawing area content consists of the CAD three-dimensional drawing environment and the three-dimensional real-scene point cloud model, both under the virtual camera's viewing angle; the image content corresponding to the point cloud model is displayed in the drawing area through the background display window.
S200, acquiring the position and direction information of the virtual camera in real time, and generating view content based on that information, wherein the view content comprises the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment.
In this embodiment, the user may adjust the position and direction of the virtual camera through a mouse, a keyboard, etc. to implement rotation, translation, and scaling of the screen display content.
In this embodiment, based on the position and direction information of the virtual camera, three-dimensional real-scene point cloud model data corresponding to the CAD three-dimensional drawing area is downloaded from the cloud server in real time, and rendered into corresponding images for display in the background display window.
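The pose-driven real-time download described above can be sketched as a client-side culling step: before requesting data, the client keeps only the point-cloud tiles that fall inside the camera's view. The tiling scheme, the cone-shaped frustum approximation, and all names below are assumptions for illustration, not details from the patent:

```python
import numpy as np

def visible_tiles(cam_pos, cam_forward, tile_centers, max_dist, fov_deg=90.0):
    """Select which point-cloud tiles to request from the cloud server.

    Keeps tiles whose center lies within max_dist of the camera and inside
    a cone approximating the view frustum. tile_centers: (N, 3) array.
    Returns the indices of the tiles worth downloading for this pose.
    """
    forward = cam_forward / np.linalg.norm(cam_forward)
    to_tiles = tile_centers - cam_pos
    dist = np.linalg.norm(to_tiles, axis=1)
    # avoid division by zero for a tile centered exactly at the camera
    cos_angle = (to_tiles @ forward) / np.where(dist == 0, 1.0, dist)
    in_cone = cos_angle >= np.cos(np.radians(fov_deg) / 2.0)
    return np.nonzero((dist <= max_dist) & (in_cone | (dist == 0)))[0]
```

Re-running this selection whenever the camera moves gives the incremental, view-dependent streaming behavior the embodiment attributes to the cloud service.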
In this embodiment, the view content corresponding to the existing CAD graphics is generated based on the position and direction information of the virtual camera in combination with the existing CAD graphics in the CAD three-dimensional drawing environment.
Based on the spatial position relation between the real-scene surface triangular mesh model corresponding to the displayed three-dimensional real-scene point cloud model and the existing CAD graphics in the CAD three-dimensional drawing environment, and in combination with the position and direction information of the virtual camera, the triangular mesh model is taken as a boundary: the parts of the CAD graphics covered by the mesh are hidden according to the rules of perspective, and only the uncovered CAD graphics are displayed in the view content, restoring the visual effect of the true spatial relation between the real-scene model and the CAD graphics.
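The covering judgment above can be sketched as a per-point visibility test: cast a ray from the camera toward a CAD vertex and hide the vertex if the surface mesh is hit strictly closer than the vertex. The brute-force loop below (names assumed; a production renderer would use a depth buffer or a spatial index rather than testing every triangle) illustrates the idea:

```python
import numpy as np

def is_point_occluded(cam_pos, point, triangles, eps=1e-9):
    """Return True if the surface mesh blocks the camera's view of `point`.

    triangles: iterable of (v0, v1, v2) vertex triples. Uses the
    Moller-Trumbore ray/triangle test; a hit with 0 < t < |point - cam|
    means the mesh covers the point, so it should be hidden in the view.
    """
    ray = np.asarray(point, float) - cam_pos
    dist = np.linalg.norm(ray)
    d = ray / dist
    for v0, v1, v2 in triangles:
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(d, e2)
        det = e1 @ p
        if abs(det) < eps:
            continue                       # ray parallel to this triangle
        inv = 1.0 / det
        s = cam_pos - v0
        u = (s @ p) * inv
        if u < 0 or u > 1:
            continue
        q = np.cross(s, e1)
        v = (d @ q) * inv
        if v < 0 or u + v > 1:
            continue
        t = (e2 @ q) * inv
        if eps < t < dist - eps:           # hit strictly in front of the point
            return True
    return False
```

Applying this test to each vertex (or sample point) of a CAD entity yields exactly the covered/uncovered split the paragraph describes.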
S300, acquiring a screen coordinate point selected by a user on the three-dimensional real-scene point cloud model in the background display window, and generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangular mesh model in the background display window.
S310, determining a starting point P0 in the CAD three-dimensional drawing environment based on the position and direction information of the virtual camera;
S320, calculating a corresponding spatial coordinate point P1 in the CAD three-dimensional drawing environment based on the screen coordinate point and the position and direction information of the virtual camera;
S330, determining a ray with P0 as its base point and the vector P0P1 as its direction, and taking the intersection point of this ray and the real-scene surface triangular mesh model of the target scene in the background display window as the three-dimensional real-scene pick-up point corresponding to the screen coordinate point selected by the user.
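Steps S310–S330 can be sketched as a nearest-hit ray/mesh intersection. This is a non-authoritative illustration (the function name and the choice of the Möller–Trumbore intersection test are assumptions; the patent only specifies "the intersection point of the ray and the mesh"):

```python
import numpy as np

def pick_point_on_mesh(p0, p1, triangles, eps=1e-9):
    """Intersect the ray P0->P1 with the real-scene surface mesh.

    p0: ray base point (from the camera pose, S310); p1: the unprojected
    spatial point for the screen coordinate (S320). Returns the nearest
    intersection -- the three-dimensional real-scene pick-up point (S330) --
    or None if the ray misses every triangle.
    """
    d = np.asarray(p1, float) - p0
    d /= np.linalg.norm(d)
    best_t = np.inf
    for v0, v1, v2 in triangles:           # Moller-Trumbore intersection test
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(d, e2)
        det = e1 @ p
        if abs(det) < eps:
            continue                       # ray parallel to this triangle
        inv = 1.0 / det
        s = p0 - v0
        u = (s @ p) * inv
        if u < 0 or u > 1:
            continue
        q = np.cross(s, e1)
        v = (d @ q) * inv
        if v < 0 or u + v > 1:
            continue
        t = (e2 @ q) * inv
        if eps < t < best_t:
            best_t = t                     # keep the closest hit only
    if not np.isfinite(best_t):
        return None
    return p0 + best_t * d
```

Keeping only the smallest positive t matters when the ray crosses the terrain surface more than once (e.g. a ridge in front of a valley): the user expects to pick the surface point nearest the camera.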
S400, generating a corresponding CAD graphic in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up points, in combination with the graphic generation rule corresponding to the preselected CAD drawing tool.
In this embodiment, the user starts drawing with a CAD drawing tool (such as a straight-line or curve tool) and obtains the coordinates of the required points by real-scene visual positioning, completing the drawing work against the real scene and improving drawing quality and efficiency.
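As a hedged illustration of a graphic generation rule (S400) for a line-drawing tool — the entity format below is invented for this example and is not taken from the patent or any particular CAD system:

```python
import math

def make_polyline_entity(pick_points):
    """Sketch of one graphic generation rule: chain pick-up points into a polyline.

    Converts a sequence of three-dimensional real-scene pick-up points into
    a minimal CAD polyline record: the vertex list plus its total length,
    which a real drawing module would turn into a native CAD entity.
    """
    if len(pick_points) < 2:
        raise ValueError("a polyline needs at least two pick-up points")
    verts = [tuple(float(c) for c in p) for p in pick_points]
    length = sum(math.dist(a, b) for a, b in zip(verts, verts[1:]))
    return {"type": "POLYLINE3D", "vertices": verts, "length": length}
```

Each drawing tool (line, curve, polygon, ...) would supply its own such rule, consuming the same stream of pick-up points produced by steps S310–S330.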
In this embodiment, the background display window can be opened and closed at will based on a user's window operation request: with the window open, drawing proceeds in real-scene mode; with it closed, drawing proceeds in a pure CAD mode identical to traditional CAD. The two modes can be switched freely, combining the advantages of three-dimensional real scenes and of CAD. Each drawing operation can therefore be completed in pure real-scene mode, pure CAD mode, or a mixture of the two, switching as the drawing requires.
Embodiment 2. This embodiment is a three-dimensional real-scene CAD drawing apparatus, comprising:
a drawing environment construction module for generating a background display window in a background layer of the CAD three-dimensional drawing area, wherein the background display window faces the virtual camera of the CAD three-dimensional drawing environment and is used for displaying a three-dimensional real-scene point cloud model of a target scene;
the coordinate system of the three-dimensional real-scene point cloud model displayed in the background display window is consistent with the coordinate system of the CAD three-dimensional drawing environment, and the two share a virtual camera;
the three-dimensional real-scene point cloud model of the target scene comprises a real-scene model and a point cloud model of the target scene, the coordinate systems of the two are consistent, and a real-scene surface triangular mesh model is generated based on the point cloud model of the target scene;
a camera imaging module for acquiring the position and direction information of the virtual camera in real time and generating view content based on that information, wherein the view content comprises the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment;
a pick-up point generation module for acquiring a screen coordinate point selected by a user on the three-dimensional real-scene point cloud model in the background display window, and generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangular mesh model in the background display window;
and a CAD drawing module for generating a corresponding CAD graphic in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up point in combination with a preselected CAD drawing tool.
Embodiment 3. This embodiment is a storage medium having stored thereon a computer program executable by a processor, the computer program, when executed, implementing the method of three-dimensional real-scene CAD drawing of Embodiment 1.
Embodiment 4. This embodiment is a CAD drawing apparatus fusing three-dimensional real scenes, having a memory and a processor, the memory having stored thereon a computer program executable by the processor, the computer program, when executed, performing the method of three-dimensional real-scene CAD drawing of Embodiment 1.
Furthermore, while the present invention has been described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features described above may be integrated in a single physical device and/or software module or one or more of the functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium include an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance by optical scanning of the paper, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of techniques known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with appropriate combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the foregoing description of this specification, references to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.
While the preferred embodiments of the present application have been described in detail, the present application is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications and substitutions without departing from the spirit of the present application; such equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (10)

1. A method of three-dimensional real-scene CAD drawing, comprising:
S100, generating a background display window in a background layer of a CAD three-dimensional drawing area, wherein the background display window faces a virtual camera of the CAD three-dimensional drawing environment and is used for displaying a three-dimensional real-scene point cloud model of a target scene;
wherein the coordinate system of the three-dimensional real-scene point cloud model displayed in the background display window is consistent with the coordinate system of the CAD three-dimensional drawing environment, and the two share the virtual camera;
the three-dimensional real-scene point cloud model of the target scene comprises a real-scene model and a point cloud model of the target scene, the coordinate systems of the real-scene model and the point cloud model are consistent, and a real-scene surface triangle mesh model is generated based on the point cloud model of the target scene;
S200, acquiring position and direction information of the virtual camera in real time, and generating view content based on the camera position and direction information, wherein the view content comprises the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment;
S300, acquiring a screen coordinate point selected by a user based on the three-dimensional real-scene point cloud model in the background display window, and generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangle mesh model in the background display window;
S400, generating corresponding CAD graphics in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up point in combination with a preselected CAD drawing tool.
2. The method of three-dimensional real-scene CAD drawing according to claim 1, wherein the generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangle mesh model in the background display window comprises:
determining a starting point P0 in the CAD three-dimensional drawing environment based on the position and direction information of the virtual camera;
calculating a spatial coordinate point P1 corresponding to the screen coordinate point in the CAD three-dimensional drawing environment based on the screen coordinate point and the position and direction information of the virtual camera; and
determining a ray taking P0 as a base point and the vector P0P1 as its direction, wherein the intersection point of the ray and the real-scene surface triangle mesh model of the target scene in the background display window is the three-dimensional real-scene pick-up point corresponding to the screen coordinate point selected by the user.
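The ray casting described in this claim can be sketched in code. The following is an illustrative reconstruction only, not the patented implementation: `ray_triangle_intersect` is the standard Möller–Trumbore test, and `pick_point` performs a brute-force scan of the real-scene surface triangle mesh for the nearest hit along the ray from P0 toward P1; the function names, the tuple-based vectors, and the linear scan (a production system would use a spatial index) are all assumptions.

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle test.

    Returns the ray parameter t at the hit point, or None on a miss."""
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det      # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None      # hit must lie in front of P0


def pick_point(p0, p1, triangles):
    """Cast the ray from P0 through P1 and return the nearest mesh hit,
    i.e. the three-dimensional real-scene pick-up point (or None)."""
    direction = (p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2])
    best_t = None
    for v0, v1, v2 in triangles:
        t = ray_triangle_intersect(p0, direction, v0, v1, v2)
        if t is not None and (best_t is None or t < best_t):
            best_t = t
    if best_t is None:
        return None
    return tuple(p0[i] + best_t * direction[i] for i in range(3))
```

For example, a ray from P0 = (0, 0, -1) through P1 = (0, 0, 0) against a single triangle lying in the z = 0 plane returns the pick-up point (0.0, 0.0, 0.0).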
3. The method of three-dimensional real-scene CAD drawing according to claim 1, wherein the generating view content based on the camera position and direction information, the view content comprising the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment, comprises:
downloading, based on the position and direction information of the virtual camera, the three-dimensional real-scene point cloud model data corresponding to the CAD three-dimensional drawing area from a cloud server in real time, and rendering the data into an image displayed in the background display window.
4. The method of three-dimensional real-scene CAD drawing according to claim 1 or 3, wherein the generating view content based on the camera position and direction information, the view content comprising the background display window and the drawn CAD graphics in the CAD three-dimensional drawing environment, comprises:
judging, based on the spatial position relation between the real-scene surface triangle mesh model corresponding to the three-dimensional real-scene point cloud model displayed in the background display window and the drawn CAD graphics in the CAD three-dimensional drawing environment, and in combination with the position and direction information of the virtual camera, whether the real-scene surface triangle mesh model covers the CAD graphics, and displaying the uncovered CAD graphics in the view content.
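The covering judgment of this claim can be realized, for instance, by casting a ray from the virtual camera to each vertex of a drawn CAD graphic and testing whether the real-scene surface triangle mesh is hit strictly before that vertex. The sketch below is a hedged illustration under assumptions: `_ray_tri` is the standard Möller–Trumbore test, and the name `is_covered` and the brute-force triangle loop are inventions for this example (a real CAD kernel would typically use a depth buffer or a spatial index instead).

```python
import math


def _ray_tri(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore test; returns the ray parameter t, or None on a miss."""
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None


def is_covered(camera, vertex, triangles, eps=1e-6):
    """True if the mesh blocks the line of sight from the camera to the vertex."""
    d = tuple(v - c for v, c in zip(vertex, camera))
    dist = math.sqrt(sum(x * x for x in d))
    d = tuple(x / dist for x in d)          # unit ray toward the vertex
    for tri in triangles:
        t = _ray_tri(camera, d, *tri)
        if t is not None and t < dist - eps:  # mesh hit before the vertex
            return True
    return False
```

Only the vertices for which `is_covered` returns False would then be drawn into the view content.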
5. The method of three-dimensional real-scene CAD drawing according to claim 1, wherein the generating corresponding CAD graphics in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up point in combination with the preselected CAD drawing tool comprises:
generating the corresponding CAD graphics based on the three-dimensional real-scene pick-up point in combination with the graphic generation rule corresponding to the preselected CAD drawing tool.
6. The method of three-dimensional real-scene CAD drawing according to claim 1, wherein the shape, size, and position of the background display window in the view can be adjusted based on a window manipulation request of a user.
7. The method of three-dimensional real-scene CAD drawing according to claim 1, wherein the background display window can be opened or closed in the CAD three-dimensional drawing area based on a window manipulation request of a user.
8. A three-dimensional real-scene CAD drawing apparatus, comprising:
a drawing environment construction module, used for generating a background display window in a background layer of a CAD three-dimensional drawing area, wherein the background display window faces a virtual camera of the CAD three-dimensional drawing environment and is used for displaying a three-dimensional real-scene point cloud model of a target scene;
wherein the coordinate system of the three-dimensional real-scene point cloud model displayed in the background display window is consistent with the coordinate system of the CAD three-dimensional drawing environment, and the two share the virtual camera;
the three-dimensional real-scene point cloud model of the target scene comprises a real-scene model and a point cloud model of the target scene, the coordinate systems of the real-scene model and the point cloud model are consistent, and a real-scene surface triangle mesh model is generated based on the point cloud model of the target scene;
a camera imaging module, used for acquiring position and direction information of the virtual camera in real time and generating view content based on the camera position and direction information, wherein the view content comprises the background display window and the existing CAD graphics in the CAD three-dimensional drawing environment;
a pick-up point generation module, used for acquiring a screen coordinate point selected by a user based on the three-dimensional real-scene point cloud model in the background display window, and generating a corresponding three-dimensional real-scene pick-up point in the CAD three-dimensional drawing environment based on the screen coordinate point and the real-scene surface triangle mesh model in the background display window; and
a CAD drawing module, used for generating corresponding CAD graphics in the CAD three-dimensional drawing environment based on the three-dimensional real-scene pick-up point in combination with a preselected CAD drawing tool.
9. A storage medium having stored thereon a computer program executable by a processor, wherein the computer program, when executed, implements the method of three-dimensional real-scene CAD drawing of any one of claims 1 to 7.
10. A CAD drawing apparatus having a memory and a processor, the memory having stored thereon a computer program executable by the processor, wherein the computer program, when executed, implements the method of three-dimensional real-scene CAD drawing of any one of claims 1 to 7.
CN202511074827.1A 2025-07-31 2025-08-01 Method, device, storage medium and equipment for 3D real-scene CAD drawing Pending CN120689528A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202511063025 2025-07-31
CN2025110630250 2025-07-31

Publications (1)

Publication Number Publication Date
CN120689528A true CN120689528A (en) 2025-09-23

Family

ID=97083031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202511074827.1A Pending CN120689528A (en) 2025-07-31 2025-08-01 Method, device, storage medium and equipment for 3D real-scene CAD drawing

Country Status (1)

Country Link
CN (1) CN120689528A (en)

Similar Documents

Publication Publication Date Title
CN108564527B (en) Panoramic image content completion and restoration method and device based on neural network
CN108919944B (en) Virtual roaming method for realizing data lossless interaction at display terminal based on digital city model
US6268862B1 (en) Three dimensional virtual space generation by fusing images
US6867772B2 (en) 3D computer modelling apparatus
US8042056B2 (en) Browsers for large geometric data visualization
US20090289937A1 (en) Multi-scale navigational visualtization
US20170090460A1 (en) 3D Model Generation From Map Data
US20170091993A1 (en) 3D Model Generation From Map Data and User Interface
US10140000B2 (en) Multiscale three-dimensional orientation
CN107223269A (en) Three-dimensional scene positioning method and device
CN113593027B (en) Three-dimensional avionics display control interface device
CN201780606U (en) Field three-dimensional reappearance device
CN107220372B (en) A kind of automatic laying method of three-dimensional map line feature annotation
US20160225179A1 (en) Three-dimensional visualization of a scene or environment
CN115409957A (en) Map construction method based on illusion engine, electronic device and storage medium
CN115409960A (en) Model construction method based on illusion engine, electronic device and storage medium
CN111161398A (en) Image generation method, device, equipment and storage medium
Conde et al. LiDAR data processing for digitization of the castro of Santa Trega and integration in Unreal Engine 5
Trapp et al. Strategies for visualising 3D points-of-interest on mobile devices
Tsai et al. Polygon‐based texture mapping for cyber city 3D building models
Vallance et al. Multi-Perspective Images for Visualisation.
US6518964B1 (en) Apparatus, system, and method for simplifying annotations on a geometric surface
Pan et al. Perception-motivated visualization for 3D city scenes
CN120689528A (en) Method, device, storage medium and equipment for 3D real-scene CAD drawing
CN118674880A (en) A method and device for generating a three-dimensional texture model of a city based on composite data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination