CN119850853B - A fully automatic, fast, high-precision 3D reconstruction method and system - Google Patents
- Publication number: CN119850853B (application CN202510339530.7A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- spectral
- data
- cloud data
- reflectivity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Image Generation (AREA)
Abstract
The invention discloses a fully automatic, fast, high-precision three-dimensional reconstruction method and system in the field of three-dimensional dynamic reconstruction. By constructing a spectrum-point cloud fusion model, introducing Monte Carlo ray tracing to calculate a dynamic reflectivity field, and combining the Fresnel equations with dynamic ray tracing, the reflectivity calculation better obeys physical law, improving the accuracy of spectral mapping. In addition, during three-dimensional reconstruction, the invention improves the continuity and spectral consistency of the three-dimensional model by applying Poisson surface reconstruction and spectral-reflectivity-driven mesh optimization to the spectrally enhanced point cloud data.
Description
Technical Field
The invention relates to the field of three-dimensional dynamic reconstruction, and in particular to a fully automatic, fast, high-precision three-dimensional reconstruction method and system.
Background
Three-dimensional reconstruction technology is now widely applied in computer vision, reverse engineering, medical imaging, cultural heritage protection and other fields; its main purpose is to acquire the geometric structure of an object through various imaging and measurement means and to reconstruct a high-precision three-dimensional model. Traditional three-dimensional reconstruction methods mainly include stereo vision, structured-light scanning, laser radar (LiDAR) and multi-view geometric reconstruction. The stereo vision method relies on images taken from different angles by multiple cameras, matches feature points across views of the same scene, computes depth information by triangulation, and finally builds the three-dimensional model.
Although three-dimensional reconstruction technology has developed remarkably, existing methods still face many challenges, especially in complex scenes or under high-precision requirements. First, they cannot be fully automated: the reconstruction process needs manual intervention, equipment must be adjusted by hand, and texture mapping must be done interactively after reconstruction, which is cumbersome. Moreover, the texture of the target cannot be reconstructed together with its geometric shape, so only a white geometric model is obtained, which is unfavorable for observing the target. In addition, traditional three-dimensional reconstruction methods mostly focus only on the geometric shape of the object and neglect the acquisition and use of spectral information. Because the color, material and illumination conditions of the object surface strongly influence the reconstruction result, methods lacking spectral information are easily affected by ambient illumination and the reflection characteristics of surface materials, so the reconstructed model looks insufficiently realistic and may even contain geometric errors. Second, point cloud data based on structured light, LiDAR and similar technologies often contains noise, and especially for special materials such as low-reflectivity, transparent or metallic surfaces, traditional methods struggle to remove the noise effectively and optimize point cloud quality.
In addition, because the reflectivity of the object surface is not modeled accurately, existing methods cannot faithfully simulate the spectral reflection characteristics of the object under different illumination conditions, which degrades the rendering quality and physical consistency of the model. Meanwhile, point-cloud-based surface reconstruction still has limitations in mesh generation and optimization, so the final three-dimensional model may suffer from unsmooth surfaces, discontinuous topology and similar problems. How to improve the prior art so that three-dimensional reconstruction both captures geometric information accurately and effectively incorporates spectral information, improving reconstruction accuracy and visual quality, has therefore become an important direction of current research.
Disclosure of Invention
The invention provides a fully automatic, fast, high-precision three-dimensional reconstruction method and system, which effectively remedies the shortcomings of traditional methods and improves the spectral accuracy, reflectivity consistency and dynamic adaptability of three-dimensional reconstruction.
The fully automatic, fast, high-precision three-dimensional reconstruction method comprises the following steps:
S1, synchronously acquiring spectral data and image data of the target object surface through hyperspectral imaging and RGB imaging; acquiring image-driven point cloud data through Structure-from-Motion (SfM) and a depth estimation algorithm; coordinate-aligning the spectral data and the point cloud data with a joint calibration model; and preprocessing the aligned data with adaptive noise suppression and the iterative closest point algorithm to obtain a spectral point cloud data set;
S2, constructing a spectrum-point cloud fusion model from the spectral point cloud data set and rendering the point cloud with 3D Gaussian Splatting; calculating a dynamic reflectivity field of the target object surface through Monte Carlo ray tracing; and correcting the spectral mapping error of the dynamic reflectivity field with a spectral error compensation network;
S3, reconstructing a continuous surface from the spectrally enhanced point cloud data via Poisson surface reconstruction and adjusting the triangular mesh structure with spectral-reflectivity-driven mesh optimization to obtain a three-dimensional mesh model; generating texture information carrying object surface details from the image data and the surface characteristics of the spectrally enhanced point cloud, with texture generation driven by the dynamic reflectivity field, to finally produce textures carrying spectral information; and mapping the textures onto the three-dimensional mesh model to generate the final 3D OBJ model with textures and spectral information.
Further, a fully automatic, fast, high-precision three-dimensional reconstruction system, implemented on the basis of the fully automatic, fast, high-precision three-dimensional reconstruction method described in any one of the above, comprises:
a data acquisition module for synchronously acquiring the spectral data and point cloud data of the object surface by combining hyperspectral imaging and RGB imaging with SfM, coordinate-aligning the spectral data and point cloud data with a joint calibration model, and preprocessing the aligned data with adaptive noise suppression and iterative closest point to obtain a spectral point cloud data set;
a data processing module for constructing a spectrum-point cloud fusion model from the spectral point cloud data set and obtaining spectrally enhanced point cloud data by combining the calculated dynamic reflectivity field with a spectral error compensation network;
a data reconstruction module for generating a continuous surface from the spectrally enhanced point cloud data via Poisson surface reconstruction and adjusting the triangular mesh structure with spectral-reflectivity-driven mesh optimization.
The data processing module specifically comprises a dynamic reflectivity field construction unit for calculating the dynamic reflectivity field of the target object surface through Monte Carlo ray tracing; a spectral error compensation network correction unit for correcting the spectral mapping errors of the dynamic reflectivity field; and a spectrally enhanced point cloud output unit for mapping the spectral data onto the point cloud coordinate system through the spectrum-point cloud fusion model to form the spectrally enhanced point cloud data.
A computer readable storage medium storing a computer program which, when run on a computer, causes the computer to perform a fully automated, fast, high precision three-dimensional reconstruction method as claimed in any one of the preceding claims.
An electronic device, comprising:
A memory for storing a computer program;
a processor for executing the computer program to implement a fully automatic, fast, high precision three-dimensional reconstruction method as described in any one of the above.
The invention has the beneficial effects that:
(1) By constructing the spectrum-point cloud fusion model, introducing Monte Carlo ray tracing to calculate the dynamic reflectivity field, and combining the Fresnel equations with dynamic ray tracing, the reflectivity calculation better obeys physical law, improving the accuracy of spectral mapping. In addition, to further reduce the spectral error, the invention optimizes the spectral data with an error correction matrix produced by the spectral error compensation network, reducing error accumulation caused by factors such as measurement angle and surface roughness;
(2) During three-dimensional reconstruction, the continuity and spectral consistency of the three-dimensional model are improved by applying Poisson surface reconstruction and spectral-reflectivity-driven mesh optimization to the spectrally enhanced point cloud data. Compared with traditional Delaunay triangulation or direct point cloud interpolation, the method uses the spectral data as an optimization constraint so that the spectral reflectance characteristics of the mesh structure remain consistent with those of the real object, providing a more realistic and accurate three-dimensional reconstruction result in dynamic scenes;
(3) The target is imaged over 720 degrees by multiple cameras simultaneously, the background is removed from the images with a fully automatic AI-based matting technique, and the latest 3D Gaussian Splatting rendering technique is used for three-dimensional reconstruction, thereby achieving fully automatic, fast, high-precision three-dimensional reconstruction.
Drawings
FIG. 1 is a flow chart of a fully automatic, fast, high-precision three-dimensional reconstruction method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a fully automatic, fast, high-precision three-dimensional reconstruction system according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to the accompanying drawings, but the scope of the present invention is not limited to the following description.
For the purpose of making the technical solution and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention. It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with the examples.
Example 1
As shown in FIG. 1, a fully automatic, fast, high-precision three-dimensional reconstruction method comprises the following steps:
S1, synchronously acquiring the spectral data and point cloud data of the object surface by combining hyperspectral imaging and RGB imaging with SfM, coordinate-aligning the spectral data and point cloud data with a joint calibration model, and preprocessing the aligned data with adaptive noise suppression and iterative closest point to obtain a spectral point cloud data set;
S2, constructing a spectrum-point cloud fusion model from the spectral point cloud data set and rendering the point cloud with 3D Gaussian Splatting; calculating a dynamic reflectivity field of the target object surface through Monte Carlo ray tracing; and correcting the spectral mapping error of the dynamic reflectivity field with a spectral error compensation network;
S3, reconstructing a continuous surface from the spectrally enhanced point cloud data via Poisson surface reconstruction and adjusting the triangular mesh structure with spectral-reflectivity-driven mesh optimization; generating texture information carrying object surface details from the image data and the surface characteristics of the spectrally enhanced point cloud, with texture generation driven by the dynamic reflectivity field, to finally produce textures carrying spectral information; and mapping the textures onto a three-dimensional mesh model to generate the final 3D OBJ model with textures and spectral information.
Specifically, the specific implementation principle of the above embodiment is as follows:
S1, spectral data and image data of the target object surface are acquired synchronously through hyperspectral imaging and RGB imaging, and image-driven point cloud data is acquired through SfM (Structure from Motion) combined with a depth estimation algorithm. The spectral data and point cloud data are coordinate-aligned with a joint calibration model, and the aligned data is preprocessed with adaptive noise suppression and iterative closest point to obtain a spectral point cloud data set. Preferably, because noise interference and coordinate errors occur during acquisition, the noise in the spectral and point cloud data is removed with an adaptive noise suppression method, the coordinate alignment accuracy is optimized with the Iterative Closest Point (ICP) algorithm, and a high-quality spectral point cloud data set is finally formed.
S2, a spectrum-point cloud fusion model is constructed from the spectral point cloud data set, the point cloud is rendered with 3D Gaussian Splatting, the dynamic reflectivity field of the target object surface is calculated through Monte Carlo ray tracing, the spectral mapping error of the dynamic reflectivity field is corrected with a spectral error compensation network, and the corrected spectral data is mapped into the point cloud coordinate system through the spectrum-point cloud fusion model to form spectrally enhanced point cloud data. Because the spectral reflection characteristics of the object surface depend not only on its material but also on ambient illumination, the dynamic reflectivity field is calculated with the Monte Carlo ray tracing method, which estimates the reflectance distribution at different wavelengths by simulating multiple reflections and scattering of rays on the object surface. However, because of the complexity of the light propagation model and the effect of measurement errors, the mapping of the spectral data may still contain errors, so a spectral error compensation network is further introduced: the network is trained to minimize the errors of the spectral data and outputs an error correction matrix. The correction matrix compensates the spectral mapping errors in the dynamic reflectivity field, improving the accuracy of the fusion of spectral and point cloud data. The spectrum-point cloud fusion model then maps the corrected spectral data into the point cloud coordinate system so that each point cloud data point contains accurate spectral information, forming the spectrally enhanced point cloud data.
S3, a continuous surface is reconstructed from the spectrally enhanced point cloud data via Poisson surface reconstruction, and the triangular mesh structure is adjusted with spectral-reflectivity-driven mesh optimization to obtain a three-dimensional mesh model. Texture information carrying object surface details is generated from the image data and the surface characteristics of the spectrally enhanced point cloud, texture generation is driven by the dynamic reflectivity field to finally produce textures carrying spectral information, and the textures are mapped onto the three-dimensional mesh model to generate the final 3D OBJ model with textures and spectral information. Because of the discreteness of the point cloud data, the directly reconstructed surface may contain geometric errors, so a spectral-reflectivity-driven mesh optimization strategy is further introduced to adjust the triangular mesh structure. During mesh optimization, the normal vector of each point in the spectrally enhanced point cloud data is calculated first, and an initial triangular mesh is constructed from the normals. The Poisson-reconstructed surface is further discretized into triangular patches by Delaunay triangulation, and the spectral reflectance value of each patch is determined by the spectral information of the adjacent point cloud data. Preferably, to improve the spectral consistency and geometric accuracy of the model, a Lagrange multiplier optimization algorithm matches the normal vector of each triangular patch to the normal vectors of the point cloud data, and the mesh vertex positions are adjusted by iterative optimization to finally obtain a high-accuracy spectrally enhanced three-dimensional reconstruction model.
Further, in step S1, the specific implementation of obtaining the spectral point cloud data set is as follows:
Assume each point cloud data point has spatial coordinates (x_i, y_i, z_i) and corresponding spectral data s_i(λ_j) at wavelengths λ_1, λ_2, ..., λ_m. The spectral data is mapped into a low-dimensional space by principal component analysis while preserving its geometric structure, and this mapping combines the spectral data with the point cloud coordinate data to generate the fused spectral point cloud data set. Preferably, the point cloud data and spectral data are jointly trained with a deep learning technique (such as a multi-modal neural network) to learn the complex relationship between them; this allows the weights of the spectral features to be adjusted automatically according to the geometric characteristics of the point cloud, so that the fused result better reflects the real optical properties of the object. The resulting spectral point cloud data set comprises coordinate values and spectral values: the coordinate values describe the spatial position of points on the object surface as three-dimensional coordinates (x, y, z), carrying the geometric shape and structure of the object, while the spectral values describe the reflectance of the object surface in different wavebands, reflecting the absorption and reflection characteristics of materials in different spectral regions. Further, by fusing the point cloud and spectral data, a data set integrating spatial information (surface shape) and spectral information (texture and surface reflectivity) is obtained, enabling more accurate surface analysis, defect detection, texture classification and surface attribute evaluation of the object.
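The PCA-based fusion described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function and variable names are hypothetical, and a multi-modal network, as the text prefers, would replace the fixed PCA projection with learned weights.

```python
import numpy as np

def fuse_spectral_point_cloud(xyz, spectra, n_components=3):
    """Fuse per-point spectra with 3D coordinates.

    xyz:     (N, 3) point coordinates (x, y, z)
    spectra: (N, m) reflectance of each point at m wavelengths
    Returns an (N, 3 + n_components) array: the coordinates plus
    the leading principal components of the spectral data.
    """
    centered = spectra - spectra.mean(axis=0)
    # PCA via SVD of the centered spectral matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ vt[:n_components].T   # (N, n_components)
    return np.hstack([xyz, reduced])
```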
Further, in step S2, calculating the dynamic reflectivity field of the target object surface through Monte Carlo ray tracing specifically comprises the following sub-steps:
S2011, calculating the ray-traced reflectivity of each point cloud data point with the Fresnel formula, and aggregating the reflectivities of all points on the target object surface to obtain an initial reflectivity field;
S2012, setting, according to the surface material type of the target object, the reflectivity coefficients of that material type at different wavelengths;
S2013, calculating the dynamic reflectivity field from the set reflectivity coefficients and the initial reflectivity field.
Further, the Fresnel formula is specifically expressed as:
R(λ, θ_i) = (1/2) [ ((cos θ_i − n cos θ_t) / (cos θ_i + n cos θ_t))² + ((n cos θ_i − cos θ_t) / (n cos θ_i + cos θ_t))² ];
Wherein R(λ, θ_i) indicates the reflectivity at wavelength λ and incidence angle θ_i, n represents the refractive index of the object, and θ_t represents the refraction angle, obtained from Snell's law sin θ_t = sin θ_i / n. Specifically, the reflectance value of each point on the target surface is calculated by repeatedly simulating the light reflection process with the Monte Carlo method. In addition, for the surface material type of the target object, material parameters are introduced into the reflectivity calculation according to the material (such as metal, ceramic or plastic). These material properties (e.g., gloss, absorption coefficient, refractive index) directly affect the calculation of the reflectivity field. For each material, the reflectance formula is adjusted according to the spectral characteristics of that material, thereby optimizing the calculation of the reflectivity field.
The metal surface has a strong specular reflection characteristic whose reflectivity varies with the incidence angle, so its reflected light and reflectivity can be calculated with a specular reflection model, whereas the ceramic material surface reflects mostly diffuse light, so its calculation is based on the Lambertian reflection model.
Further, the step S2011 specifically includes the following sub-steps:
S20111, calculating the normal vector of each point cloud data point;
S20112, setting the incident-ray and reflected-ray directions of each point cloud data point, calculating the incidence angle from the normal and ray directions, and calculating the reflection angle from the angle between the reflected ray and the surface normal;
S20113, calculating the ray-traced reflectivity of each point cloud data point with the Fresnel formula from the calculated incidence and reflection angles;
S20114, aggregating the reflectivity of each point to obtain the initial reflectivity field of the entire target object surface.
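The sub-steps S20111–S20114 can be sketched as follows. This is a simplified single-bounce illustration with hypothetical names, not the patent's Monte Carlo implementation; unpolarized Fresnel reflectance and unit normals are assumed. At normal incidence with n = 1.5 it yields the familiar ((n−1)/(n+1))² = 0.04.

```python
import numpy as np

def fresnel_reflectance(cos_i, n):
    """Unpolarized Fresnel reflectance entering a medium of
    refractive index n from air, given cos(incidence angle)."""
    sin_t = np.sqrt(np.clip(1.0 - cos_i ** 2, 0.0, 1.0)) / n  # Snell's law
    cos_t = np.sqrt(np.clip(1.0 - sin_t ** 2, 0.0, 1.0))
    r_s = ((cos_i - n * cos_t) / (cos_i + n * cos_t)) ** 2    # s-polarized
    r_p = ((n * cos_i - cos_t) / (n * cos_i + cos_t)) ** 2    # p-polarized
    return 0.5 * (r_s + r_p)

def initial_reflectivity_field(normals, light_dir, n=1.5):
    """Per-point reflectivity from (N, 3) unit normals and a shared
    incident-light direction (pointing toward the light source)."""
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    cos_i = np.clip(normals @ light, 1e-6, 1.0)  # incidence angle per point
    return fresnel_reflectance(cos_i, n)         # aggregated field (S20114)
```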
Further, in the step S2013, a specific process of calculating the dynamic reflectivity field is shown as follows:
R_dyn(p_i, λ) = k_mat(λ) · R_0(p_i, λ);
Wherein R_dyn(p_i, λ) represents the dynamic reflectivity field of point cloud data point p_i at wavelength λ, λ represents the wavelength, p_i represents the spatial coordinates of the point cloud data point, k_mat(λ) represents the reflectivity coefficient of the target object's surface material type at wavelength λ, and R_0(p_i, λ) represents the initial reflectivity field of point p_i at wavelength λ. Specifically, the initial reflectivity field R_0 is a function of spatial coordinates and wavelength and is related to the material properties.
Further, R(λ, θ_i) represents the reflectivity at a given wavelength and incidence angle and describes the reflective behavior of light interacting with the surface, which in the above embodiment is based on the optical properties of the material, whereas R(p, λ) represents the spectral reflectance of each point on the target object surface, i.e. the reflection characteristic at a spatial location for a given wavelength. Each point on the target surface must be associated with an incidence angle, which is related to the propagation path of the light and to the surface normal; the reflectivity depends on the incidence and reflection angles, which in turn are related to the surface normal of each point. For a surface point whose normal vector is derived from the point cloud data, with incident light direction ω_i and reflected light direction ω_r, the reflectance at a given wavelength is calculated by the spectral reflectivity formula, namely:
R(p, λ) = R(λ, θ_i) · G(ω_i, ω_r, n);
Wherein ω_i, ω_r and n represent the incident light, the reflected light and the surface normal, respectively, and G(ω_i, ω_r, n) represents the geometric factor describing the relationship between the incident light, the reflected light and the surface normal; it is computed from the reflection model, for example the Lambertian reflection model or the Phong reflection model.
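A minimal sketch of the geometric factor relating the incident light, reflected light and surface normal under the two reflection models mentioned above (Lambertian and Phong). The function name and the shininess parameter are hypothetical, and all direction vectors are assumed to be unit length.

```python
import numpy as np

def geometric_factor(w_i, w_r, n, model="lambertian", shininess=16):
    """Geometric factor G(w_i, w_r, n).

    w_i: unit vector toward the light, w_r: unit reflected/viewing
    direction, n: unit surface normal.
    """
    if model == "lambertian":
        # Diffuse: depends only on the incidence angle
        return max(0.0, float(np.dot(w_i, n)))
    if model == "phong":
        # Specular lobe around the mirror direction of w_i about n
        mirror = 2.0 * float(np.dot(w_i, n)) * n - w_i
        return max(0.0, float(np.dot(mirror, w_r))) ** shininess
    raise ValueError(f"unknown model: {model}")
```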
Further, in step S2, correcting the spectral mapping error of the dynamic reflectivity field with the spectral error compensation network specifically comprises the following sub-steps:
S2021, taking the minimization of the spectral data error as the optimization target, defining the error function of the spectral error compensation network, and training the network by minimizing this error function;
S2022, outputting an error correction matrix through the trained spectral error compensation network;
S2023, correcting the spectral data of each point cloud data point with the error correction matrix.
Specifically, the spectral error compensation network is trained with a deep learning method and learns how to correct errors by comparison with real reflectance data. The network's input is the raw spectral reflectance data, and its output is the error-compensated spectral data. For the reflectance value of each point cloud data point, the network identifies errors from its spectral characteristics and automatically adjusts the corresponding values. Illustratively, if the reflectance values in certain bands are too high or too low, the network adjusts them according to the compensation strategy learned during training, thereby improving the accuracy of the spectral data.
Further, in step S2021, the error function of the spectral error compensation network is expressed as:
E(C) = Σ_{i=1}^{N} ‖ C s_i − s_i^{true} ‖²;
Wherein E(C) represents the error function of the spectral error compensation network, C represents the error correction matrix, N represents the total number of point cloud data points, i is the index of a point cloud data point, s_i represents the raw spectral data corresponding to the i-th point cloud data point, obtained from historical data, and s_i^{true} represents the real spectral data corresponding to the i-th point cloud data point, obtained by actual measurement.
Further, in the step S2022, the error correction matrix is expressed as:
C* = argmin_C E(C) = argmin_C Σ_{i=1}^{N} ‖ C s_i − s_i^{true} ‖²;
That is, solving for the error correction matrix C* that minimizes the error function: finding an optimal C such that the corrected spectral data C s_i is closest to the real spectral data.
Specifically, the error correction matrix is used to correct the spectral value at each wavelength. Assume the error correction matrix C is an n × n matrix, where n is the number of wavelengths at which each point is measured; each element c_{ij} of C is the correction coefficient that weights the raw spectral value at the j-th wavelength when computing the corrected spectral value at the i-th wavelength. Illustratively, for n = 3, the correction matrix C and the raw spectral data S^raw are as follows:
C = \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{pmatrix};
S^{\mathrm{raw}} = \begin{pmatrix} s_{1} \\ s_{2} \\ s_{3} \end{pmatrix};
The corrected spectral data S^corrected are obtained through matrix multiplication:
S^{\mathrm{corrected}} = C\,S^{\mathrm{raw}} = \begin{pmatrix} c_{11}s_{1} + c_{12}s_{2} + c_{13}s_{3} \\ c_{21}s_{1} + c_{22}s_{2} + c_{23}s_{3} \\ c_{31}s_{1} + c_{32}s_{2} + c_{33}s_{3} \end{pmatrix}.
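As an illustrative sketch (not the patent's implementation), the least-squares minimisation above has a closed-form solution that can be computed with NumPy; the data, matrix values, and variable names below are all hypothetical:

```python
import numpy as np

# Toy setting: n = 3 wavelengths, 50 point-cloud samples.
# Columns of S_raw / S_real hold one spectrum per point; C maps raw -> corrected.
rng = np.random.default_rng(0)
C_true = np.array([[0.90, 0.05, 0.00],
                   [0.00, 1.10, 0.02],
                   [0.03, 0.00, 0.95]])
S_raw = rng.uniform(0.1, 1.0, size=(3, 50))  # raw spectra, one column per point
S_real = C_true @ S_raw                      # synthetic "measured" ground truth

# Minimise sum_i ||C s_i_raw - s_i_real||^2 via least squares.
# Transposing turns it into the standard  S_raw.T @ C.T ≈ S_real.T  problem.
C_est, *_ = np.linalg.lstsq(S_raw.T, S_real.T, rcond=None)
C_est = C_est.T

S_corrected = C_est @ S_raw                  # apply the estimated correction
```

Because the synthetic targets are generated exactly by `C_true`, the least-squares estimate recovers it; with noisy real data the estimate would instead minimise the residual error.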
Further, the spectrum enhanced point cloud data includes point cloud coordinates and a spectral reflectance of each point, that is:
P_i = \left( x_i,\; y_i,\; z_i,\; R_i(\lambda_1),\; \ldots,\; R_i(\lambda_m) \right);
where P_i denotes the spectrally enhanced point cloud data; x_i, y_i, z_i denote the coordinates of point cloud data point p_i; R_i(λ_k) denotes the spectral reflectance of the i-th point cloud data point at wavelength λ_k; λ_m denotes the m-th wavelength; and i indexes the point cloud data.
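A minimal way to hold such spectrally enhanced point cloud data in memory — assuming a plain array layout rather than the patent's internal format — is one row per point, with the coordinates followed by the per-wavelength reflectances:

```python
import numpy as np

# Hypothetical example: N = 3 points, m = 4 wavelengths.
m = 4
coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])          # (x_i, y_i, z_i)
reflectance = np.array([[0.2, 0.3, 0.4, 0.5],
                        [0.1, 0.2, 0.3, 0.4],
                        [0.5, 0.5, 0.5, 0.5]])  # R_i(λ_1..λ_m)

# Row i is P_i = (x_i, y_i, z_i, R_i(λ_1), ..., R_i(λ_m)).
P = np.hstack([coords, reflectance])           # shape (N, 3 + m)
```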
Further, the step S3 specifically includes the following substeps:
S301, calculate the normal vector of each point in the spectrally enhanced point cloud data;
S302, generate a three-dimensional surface through a Poisson reconstruction algorithm based on the normal vector data of the point cloud;
S303, discretize the Poisson-reconstructed surface into a triangular mesh through Delaunay triangulation, where the mesh consists of triangular patches and each patch corresponds to three adjacent points in the point cloud, each point carrying a spectral reflectance value;
S304, take the consistency between the normal vector of each triangular patch and the normal vector of the corresponding point in the point cloud data as the objective function of the spectrum-driven optimization, and minimize the objective function through a Lagrange multiplier optimization algorithm;
S305, optimize the mesh structure by iteratively updating the positions of the mesh vertices.
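Step S301 is standard local-PCA normal estimation: the normal at each point is the eigenvector of the local covariance matrix with the smallest eigenvalue. The sketch below (brute-force neighbour search, hypothetical function name, not the patent's implementation) illustrates the idea:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Per-point normals via PCA over the k nearest neighbours.

    The normal is the eigenvector of the local covariance matrix with
    the smallest eigenvalue, i.e. the locally "flattest" direction.
    Sign/orientation is left unresolved in this sketch.
    """
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]          # k nearest (includes p itself)
        cov = np.cov(nbrs.T)                      # 3x3 local covariance
        eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
        normals[i] = eigvecs[:, 0]                # smallest-eigenvalue direction
    return normals

# Sanity check: points sampled on the z = 0 plane should all get ±(0, 0, 1).
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50), np.zeros(50)])
n = estimate_normals(pts)
```

In practice a k-d tree replaces the brute-force search, and normals are consistently oriented (e.g. toward the sensor) before being fed to Poisson reconstruction.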
Further, in the step S304, the objective function of the spectrum driving optimization is specifically expressed as:
F = \sum_{j \in T} \left\| n^{\mathrm{face}}_{j} - n^{\mathrm{point}}_{i(j)} \right\|^{2} + \alpha \sum_{j \in T} \sum_{k=1}^{m} \left( R_{i(j)}(\lambda_k) - R^{\mathrm{face}}_{j}(\lambda_k) \right)^{2};
where F denotes the objective function of the spectrum-driven optimization; n^face_j denotes the normal vector of triangular patch f_j; n^point_{i(j)} denotes the normal vector of the point cloud data point p_{i(j)} corresponding to patch f_j; f_j denotes a patch and T the set of all patches; α denotes the influence factor of the spectral reflectance on the mesh optimization; R_{i(j)}(λ_k) denotes the spectral reflectance of point p_{i(j)} at wavelength λ_k; R^face_j(λ_k) denotes the spectral reflectance of patch f_j at wavelength λ_k; and λ_m denotes the m-th wavelength. Specifically, the mesh vertex positions are adjusted through a spectral-reflectance-driven optimization method to improve mesh quality; the optimization goal is to make the normal vector of each triangular patch consistent with the normal vector of the corresponding point in the point cloud data, thereby maintaining the consistency of geometric and spectral information. The objective function comprises two parts: the first is a geometric consistency term, which ensures that the normal vectors of the triangular mesh patches agree with those of the point cloud data; the second is a spectral consistency term, which ensures that the spectral reflectance data are effectively transferred to and preserved in the mesh.
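To make the two terms concrete, the following toy sketch evaluates an objective of this form, assuming one corresponding point per patch (the pairing scheme and all values are illustrative assumptions; the patent does not fix them):

```python
import numpy as np

def spectral_objective(face_normals, point_normals, face_refl, point_refl, alpha=0.1):
    """F = sum_j ||n_j^face - n_j^point||^2
           + alpha * sum_j sum_k (R_j^point(λ_k) - R_j^face(λ_k))^2
    Rows pair each patch with its corresponding point."""
    geom = np.sum((face_normals - point_normals) ** 2)   # geometric consistency
    spec = alpha * np.sum((point_refl - face_refl) ** 2)  # spectral consistency
    return geom + spec

# Two patches, two wavelengths: normals perfectly aligned,
# one reflectance band off by 0.1.
face_n  = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
point_n = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
face_r  = np.array([[0.5, 0.6], [0.4, 0.4]])
point_r = np.array([[0.5, 0.6], [0.4, 0.5]])
F = spectral_objective(face_n, point_n, face_r, point_r, alpha=0.1)
```

Here the geometric term vanishes and the spectral term contributes α · 0.1² = 0.001, showing how α trades off the two consistency goals.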
Further, as a preferred implementation manner of the foregoing embodiment, a method for verifying and post-processing an optimized grid is provided, where the method includes the following steps:
S401, verify the geometric quality of the optimized mesh through a mesh smoothness index and triangle shape quality, using the Delaunay property and the edge-length ratio as evaluation criteria;
S402, perform spectral consistency verification by calculating the error between the spectral reflectance of each patch and the reflectance of the adjacent point cloud data points;
S403, output the optimized mesh as a three-dimensional mesh file.
Further, in the step S401, the specific process of verifying the geometric quality of the optimized mesh through the mesh smoothness index and the triangle shape quality is to verify the geometric accuracy of the surface reconstruction by calculating the error between the mesh vertices and the original point cloud data, namely:
E_{\mathrm{geo}} = \frac{1}{N} \sum_{i=1}^{N} \left\| v_i - p_i \right\|^{2};
where E_geo denotes the error between the mesh vertices and the original point cloud data, and v_i denotes the vertex of the optimized mesh closest to point p_i, extracted by means of random sampling.
Further, the specific flow of step S402 is shown as follows:
E_{\mathrm{spec}} = \frac{1}{M} \sum_{i=1}^{M} \sum_{k=1}^{m} \left( R^{\mathrm{face}}_{i}(\lambda_k) - R^{\mathrm{point}}_{i}(\lambda_k) \right)^{2};
where E_spec denotes the error between the spectral reflectance of each patch and the reflectance of the adjacent point cloud data points, f_i denotes the i-th patch, and M denotes the total number of patches.
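The two verification errors of steps S401–S402 can be sketched as follows (brute-force nearest-vertex search and a hypothetical patch-to-point pairing; a minimal illustration rather than the patent's procedure):

```python
import numpy as np

def geometric_error(vertices, points):
    """E_geo: mean squared distance from each point to its nearest mesh vertex."""
    d2 = ((points[:, None, :] - vertices[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def spectral_error(face_refl, nearest_point_refl):
    """E_spec: per-patch squared reflectance difference to the adjacent
    point-cloud point, summed over wavelengths, averaged over patches."""
    return ((face_refl - nearest_point_refl) ** 2).sum(axis=1).mean()

# Tiny worked example.
verts  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
pts    = np.array([[0.1, 0.0, 0.0], [1.0, 0.0, 0.0]])
e_geo  = geometric_error(verts, pts)     # (0.1^2 + 0) / 2 = 0.005
fr     = np.array([[0.5, 0.5]])
pr     = np.array([[0.4, 0.5]])
e_spec = spectral_error(fr, pr)          # 0.1^2 = 0.01
```

For large meshes the O(N·V) distance matrix would be replaced by a spatial index (e.g. a k-d tree), but the error definitions are unchanged.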
Embodiment Two
Further, as a preferred implementation manner of the foregoing embodiment, a fully automatic, fast, high-precision three-dimensional reconstruction system is provided, including:
The data acquisition module is used for synchronously acquiring the spectral data and point cloud data of the object surface by combining hyperspectral imaging and RGB imaging with the Structure-from-Motion technique, aligning the coordinates of the spectral data and the point cloud data through a joint calibration model, and preprocessing the aligned data through adaptive noise suppression and the iterative closest point algorithm to obtain a spectral point cloud data set;
The data processing module is used for constructing a spectrum–point cloud fusion model from the spectral point cloud data set, calculating a dynamic reflectivity field of the target object surface through Monte Carlo ray tracing, and correcting the spectral mapping error of the dynamic reflectivity field with a spectral error compensation network;
The data reconstruction module is used for performing Poisson surface reconstruction on the spectrally enhanced point cloud data to generate a continuous surface, and for optimizing and adjusting the triangular mesh structure through spectral-reflectance-driven mesh optimization.
Further, the data processing module specifically includes a dynamic reflectivity field building unit, a spectrum error compensation network correction unit and a spectrum enhancement point cloud data output unit, wherein:
The dynamic reflectivity field construction unit specifically further comprises:
the initial reflectivity field calculation subunit is used for calculating the reflectivity of the ray tracing of each point cloud data according to a Fresnel formula, and summarizing the reflectivity of the ray tracing of all the point cloud data on the surface of the target object to obtain an initial reflectivity field;
The reflectivity coefficient calculating subunit is used for setting the reflectivity coefficient of the material type under different wavelengths according to the surface material type of the target object;
And the dynamic reflectivity field calculating subunit is used for calculating a dynamic reflectivity field according to the set reflectivity coefficient and the initial reflectivity field.
The specific flow of calculating the dynamic reflectivity field is expressed as follows:
R_{\mathrm{dyn}}(p, \lambda) = k(\lambda) \cdot R_{\mathrm{init}}(p, \lambda);
where R_dyn(p, λ) denotes the dynamic reflectivity field of point cloud data p at wavelength λ; λ denotes the wavelength; p denotes the spatial coordinates of the point cloud data; k(λ) denotes the reflectivity coefficient of the target object's surface material type at wavelength λ; and R_init(p, λ) denotes the initial reflectivity field of point cloud data p at wavelength λ.
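As an illustration of this flow, the sketch below computes an initial per-point reflectance from the normal-incidence Fresnel formula and scales it by per-wavelength material coefficients k(λ). The refractive indices and coefficients are invented for the example, and the patent's Fresnel computation may use the full angle-dependent form:

```python
import numpy as np

def fresnel_normal_incidence(n1, n2):
    """Fresnel reflectance at normal incidence: R0 = ((n1 - n2) / (n1 + n2))^2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Initial reflectivity field for 3 points: air (n1 = 1.0) over materials
# with hypothetical refractive indices assigned per point.
n_material = np.array([1.5, 1.33, 2.4])      # e.g. glass-, water-, diamond-like
R_init = fresnel_normal_incidence(1.0, n_material)

# Hypothetical per-wavelength coefficients k(λ) for 3 wavelengths, so that
# R_dyn(p, λ) = k(λ) * R_init(p) via broadcasting.
k_lambda = np.array([0.9, 1.0, 1.1])
R_dyn = k_lambda[None, :] * R_init[:, None]  # shape (points, wavelengths)
```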
Further, the spectral error compensation network correction unit specifically further includes:
An optimization target definition subunit, configured to define an error function of the spectrum error compensation network by using the minimized error of the spectrum data as an optimization target, and train the spectrum error compensation network by using the minimized error function;
the network training subunit is used for outputting an error correction matrix through the trained spectrum error compensation network;
and the error correction subunit is used for correcting the spectrum data of each point cloud data point according to the error correction matrix.
Further, the error function of the spectral error compensation network is expressed as:
E(C) = \sum_{i=1}^{N} \left\| C\,S_i^{\mathrm{raw}} - S_i^{\mathrm{real}} \right\|^{2};
where E(C) denotes the error function of the spectral error compensation network; C denotes the error correction matrix; N denotes the total number of point cloud data points; i denotes the index of a point cloud data point; S_i^raw denotes the raw spectral data corresponding to the i-th point cloud data point p_i, obtained from historical data; and S_i^real denotes the real spectral data corresponding to p_i, obtained through actual measurement.
Further, the error correction matrix is expressed as:
C^{*} = \arg\min_{C} \sum_{i=1}^{N} \left\| C\,S_i^{\mathrm{raw}} - S_i^{\mathrm{real}} \right\|^{2};
where C^{*} denotes the error correction matrix that minimizes the error function, i.e., the optimal C for which the corrected spectral data C\,S_i^raw are closest to the real spectral data.
Further, the spectrum enhanced point cloud data includes point cloud coordinates and a spectral reflectance of each point, that is:
P_i = \left( x_i,\; y_i,\; z_i,\; R_i(\lambda_1),\; \ldots,\; R_i(\lambda_m) \right);
where P_i denotes the spectrally enhanced point cloud data; x_i, y_i, z_i denote the coordinates of point cloud data point p_i; R_i(λ_k) denotes the spectral reflectance of the i-th point cloud data point at wavelength λ_k; λ_m denotes the m-th wavelength; and i indexes the point cloud data.
Further, the data reconstruction module specifically includes:
The normal vector calculation unit is used for calculating the normal vector of each point in the spectrum enhancement point cloud data;
the three-dimensional reconstruction unit is used for generating a three-dimensional surface through a poisson reconstruction algorithm based on normal vector data of the point cloud;
a construction processing unit for discretizing the poisson reconstruction surface into a triangular mesh by Delaunay triangulation, the triangular mesh being composed of triangular patches, each patch corresponding to three adjacent points in a point cloud, wherein each point has one spectral reflectance value;
The spectrum-driven optimization unit is used for taking the consistency between the normal vector of each triangular patch and the normal vector of the corresponding point in the point cloud data as the objective function of spectrum-driven optimization, and minimizing the objective function through a Lagrange multiplier optimization algorithm;
The updating iteration optimization unit is used for optimizing the grid structure by iteratively updating the positions of the grid vertexes;
The texture mapping generation unit is used for driving texture generation through a dynamic reflectivity field, finally generating textures with spectrum information, mapping the textures onto a three-dimensional grid model, and generating a final 3D OBJ model with the textures and the spectrum information.
Further, the objective function of the spectrum driving optimization is specifically expressed as:
F = \sum_{j \in T} \left\| n^{\mathrm{face}}_{j} - n^{\mathrm{point}}_{i(j)} \right\|^{2} + \alpha \sum_{j \in T} \sum_{k=1}^{m} \left( R_{i(j)}(\lambda_k) - R^{\mathrm{face}}_{j}(\lambda_k) \right)^{2};
where F denotes the objective function of the spectrum-driven optimization; n^face_j denotes the normal vector of triangular patch f_j; n^point_{i(j)} denotes the normal vector of the point cloud data point p_{i(j)} corresponding to patch f_j; f_j denotes a patch and T the set of all patches; α denotes the influence factor of the spectral reflectance on the mesh optimization; R_{i(j)}(λ_k) denotes the spectral reflectance of point p_{i(j)} at wavelength λ_k; R^face_j(λ_k) denotes the spectral reflectance of patch f_j at wavelength λ_k; and λ_m denotes the m-th wavelength.
Embodiment Three
On the basis of the first embodiment, this embodiment provides a fully automatic, rapid, high-precision three-dimensional reconstruction device; a hardware design schematic of the device is shown in fig. 2. Target images are rapidly acquired through synchronized acquisition by multiple cameras, large-depth-of-field imaging is realized through stacked acquisition, and background interference is reduced through artificial-intelligence automatic matting, further accelerating the reconstruction process while achieving high-precision three-dimensional reconstruction of the target.
The working flow is as follows:
1. Vertical 360-degree images of the target object are synchronously acquired through multiple vertically arranged auto-zoom, auto-focus cameras, and horizontal 360-degree images are acquired through rotation of the turntable, realizing 720-degree omnidirectional imaging. An artificial intelligence algorithm automatically removes the background to ensure that the image data contain only the target object, and SFM (Structure-from-Motion) technology recovers the camera poses at the time of image capture. The point cloud density is optimized in combination with a depth estimation algorithm to obtain image-driven high-precision point cloud data. Hyperspectral imaging synchronously acquires the spectral information of the object surface, and the spectral data are aligned with the image point cloud coordinates through a joint calibration model to form a spectral point cloud data set;
2. Establish a spectrum–point cloud fusion model, and build a spectrally enhanced point cloud model through the mapping relation between point cloud coordinates and spectral data, where optimized 3D Gaussian Splatting is adopted for point cloud rendering so that the spectral data are uniformly distributed in three-dimensional space;
3. Calculate the dynamic spectral reflectivity field (D-SRF) of the target object surface through Monte Carlo ray tracing, compute the error correction matrix with the spectral error compensation network, and optimize the spectral mapping precision;
4. Map the spectral data to the point cloud coordinate system through the spectrum–point cloud fusion model to form high-precision spectrally enhanced point cloud data.
5. Convert the spectrally enhanced point cloud into a continuous 3D surface model with the Poisson surface reconstruction algorithm, and optimize the triangular mesh driven by spectral reflectance, adjusting the mesh structure so that the spectral data are distributed more uniformly over the geometric surface. Texture information carrying object-surface detail is generated from the image data (obtained during image acquisition) and the surface characteristics of the spectrally enhanced point cloud from the previous step; texture generation is driven by the dynamic reflectivity field (the corrected spectral reflectance data) in combination with the image and spectral data so that the visual appearance of the texture is consistent with the spectral information. Finally, a texture carrying spectral information is generated and mapped onto the three-dimensional mesh model to produce the final 3D OBJ model with texture and spectral information.
Embodiment Four
On the basis of the first embodiment, this embodiment proposes a terminal device for fully automatic, rapid, high-precision three-dimensional reconstruction, where the terminal device includes at least one memory, at least one processor, and a bus connecting the different platform systems.
The memory may include readable media in the form of volatile memory, such as RAM211 and/or cache memory, and may further include ROM213.
The memory further stores a computer program, and the computer program may be executed by the processor, so that the processor executes any one of the above-mentioned full-automatic, rapid, and high-precision three-dimensional reconstruction methods in the embodiments of the present application, and a specific implementation manner of the method is consistent with an implementation manner and an achieved technical effect described in the embodiments of the method, and some of the contents are not repeated. The memory may also include programs/utilities having a set (at least one) of program modules including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Accordingly, the processor may execute the above-described computer program, as well as the program/utility.
The bus may be one or more of several types of bus structures including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The terminal device may also communicate with one or more external devices, such as a keyboard, pointing device, bluetooth device, etc., as well as with one or more devices capable of interacting with the terminal device, and/or with any device (e.g., router, modem, etc.) that enables the terminal device to communicate with one or more other computing devices. Such communication may be through an I/O interface. And the terminal device may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the internet, via a network adapter. The network adapter may communicate with other modules of the terminal device via a bus. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the terminal device, including, but not limited to, microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, among others.
Embodiment Five
On the basis of the first embodiment, the present embodiment proposes a fully automatic, fast, high-precision three-dimensional reconstruction computer-readable storage medium, on which instructions are stored, which when executed by a processor implement any one of the above-mentioned fully automatic, fast, high-precision three-dimensional reconstruction methods. The specific implementation manner of the method is consistent with the implementation manner and the achieved technical effect recorded in the embodiment of the method, and part of the contents are not repeated.
The present embodiment provides a program product for implementing the above method, which may employ a portable compact disc read only memory (CD-ROM) and comprise program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in the present embodiment, the readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of a readable storage medium include an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The foregoing is merely a preferred embodiment of the invention, and it is to be understood that the invention is not limited to the form disclosed herein but is not to be construed as excluding other embodiments, but is capable of numerous other combinations, modifications and environments and is capable of modifications within the scope of the inventive concept, either as taught or as a matter of routine skill or knowledge in the relevant art. And that modifications and variations which do not depart from the spirit and scope of the invention are intended to be within the scope of the appended claims.
Claims (16)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510339530.7A CN119850853B (en) | 2025-03-21 | 2025-03-21 | A fully automatic, fast, high-precision 3D reconstruction method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119850853A CN119850853A (en) | 2025-04-18 |
| CN119850853B true CN119850853B (en) | 2025-05-16 |