CN109214350B - Method, device and equipment for determining illumination parameters and storage medium
- Publication number
- CN109214350B
- Application number
- CN201811108374.XA
- Authority
- CN
- China
- Prior art keywords
- face
- illumination
- determining
- coefficient set
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The embodiment of the invention discloses a method, a device, equipment and a storage medium for determining an illumination parameter, wherein the method comprises the following steps: identifying at least one face sampling point in a face area in a target picture; determining a reflection coefficient set matched with the face sampling points by a geometric method and/or a training set method; calculating an illumination coefficient set corresponding to the face sampling points according to the pixel information respectively corresponding to the face sampling points and the reflection coefficient set; and determining an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling points. The technical scheme of the embodiment of the invention imposes low requirements on related equipment, can fully mine the illumination parameters of a picture, and can reproduce the effect of complex illumination, thereby improving the accuracy and universality of illumination parameter estimation.
Description
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a method, a device, equipment and a storage medium for determining an illumination parameter.
Background
Illumination parameter estimation (hereinafter referred to as illumination estimation) is the acquisition of illumination information from a picture; this information can be applied in technical fields such as face recognition and augmented reality. When the illumination parameters are known, a virtual object can be lit, rendering the light-and-shade variation and shadow projection on its surface. Accurate illumination parameter estimation is therefore critical for quantitatively describing the illumination characteristics of a video or picture scene.
Existing illumination estimation techniques can be broadly divided into two categories: methods that do not use a face probe and methods that use a face probe. The methods that do not use a face probe mainly fall into the following categories: 1) methods requiring a probe object with known geometric information, such as a cube or a specular reflecting sphere. 2) Methods requiring manual calibration of a special reference area as a probe, such as object boundaries (pixel regions whose normal lies in the image plane), or the position of a vertical object and its parallel shadow lines in a shadowed image. 3) Illumination estimation using panorama-sampled pictures: panoramic pictures with illumination labels (sun position) are used as training samples to train a deep convolutional neural network to predict parameters such as the sun position, atmospheric turbidity and camera position. 4) Methods that directly acquire light-source information, such as using an additional fisheye lens to directly acquire information such as the positions of light sources inside a studio. 5) Using an RGB-D camera to simultaneously acquire depth information: specular reflection areas are detected while localization and mapping tasks are completed, and the positions and intensities of discrete light sources are estimated. 6) Simple estimation of the average ambient light intensity, using the average pixel luminance as the estimate. The methods using a face probe mainly include the following two types: 1) methods based on frontal face pictures: sampled pixels of frontal face pictures with real illumination labels are used as training samples to train an illumination parameter model described by linear equations. 2) Methods based on face depth information: 3D model data of the face is collected with a mobile phone's front-mounted depth camera, which can support simple estimation of the main light source and ambient light parameters in applications with face tracking.
In the process of implementing the invention, the inventors found the following defects in the prior art. The scheme requiring a probe object with known geometric information needs such an object to be present in the picture and places special requirements on its geometric form and surface reflection characteristics, which are difficult to satisfy in practice. The scheme requiring manual calibration of a special reference area as a probe includes a manual calibration step that is difficult for an algorithm to carry out automatically, so the whole scheme is hard to automate and apply at scale. The scheme using panorama-sampled pictures for illumination estimation applies only to outdoor scenes, has a high data-set acquisition cost, and offers low precision. The methods that directly acquire light-source information are not applicable to already-shot videos, place high demands on acquisition equipment, and require hardware modification. The scheme using an RGB-D camera to simultaneously acquire depth information places high demands on equipment and is not applicable to ordinary videos and non-depth cameras lacking depth information. The simple scheme of estimating the average ambient light intensity cannot reproduce complex illumination effects; lacking any description of the light source, even simple shadows cannot be simulated or reconstructed. The methods based on frontal face pictures restrict the usage scene to pictures containing a frontal face, and the model fails when the face rotation angle is large. The methods based on face depth information place high demands on equipment and are not applicable to ordinary videos and non-depth cameras lacking depth information.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a storage medium for determining an illumination parameter, which are used for improving the accuracy and universality of illumination characteristic parameter estimation.
In a first aspect, an embodiment of the present invention provides a method for determining an illumination parameter, including:
identifying at least one face sampling point in a face area in a target picture;
determining a reflection coefficient set matched with the face sampling points by a geometric method and/or a training set method;
calculating an illumination coefficient set corresponding to the face sampling points according to the pixel information respectively corresponding to the face sampling points and the reflection coefficient set;
and determining an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
In a second aspect, an embodiment of the present invention further provides an apparatus for determining an illumination parameter, including:
the sampling point identification module is used for identifying at least one face sampling point in a face area in a target picture;
the reflection coefficient set determining module is used for determining a reflection coefficient set matched with the face sampling points through a geometric method and/or a training set method;
the illumination coefficient set calculation module is used for calculating an illumination coefficient set corresponding to the face sampling point according to the pixel information respectively corresponding to the face sampling point and the reflection coefficient set;
and the illumination parameter determining module is used for determining the illumination parameters corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for determining an illumination parameter provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for determining an illumination parameter provided in any embodiment of the present invention.
The embodiment of the invention identifies at least one face sampling point in a face area in a target picture, determines a reflection coefficient set matched with the face sampling point by a geometric method and/or a training set method, calculates an illumination coefficient set corresponding to the face sampling point according to the pixel information and the reflection coefficient set respectively corresponding to the face sampling point, and finally determines an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point. This scheme imposes low requirements on related equipment, can fully mine the illumination parameters of a picture, and can reproduce the effect of complex illumination, thereby improving the accuracy and universality of illumination parameter estimation.
Drawings
Fig. 1 is a flowchart of a method for determining an illumination parameter according to an embodiment of the present invention;
fig. 2a is a flowchart of a method for determining an illumination parameter according to a second embodiment of the present invention;
fig. 2b is a schematic diagram illustrating an effect of obtaining a face sampling point according to a key point according to the second embodiment of the present invention;
fig. 2c is a schematic diagram illustrating an effect of obtaining a face sampling point according to a standard 3D face model according to a second embodiment of the present invention;
fig. 2d is a schematic diagram illustrating an effect of determining an illumination parameter according to 3D face sampling points according to the second embodiment of the present invention;
fig. 2e is a schematic diagram illustrating the effect of lighting the virtual billboard in the picture based on the determined lighting parameters according to the second embodiment of the present invention;
fig. 3 is a schematic diagram of an illumination parameter determination apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a method for determining an illumination parameter according to an embodiment of the present invention. This embodiment is applicable to cases where the illumination parameters of a picture need to be determined accurately. The method may be executed by an illumination parameter determining apparatus, which may be implemented in software and/or hardware and may generally be integrated in a computer device. As shown in fig. 1, the method comprises the following operations:
s110, identifying at least one face sampling point in the face area in the target picture.
The target picture is a picture whose illumination parameters need to be determined.
It can be understood that the human face is one of the core elements of many videos (especially short videos, a new content form on the internet). For example, in videos involving facial beautification, virtual makeup try-on, selfies, livestream anchors, and film and television works, faces appear with high definition and can serve as ideal illumination estimation probes. Although face geometry is not convex and the 3D features of different faces are not identical, it can be assumed to some extent that the variance of overall 3D facial characteristics is small. Under given lighting parameters, faces therefore exhibit a general law of light and shade, and such a law can be described by modeling. Requiring the presence of a human face is much easier to satisfy than requiring the presence of other specific known objects in the video or scene, such as a standard sphere satisfying specular reflection, or a cube with known color, dimensions and surface characteristics.
Therefore, in the embodiment of the present invention, a human face needs to be present in the target picture, and the color-brightness changes and shadows produced by light-source illumination should be observable on that face. (A face contained in a picture generally meets this requirement; the condition is not explicitly detected in this embodiment, which only ensures that the target picture contains a face.) The illumination parameters are then determined from the target picture in which the face is present. When the illumination parameters of the target picture need to be determined, they can be determined through at least one face sampling point in the face area in the target picture.
In an optional embodiment of the present invention, the target picture may be a frame image in a target video file.
The target video file is the video file whose lighting parameters are to be determined. Accordingly, the target picture may be any one frame of image in the target video file.
In the embodiment of the invention, when the target picture is from the target video file, the determined illumination parameter of the target picture is the illumination parameter of the determined target video file. Therefore, the illumination parameter determination method provided by the embodiment of the invention can fully mine the illumination characteristics of the picture and the video, becomes an important characteristic dimension for describing the content of the picture and the video, and has a key value for numerous applications such as advertisement implantation, video classification, video scene understanding, video recommendation and the like.
And S120, determining a reflection coefficient set matched with the face sampling points by a geometric method and/or a training set method.
The geometric method is a method for determining the reflection coefficient set using physical geometric information; for example, a reflection coefficient set matched with a face sampling point is determined using parameters such as the incident light direction of the light source and the surface normal. The training set method establishes a model from the face sampling points, so that the illumination characteristics are described by the model. The reflection coefficient set may be a related function, relational expression, or the like used to solve for the reflection function.
In the embodiment of the invention, after at least one face sampling point in the target picture is obtained, a reflection coefficient set matched with each face sampling point can be determined by a geometric method and/or a training set method.
S130, calculating an illumination coefficient set corresponding to the face sampling point according to the pixel information respectively corresponding to the face sampling point and the reflection coefficient set.
The pixel information may be the pixel value corresponding to a face sampling point in the target picture, for example the pixel color of the face sampling point; it can be regarded as the observed target illumination effect. The illumination coefficient set can be a related function or relational expression reflecting the actual illumination information of the face sampling points.
Correspondingly, in the embodiment of the invention, the pixel information can be obtained by observing the face sampling points; together with the determined reflection coefficient set, these form two known quantities, from which the illumination coefficient set corresponding to each face sampling point is calculated.
S140, determining an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
In the embodiment of the invention, the illumination parameter information corresponding to the target picture can be finally determined according to the illumination coefficient set corresponding to each face sampling point. Since the illumination coefficient sets corresponding to each face sampling point may not be completely consistent, data processing (such as averaging) needs to be performed on each illumination coefficient set to obtain the final illumination parameter.
The illumination determination scheme provided by the embodiment of the invention does not need to be supported by complex equipment, so that the requirements on related equipment are low, the automation and large-scale application are easy, the universality is stronger, the complex illumination can be accurately estimated, and the simulation effect of the complex illumination can be met.
In an optional embodiment of the present invention, after determining the illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point, the method may further include: acquiring illumination parameters respectively determined by at least two frames of images in the target video file; processing each illumination parameter according to a set data processing technology to obtain an illumination parameter matched with the target video file; the illumination parameters comprise an illumination incidence direction and illumination intensity.
The setting data processing technique may be a method used for further processing and conversion of the obtained data. For example, the setting data processing technique may be averaging or filtering, and the specific form of the setting data processing technique is not limited in the embodiments of the present invention.
In the embodiment of the present invention, if the method for determining the illumination parameter is applied to the video illumination feature extraction, in order to improve the accuracy of the video illumination feature extraction, the multi-frame image in the target video file may be analyzed to obtain the corresponding illumination parameter. It will be appreciated that in general, especially for short videos, the lighting parameters (including the direction of incidence and the lighting intensity) in the video are unchanged. Therefore, the multi-frame images in the target video file can be calculated to determine the corresponding illumination parameters, and the obtained multiple illumination parameters are processed according to the set data processing technology, so that the final illumination parameters matched with the target video file are obtained. The final illumination parameter of the target video file is determined through the illumination parameters of the multiple frames of images, so that the error of the illumination parameter of the single frame of image can be weakened or eliminated, and the accuracy of the illumination parameter of the target video file is further improved.
In an optional embodiment of the present invention, the processing of each of the illumination parameters according to the setting data processing technique may include at least one of:
voting is carried out on each illumination parameter, and the mean value or median of each parameter is obtained to be used as the illumination parameter matched with the target video file;
calculating a moving average value of each illumination parameter by using a time domain sliding window to serve as the illumination parameter matched with the target video file; and
and filtering each illumination parameter by using a Kalman filter to obtain the illumination parameter matched with the target video file.
In the embodiment of the present invention, the set data processing technique includes, but is not limited to, voting, moving-average processing, filtering and the like. Specifically, the voting approach may directly compute the mean or median of the illumination parameters determined for the multiple frames of images and use it as the illumination parameter matched with the target video file. The moving-average approach may use a time-domain sliding window to compute the mean of the illumination parameters over a time period, which improves the accuracy of the illumination parameters for video files whose illumination changes over time. The filtering approach may filter the illumination parameters with a suitable filter, such as a Kalman filter, to remove noise and obtain a more robust illumination parameter estimate, and use the finally filtered illumination parameter as the one matched with the target video file.
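As an illustration of these three aggregation options, the following Python sketch (not part of the patent text; the per-frame values and the constant-state Kalman model are assumptions for demonstration) applies voting, a sliding-window moving average, and a plain 1D Kalman filter to per-frame illumination-intensity estimates:

```python
import numpy as np

def vote(params):
    """Mean/median voting over per-frame estimates."""
    return np.median(params, axis=0)

def moving_average(params, window=5):
    """Time-domain sliding-window mean, one value per window position."""
    kernel = np.ones(window) / window
    return np.convolve(params, kernel, mode="valid")

def kalman_1d(params, q=1e-3, r=1e-1):
    """Plain 1D Kalman filter; a constant-state model is assumed."""
    x, p = params[0], 1.0
    out = []
    for z in params:
        p += q               # predict: state unchanged, uncertainty grows
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with this frame's estimate
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

frames = np.array([0.82, 0.85, 0.79, 0.84, 0.90, 0.83])  # toy per-frame intensities
print(vote(frames), moving_average(frames)[-1], kalman_1d(frames)[-1])
```

Direction vectors would additionally be renormalized after averaging; only the scalar-intensity case is shown here.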
The embodiment of the invention identifies at least one face sampling point in a face area in a target picture, determines a reflection coefficient set matched with the face sampling point by a geometric method and/or a training set method, calculates an illumination coefficient set corresponding to the face sampling point according to the pixel information and the reflection coefficient set respectively corresponding to the face sampling point, and finally determines an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point, thereby imposing low requirements on related equipment, fully mining the illumination parameters of the picture, reproducing the effect of complex illumination, and improving the accuracy and universality of illumination parameter estimation.
Example two
Fig. 2a is a flowchart of a method for determining an illumination parameter according to the second embodiment of the present invention, which refines the first embodiment. This embodiment provides specific implementations for identifying at least one face sampling point in a face area in a target picture, determining the reflection coefficient set matched with the face sampling points, and calculating the illumination coefficient set corresponding to the face sampling points according to the pixel information respectively corresponding to the face sampling points and the reflection coefficient set. As shown in fig. 2a, the method of the present embodiment may include:
s210, identifying at least one face sampling point in the face area in the target picture.
Correspondingly, S210 may specifically include the following operations:
s211, inputting the target picture into a face detector, and acquiring at least three key detection points marked by the face detector in a face area in the target picture.
The face detector may be a face detection technology for detecting a face region, such as a Viola-Jones face detector. Any technique that can be used for face detection may be used as the face detector, which is not limited in the embodiment of the present invention. The key detection points may be key points for recognizing a human face, such as points where edge portions of the human face, such as eyes, eyebrows, and mouth, are characteristically prominent.
It should be noted that, in the embodiment of the present invention, key detection points in a detected face region are not used as face sampling points. Because the key detection points are usually located at the edge of the face, eyes, eyebrows and mouth, and the illumination parameters of these parts are usually affected by various factors, for example, the illumination intensity is low due to shadow occlusion, and the illumination parameters in the actual environment cannot be truly reflected. Therefore, in the embodiment of the present invention, when a face sampling point is obtained, at least three key detection points in a face region in a target picture may be obtained first, and a face sampling point that may reflect real illumination parameters at other positions in a face may be further obtained according to the obtained key detection points.
S212, triangulating adjacent key detection points in groups of three, and generating a plurality of feature points inside the resulting triangles to serve as the face sampling points.
The feature points may be some points with characteristics inside the triangle, such as a center or a gravity center.
Fig. 2b is a schematic diagram illustrating the effect of obtaining face sampling points from key points according to the second embodiment of the present invention. As shown in fig. 2b, after the key detection points in the face region are obtained, every three adjacent key detection points may be connected in turn (the connections may be actually drawn solid lines or virtual connection lines) to form a triangle, thereby triangulating the key detection points. The feature points inside each resulting triangle are then acquired as face sampling points. This way of acquiring face sampling points effectively avoids using key detection points, whose illumination parameters differ greatly from the rest of the face, as face sampling points, and instead uses pixels that truly reflect the illumination parameters on the face, thereby ensuring the accuracy of the illumination parameters.
In an alternative embodiment of the present invention, generating a plurality of feature points inside the obtained triangle as the face sampling points may include: determining the barycenter of the triangle, connecting the barycenter with each vertex of the triangle to obtain a plurality of new triangles, and taking the barycenters of the new triangles together with the barycenter of the original triangle as the face sampling points.
Furthermore, when acquiring face sampling points, in addition to using the barycenters of the triangles formed by the key detection points, the number of sampling points can be increased: the barycenter inside each such triangle can be connected with the vertices of that triangle to form three new triangles, whose barycenters are then taken as additional face sampling points, as in the sketch below.
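A minimal Python sketch of this sampling-point construction (an illustration only; the keypoint coordinates are hypothetical, and the triangulation itself is assumed to be given, e.g. by a Delaunay triangulation of the detected key points):

```python
import numpy as np

def centroid(tri):
    """Barycenter of a triangle given as a (3, 2) array of 2D points."""
    return tri.mean(axis=0)

def face_sampling_points(triangles):
    """For each keypoint triangle, return its barycenter plus the
    barycenters of the three sub-triangles formed by connecting the
    barycenter to the triangle's vertices (4 samples per triangle)."""
    samples = []
    for tri in triangles:
        g = centroid(tri)
        samples.append(g)
        for k in range(3):
            sub = np.array([g, tri[k], tri[(k + 1) % 3]])
            samples.append(centroid(sub))
    return np.array(samples)

# Usage with one hypothetical keypoint triangle (pixel coordinates):
tris = [np.array([[120.0, 80.0], [160.0, 90.0], [140.0, 130.0]])]
print(face_sampling_points(tris))  # 4 interior sampling points
```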
And S220, determining a reflection coefficient set matched with the face sampling points by a geometric method and/or a training set method.
Accordingly, when the set of reflection coefficients matching the face sampling points is determined geometrically, S220 may specifically include the following operations:
s221a, acquiring a 3D face comparison model, aligning the 3D face comparison model with a face region in the target picture, and determining face posture information corresponding to the face region.
The 3D face comparison model may be a pre-established model for comparing faces in the target picture. The face pose information includes, but is not limited to, pitch, tilt, rotation and other information, and the face pose information may be used to assist in determining a deflection angle of the 3D face comparison model after the model is aligned with the face region.
In the embodiment of the present invention, a 3D face comparison model may be aligned with the detected face region. Because the normal and self-occlusion information of the 3D face comparison model is known in its undeflected state, once the model is aligned with the face region in the target picture, the normal and self-occlusion information of each sampling point can be obtained from the face pose information corresponding to the face region, combined with the 3D face comparison model.
In an optional embodiment of the present invention, the 3D face comparison model may include: and a pre-established standard 3D face model, or a 3D face model reconstructed according to the face region in the target picture by adopting a three-dimensional reconstruction technology.
Fig. 2c is a schematic diagram of an effect of obtaining face sampling points according to a standard 3D face model according to the second embodiment of the present invention. In fig. 2c, the picture labeled (1) is a standard 3D human face model, the picture labeled (2) is a schematic view of visualization of a model normal, the picture labeled (3) is a schematic view of sampling a human face projection picture, and the picture labeled (4) is a model view obtained by performing mesh reconstruction using a position and normal information obtained by sampling. In the embodiment of the present invention, as shown in fig. 2c, the 3D face comparison model may adopt a pre-established standard 3D face model. In view of the difference of the geometric shapes of different faces, a three-dimensional reconstruction technology can be adopted for specific faces, and a 3D face model reconstructed according to a face region in a target picture is used as a 3D face comparison model.
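A hedged Python sketch of the alignment step (the Euler-angle convention and the pose values are assumptions for illustration; a real system would obtain the pose from the face detector's alignment): rotating the standard model's known normals by the estimated face pose yields each sampling point's normal in the target picture's frame.

```python
import numpy as np

def pose_rotation(pitch, yaw, roll):
    """Rotation matrix from Euler angles in radians (XYZ order assumed)."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# model_normals: (N, 3) unit normals of the undeflected comparison model.
model_normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.7071, 0.7071]])
R = pose_rotation(pitch=0.1, yaw=-0.3, roll=0.0)
posed_normals = model_normals @ R.T  # normals n_x in the picture's frame
```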
S222a, generating a plurality of incident directions of incident light rays.
Correspondingly, after the 3D face comparison model is aligned with the face region in the target picture, a plurality of incident directions of different incident light rays can be generated at random for the face region in the target picture (equivalently, for the aligned 3D face comparison model) to simulate the light-source information of the face region in the real scene.
S223a, according to the formula R(x, ω_i) = ρ·max(dot(n_x, ω_i), 0)·V(x, ω_i), respectively calculate, for each incident light direction ω_i, the reflection function R(x, ω_i) corresponding to the face sampling point x.
Wherein i ∈ [1, m], m is the total number of incident directions; ω_i is the i-th incident direction; n_x is the normal direction at point x determined according to the face pose information; dot() is the vector dot product; max() is the maximum-value operation; V(x, ω_i) is the visibility between a point in direction ω_i and point x; and ρ is the surface albedo of the face.
In the embodiment of the invention, the human face is assumed to be a diffuse-reflection (Lambertian) material. For a given incident light direction ω_i, the reflection function R(x, ω_i) corresponding to the face sampling point x can be calculated according to the above formula, and R(x, ω_i) can then be used to calculate the reflection coefficient set.
S224a, according to the formula R_j = ∫_S R(x, ω_i) Y_j(ω_i) dω_i, calculate each order reflection coefficient R_j matched with the point x, forming the reflection coefficient set.
Wherein j ∈ [1, n], n is a preset total expansion order; S is the hemisphere at point x determined by the normal of point x; and Y_j(ω_i) is the j-th order spherical harmonic basis function in direction ω_i.
Further, after the reflection function R(x, ω_i) corresponding to the face sampling point x is calculated, the integral R_j = ∫_S R(x, ω_i) Y_j(ω_i) dω_i can be estimated by the Monte Carlo method to solve each order reflection coefficient R_j matched with the point x. In this formula, the spherical harmonic basis function Y_j(ω_i) is a known quantity once ω_i is determined. After the reflection coefficients corresponding to each face sampling point are obtained, the reflection coefficients of all face sampling points form the reflection coefficient set.
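A minimal sketch of this Monte Carlo estimate in Python, under stated assumptions: only spherical-harmonic bands 0 and 1 (four coefficients) are shown, visibility V is stubbed to 1 (no self-occlusion), and ρ is fixed to 1 pending the calibration discussed below.

```python
import numpy as np

RHO = 1.0  # surface albedo; the patent notes it may need calibration

def real_sh_basis(w):
    """First 4 real spherical harmonics at unit direction w = (x, y, z)."""
    x, y, z = w
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

def sample_hemisphere(n_dir, normal, rng):
    """Uniform random directions on the hemisphere around `normal`."""
    v = rng.normal(size=(n_dir, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[np.dot(v, normal) < 0] *= -1  # flip into the upper hemisphere
    return v

def reflection_coeffs(normal, visibility=lambda w: 1.0, n_dir=4096, seed=0):
    """Monte Carlo estimate of R_j = integral over S of R(x, w) Y_j(w) dw."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(4)
    for w in sample_hemisphere(n_dir, normal, rng):
        r = RHO * max(np.dot(normal, w), 0.0) * visibility(w)  # R(x, w)
        acc += r * real_sh_basis(w)
    return acc * (2.0 * np.pi / n_dir)  # hemisphere solid angle / samples

print(reflection_coeffs(np.array([0.0, 0.0, 1.0])))
```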
It should be noted that, for a given face sampling point x, the parameter ρ in the reflection function is a constant. This constant influences only the light intensity in the calculation, not the light direction; accurately determining the light-intensity parameter therefore requires solving for ρ with a calibration or error-minimization method. If only the light direction is of primary concern, this constant can be ignored when determining the light-intensity parameter. If an accurate light-intensity estimate is required, ρ needs to be solved further using calibration or error minimization.
As a simple example: after ρ is set to an initial value, the illumination parameters of the picture are calculated based on that ρ, the picture is re-lit in simulation using those parameters, and the simulated light intensity of the re-lighting result is compared with the original light intensity of the picture. ρ can be dynamically reduced if the simulated intensity is greater than the original intensity and dynamically increased if it is smaller, thereby arriving at an accurate estimate of ρ.
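This feedback loop might look like the following Python sketch (a toy illustration; `simulate_intensity` is a hypothetical placeholder standing in for the full compute-parameters-and-relight pipeline, and the halving step schedule is an assumption):

```python
def calibrate_albedo(original_intensity, simulate_intensity,
                     rho=0.5, step=0.25, iters=20):
    """Adjust rho until the re-lit intensity matches the original."""
    for _ in range(iters):
        simulated = simulate_intensity(rho)
        if simulated > original_intensity:
            rho -= step  # too bright: dynamically reduce rho
        else:
            rho += step  # too dark: dynamically increase rho
        step *= 0.5      # shrink the correction for a stable estimate
    return rho

# Toy usage: intensity proportional to rho, true albedo 0.65.
print(calibrate_albedo(0.65, lambda r: r))  # converges toward 0.65
```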
Determining the reflection coefficient set matched with the face sampling points geometrically requires no training step, so it does not depend on training data and places no requirement on the angle or pose of the face.
Accordingly, when determining the set of reflection coefficients matching the face sampling points by the training set method, S220 may specifically include the following operations:
s221b, inputting the face sampling points into a pre-trained reflection coefficient set determination model, and obtaining the output result of the reflection coefficient set determination model as a reflection coefficient set matched with the face sampling points.
The reflection coefficient set determination model is generated by training on the face images produced when each of a first number of faces is irradiated by each of a second number of point light sources on a spherical surface surrounding the face, used as training samples; the sampling points in the face images and the reflection coefficient sets corresponding to those sampling points are labeled in the training samples in advance.
The reflection coefficient set determination model can be used for determining a reflection coefficient set matched with the face sampling points. The first number and the second number in the reflection coefficient set determination model may be values set according to actual requirements, and the specific values of the first number and the second number are not limited in the embodiment of the present invention.
In an embodiment of the present invention, the reflection coefficient set determination model may use a training set consisting of the product of a first number (say, M) of faces and a second number (say, N) of point light sources distributed on a spherical surface surrounding the faces. The size of the face is negligible relative to the distance from the light source to the face, so each point light source can be approximated as a directional light source, and the illumination corresponding to each picture can be decomposed into a weighted sum of spherical harmonic basis functions. M × N linear equality relations can thus be established for each face sampling point, and a linear least-squares solution of this system is obtained via singular value decomposition or machine-learning fitting and used as R_j, as sketched below. Through this scheme, a separate reflection coefficient set determination model can be established for each face sampling point to describe its illumination characteristics.
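A hedged Python sketch of this fit for a single sampling point (synthetic toy data; the SH coefficients of each training light and the per-picture pixel values are assumed inputs; `numpy.linalg.lstsq` performs the SVD-based least-squares solve):

```python
import numpy as np

def fit_reflection_coeffs(light_sh, pixels):
    """light_sh: (M*N, n) known SH illumination coefficients, one row per
    training light. pixels: (M*N,) pixel values observed at this sampling
    point. Each picture gives one equation l_k = sum_j L_kj * R_j; the
    least-squares solution is this point's reflection coefficient set."""
    R, *_ = np.linalg.lstsq(light_sh, pixels, rcond=None)
    return R

# Toy usage with synthetic data (n = 4 SH orders, 12 training lights):
rng = np.random.default_rng(1)
A = rng.uniform(size=(12, 4))
true_R = np.array([0.9, 0.2, -0.1, 0.05])
print(fit_reflection_coeffs(A, A @ true_R))  # recovers true_R
```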
It should be noted that, since the position of a face sampling point obtained from the same face under the N lighting conditions may vary (in the worst case the face may not be detected at all), each face may be labeled with face key detection points only once during training of the reflection coefficient set determination model. Alternatively, the key detection points of the face may be marked under the best lighting condition (generally, with the light source directly in front of the face). The training-set method for determining the reflection coefficient set matched with the face sampling points can also be applied to training face data at any angle or pose, and is not limited to frontal faces.
S230, according to the formula L(x) = Σ_{j=1}^{n} L_j·R_j, calculate each order illumination coefficient L_j, constructing the illumination coefficient set.
Wherein j ∈ [1, n], n is a preset total expansion order; L(x) is the pixel information of the face sampling point x.
In the embodiment of the present invention, optionally, after the reflection coefficient set matched with the face sampling points is determined by the geometric method and/or the training set method, a linear relation among the pixel values of the face sampling points, the reflection coefficient set and the illumination coefficient set can be established according to the formula L(x) = Σ_{j=1}^{n} L_j·R_j, and each order illumination coefficient L_j calculated. Here n may be chosen freely; the larger the value of n, the more accurate the result. Optionally, the formula may be solved by a least-squares solution.
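A minimal sketch of that least-squares solve in Python (toy synthetic data; the choice of n = 9 orders, i.e. spherical harmonics up to second order, is an assumption for illustration):

```python
import numpy as np

def solve_illumination(R_sets, pixel_values):
    """R_sets: (num_points, n) reflection coefficients per sampling point.
    pixel_values: (num_points,) observed pixel information L(x).
    Solving L(x) = sum_j L_j * R_j(x) over all points in the least-squares
    sense yields the shared n-order illumination coefficient set L_j."""
    L, *_ = np.linalg.lstsq(R_sets, pixel_values, rcond=None)
    return L

# Toy usage: 50 sampling points, n = 9 orders.
rng = np.random.default_rng(2)
R = rng.normal(size=(50, 9))
true_L = rng.normal(size=9)
print(np.allclose(solve_illumination(R, R @ true_L), true_L))  # True
```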
S240, determining an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
Fig. 2d is a schematic diagram illustrating the effect of determining illumination parameters from 3D face sampling points according to the second embodiment of the present invention. Fig. 2e is a schematic diagram of the effect of lighting a virtual billboard in a picture based on the determined lighting parameters according to the second embodiment of the present invention. As shown in fig. 2d, the method for determining an illumination parameter provided in the embodiment of the present invention truly reflects the illumination parameter corresponding to each pixel of a human face. Meanwhile, as shown in fig. 2e, when the method is applied to the field of advertisement placement (the shadow cast on the table by the "company A" virtual billboard in the lower left corner of fig. 2e is the effect of lighting the billboard according to the determined illumination parameters), a good visual effect is obtained, making the placed advertisement more realistic.
It should be noted that fig. 2a is only a schematic diagram of an implementation manner, and there is no precedence relationship between S221a-S224a and S221b, and the two may be implemented alternatively.
By adopting the technical scheme, the reflection coefficient set matched with the face sampling point is determined by a geometric method and/or a training set method, so that the accuracy of the reflection coefficient set can be ensured, and the accuracy of illumination parameter estimation is further improved.
EXAMPLE III
Fig. 3 is a schematic diagram of an apparatus for determining an illumination parameter according to a third embodiment of the present invention, as shown in fig. 3, the apparatus includes: a sample point identification module 310, a reflection coefficient set determination module 320, an illumination coefficient set calculation module 330, and an illumination parameter determination module 340, wherein:
the sampling point identification module 310 is configured to identify at least one face sampling point in a face region in a target picture;
a reflection coefficient set determining module 320, configured to determine, through a geometric method and/or a training set method, a reflection coefficient set that matches the face sampling point;
an illumination coefficient set calculating module 330, configured to calculate an illumination coefficient set corresponding to the face sampling point according to the pixel information and the reflection coefficient set respectively corresponding to the face sampling point;
and the illumination parameter determining module 340 is configured to determine an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
The embodiment of the invention identifies at least one face sampling point in a face area in a target picture, determines a reflection coefficient set matched with the face sampling point by a geometric method and/or a training set method, calculates an illumination coefficient set corresponding to the face sampling point according to the pixel information and the reflection coefficient set respectively corresponding to the face sampling point, and finally determines an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point, thereby imposing low requirements on related equipment, fully mining the illumination parameters of the picture, reproducing the effect of complex illumination, and improving the accuracy and universality of illumination parameter estimation.
Optionally, the sampling point identification module 310 includes: a key detection point acquisition unit, configured to input the target picture to a face detector and acquire at least three key detection points marked by the face detector in a face region in the target picture; and a sampling point acquisition unit, configured to triangulate adjacent key detection points in groups of three and generate a plurality of feature points inside the resulting triangles as the face sampling points.
Optionally, the sampling point acquisition unit is specifically configured to determine the barycenter of the triangle, connect the barycenter with each vertex of the triangle to obtain a plurality of new triangles, and take the barycenters of the new triangles together with the barycenter of the original triangle as the face sampling points.
Optionally, the reflection coefficient set determining module 320 is configured to: obtain a 3D face comparison model, align the 3D face comparison model with the face region in the target picture, and determine face pose information corresponding to the face region; generate a plurality of incident directions of incident light; according to the formula R(x, ω_i) = ρ·max(dot(n_x, ω_i), 0)·V(x, ω_i), respectively calculate, for each incident light direction ω_i, the reflection function R(x, ω_i) corresponding to the face sampling point x, wherein i ∈ [1, m], m is the total number of incident directions, ω_i is the i-th incident direction, n_x is the normal direction at point x determined according to the face pose information, dot() is the vector dot product, max() is the maximum-value operation, V(x, ω_i) is the visibility between a point in direction ω_i and point x, and ρ is the surface albedo of the face; and according to the formula R_j = ∫_S R(x, ω_i) Y_j(ω_i) dω_i, calculate each order reflection coefficient R_j matched with the point x to form the reflection coefficient set, wherein j ∈ [1, n], n is a preset total expansion order, S is the hemisphere at point x determined by the normal of point x, and Y_j(ω_i) is the j-th order spherical harmonic basis function in direction ω_i.
Optionally, the 3D face comparison model includes: and a pre-established standard 3D face model, or a 3D face model reconstructed according to the face region in the target picture by adopting a three-dimensional reconstruction technology.
Optionally, the reflection coefficient set determining module 320 is further configured to input the face sampling points into a pre-trained reflection coefficient set determining model, and obtain a result output by the reflection coefficient set determining model as a reflection coefficient set matched with the face sampling points; the reflection coefficient set determining model is generated by training a face image generated when a first number of faces are respectively irradiated by a second number of point light sources on a spherical surface surrounding the faces as a training sample, wherein sampling points in the face image and a reflection coefficient set corresponding to the sampling points are marked in the training sample in advance.
Optionally, the illumination coefficient set calculating module 330 is specifically configured to: according to the formula L(x) = Σ_{j=1}^{n} L_j·R_j, calculate each order illumination coefficient L_j, constructing the illumination coefficient set; wherein j ∈ [1, n], n is a preset total expansion order, and L(x) is the pixel information of the face sampling point x.
Optionally, the target picture is a frame of image in the target video file; the device further comprises: the multi-illumination parameter acquisition module is used for acquiring illumination parameters respectively determined by at least two frames of images in the target video file; the illumination parameter processing module is used for processing each illumination parameter according to a set data processing technology to obtain an illumination parameter matched with the target video file; the illumination parameters comprise an illumination incidence direction and illumination intensity.
Optionally, the illumination parameter processing module is specifically configured to vote for each illumination parameter, and obtain a mean value or a median of each parameter as an illumination parameter matched with the target video file; calculating a moving average value of each illumination parameter by using a time domain sliding window to serve as the illumination parameter matched with the target video file; and filtering each illumination parameter by using a Kalman filter to obtain the illumination parameter matched with the target video file.
The device for determining the illumination parameters can execute the method for determining the illumination parameters provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For details of the technology not described in detail in this embodiment, reference may be made to the method for determining the illumination parameter provided in any embodiment of the present invention.
Example four
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of a computer device 412 suitable for use in implementing embodiments of the present invention. The computer device 412 shown in FIG. 4 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 4, computer device 412 is in the form of a general purpose computing device. Components of computer device 412 may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 that couples the various system components including the storage device 428 and the processors 416.
The computer device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, camera, display 424, etc.), with one or more devices that enable a user to interact with the computer device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, computer device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), Wide Area Network (WAN), and/or a public Network, such as the internet) through Network adapter 420. As shown, network adapter 420 communicates with the other modules of computer device 412 over bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 412, including but not limited to: microcode, device drivers, Redundant processing units, external disk drive Arrays, disk array (RAID) systems, tape drives, and data backup storage systems, to name a few.
The processor 416 executes various functional applications and data processing by running programs stored in the storage device 428, for example the method for determining an illumination parameter provided by the above-described embodiments of the present invention.
That is, the processing unit implements, when executing the program: identifying at least one face sampling point in a face area in a target picture; determining a reflection coefficient set matched with the face sampling points by a geometric method and/or a training set method; calculating an illumination coefficient set corresponding to the face sampling points according to the pixel information respectively corresponding to the face sampling points and the reflection coefficient set; and determining an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
The computer device identifies at least one face sampling point in a face area in a target picture, determines a reflection coefficient set matched with the face sampling point by a geometric method and/or a training set method, calculates an illumination coefficient set corresponding to the face sampling point according to the pixel information and the reflection coefficient set respectively corresponding to the face sampling point, and finally determines an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point, thereby imposing low requirements on related equipment, fully mining the illumination parameters of the picture, reproducing the effect of complex illumination, and improving the accuracy and universality of illumination parameter estimation.
EXAMPLE five
The fifth embodiment of the present invention further provides a computer storage medium storing a computer program which, when executed by a computer processor, performs the method for determining an illumination parameter according to any of the above embodiments of the present invention: identifying at least one face sampling point in a face area in a target picture; determining a reflection coefficient set matched with the face sampling points by a geometric method and/or a training set method; calculating an illumination coefficient set corresponding to the face sampling points according to the pixel information respectively corresponding to the face sampling points and the reflection coefficient set; and determining an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM) or flash Memory), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (12)
1. A method for determining an illumination parameter, comprising:
identifying at least one face sampling point in a face area in a target picture;
determining a reflection coefficient set matched with the face sampling points by a geometric method and/or a training set method;
calculating an illumination coefficient set corresponding to the face sampling points according to the pixel information respectively corresponding to the face sampling points and the reflection coefficient set;
and determining an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
2. The method of claim 1, wherein identifying at least one face sampling point within a face region in a target picture comprises:
inputting the target picture into a face detector, and acquiring at least three key detection points, marked by the face detector, in a face region in the target picture;
triangulating adjacent key detection points in groups of three, and generating a plurality of feature points inside each resulting triangle as the face sampling points.
3. The method of claim 2, wherein generating a plurality of feature points inside the obtained triangle as the face sampling points comprises:
determining the barycenter of the triangle, connecting the barycenter to each vertex of the triangle to obtain a plurality of new triangles, and taking the barycenters of the new triangles, together with the barycenter of the original triangle, as the face sampling points.
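For illustration only (not part of the claim language): a minimal numpy sketch of the sampling-point construction of claims 2 and 3, assuming the key detection points of one triangle are given as 2D pixel coordinates; the function name and example coordinates are hypothetical.

```python
import numpy as np

def barycentric_sample_points(tri):
    """Given a triangle as a (3, 2) array of key detection points, return
    the sampling points of claims 2-3: the barycenter of the triangle plus
    the barycenters of the three new triangles formed by connecting the
    barycenter to each pair of adjacent vertices."""
    g = tri.mean(axis=0)                      # barycenter of the outer triangle
    samples = [g]
    for i in range(3):
        sub = np.array([g, tri[i], tri[(i + 1) % 3]])
        samples.append(sub.mean(axis=0))      # barycenter of each new triangle
    return np.array(samples)

# Example with three hypothetical key detection points (pixel coordinates):
tri = np.array([[120.0, 80.0], [200.0, 90.0], [160.0, 150.0]])
print(barycentric_sample_points(tri))         # four face sampling points
```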
4. The method of claim 1, wherein determining, by the geometric method, the reflection coefficient set matched with the face sampling points comprises:
acquiring a 3D face comparison model, aligning the 3D face comparison model with a face region in the target picture, and determining face pose information corresponding to the face region;
generating a plurality of incident directions of incident light rays;
according to the formula:

$$R(x, \omega_i) = \rho \cdot \max\big(\mathrm{dot}(n_x, \omega_i),\, 0\big) \cdot V(x, \omega_i)$$

respectively calculating, for each incident light direction $\omega_i$, the reflection function $R(x, \omega_i)$ corresponding to the face sampling point $x$;

wherein $i \in [1, m]$ and $m$ is the total number of incident directions; $\omega_i$ is the $i$-th incident direction; $n_x$ is the normal direction at the point $x$ determined according to the face pose information; $\mathrm{dot}(\cdot)$ is the dot-product operation of vectors; $\max(\cdot)$ is the maximum-value operation; $V(x, \omega_i)$ is the visibility between a point in the direction $\omega_i$ and the point $x$; and $\rho$ is the surface albedo of the face;
according to the formula: rj=∫SR(x,ωi)Yj(ωi) Calculating the reflection coefficient R of each order matched with the x pointjForming said set of reflection coefficients;
wherein j is ∈ [1, n ]]N is a preset total expansion order, S is a hemisphere determined by the normal of the point x at the point x, and Y isj(ωi) Is omegaiSpherical harmonic basis function of j-th order in the direction.
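For illustration only: a Monte-Carlo sketch of the reflection-coefficient computation of claim 4, using the first four real spherical-harmonic basis functions (i.e., assuming a total expansion order of n = 4) and a caller-supplied visibility function; the function names, sample count, and example inputs are hypothetical.

```python
import numpy as np

def sh_basis(w):
    """First four real spherical-harmonic basis functions (bands 0-1),
    evaluated at a unit direction w = (x, y, z)."""
    x, y, z = w
    return np.array([0.282095,          # Y_0
                     0.488603 * y,      # Y_1
                     0.488603 * z,      # Y_2
                     0.488603 * x])     # Y_3

def reflection_coefficients(n_x, albedo, visibility, num_dirs=2048, seed=0):
    """Monte-Carlo estimate of R_j = ∫_S R(x, ω) Y_j(ω) dω over the
    hemisphere S around the normal n_x, with the claim-4 reflection
    function R(x, ω) = ρ · max(dot(n_x, ω), 0) · V(x, ω)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(num_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform on sphere
    dirs = dirs[dirs @ n_x > 0]                           # keep hemisphere S
    R = np.zeros(4)
    for w in dirs:
        refl = albedo * max(np.dot(n_x, w), 0.0) * visibility(w)
        R += refl * sh_basis(w)
    # each kept sample covers 2π / len(dirs) steradians of the hemisphere
    return R * (2.0 * np.pi / len(dirs))

# Hypothetical usage: upward-facing point, uniform albedo, no self-occlusion.
R_j = reflection_coefficients(np.array([0.0, 0.0, 1.0]), albedo=0.8,
                              visibility=lambda w: 1.0)
print(R_j)
```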
5. The method of claim 4, wherein the 3D face comparison model comprises:
a pre-established standard 3D face model; or
a 3D face model reconstructed from the face region in the target picture by using a three-dimensional reconstruction technique.
6. The method of claim 1, wherein determining the set of reflection coefficients matching the face sample points by a training set method comprises:
inputting the face sampling points into a pre-trained reflection coefficient set determination model, and acquiring a result output by the reflection coefficient set determination model as a reflection coefficient set matched with the face sampling points;
the reflection coefficient set determining model is generated by training a face image generated when a first number of faces are respectively irradiated by a second number of point light sources on a spherical surface surrounding the faces as a training sample, wherein sampling points in the face image and a reflection coefficient set corresponding to the sampling points are marked in the training sample in advance.
7. The method of claim 1, wherein calculating the set of illumination coefficients corresponding to the face sampling points according to the set of reflection coefficients and pixel information corresponding to the face sampling points respectively comprises:
according to the formula:

$$I(x) = \sum_{j=1}^{n} L_j R_j$$

calculating each order of illumination coefficient $L_j$, constructing the set of illumination coefficients;

wherein $j \in [1, n]$ and $n$ is a preset total expansion order; and $I(x)$ is the pixel information of the face sampling point $x$.
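For illustration only: stacking the reflection coefficient sets of all sampling points into a matrix turns claim 7 into a linear system $I(x) \approx \sum_j L_j R_j$ over the sampling points, which can be solved for the $L_j$ in the least-squares sense; the function name and toy data below are hypothetical.

```python
import numpy as np

def illumination_coefficients(R, I):
    """R is a (num_points, n) matrix whose row k holds the reflection
    coefficient set of sampling point x_k; I is the vector of pixel
    information I(x_k). Solving I ≈ R @ L in the least-squares sense
    yields the illumination coefficient set L_j."""
    L, *_ = np.linalg.lstsq(R, I, rcond=None)
    return L

# Hypothetical toy data: 6 sampling points, n = 4 expansion orders.
rng = np.random.default_rng(1)
R = rng.uniform(size=(6, 4))
true_L = np.array([1.0, 0.2, -0.1, 0.4])
I = R @ true_L
print(illumination_coefficients(R, I))   # recovers true_L
```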
8. The method of claim 1, wherein the target picture is a frame of image in a target video file;
after determining the illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point, the method further includes:
acquiring illumination parameters respectively determined by at least two frames of images in the target video file;
processing each illumination parameter according to a set data processing technology to obtain an illumination parameter matched with the target video file;
the illumination parameters comprise an illumination incidence direction and illumination intensity.
9. The method of claim 8, wherein processing each of the illumination parameters in accordance with a set data processing technique comprises at least one of:
voting on the illumination parameters and taking the mean value or median of each parameter as the illumination parameter matched with the target video file;
calculating a moving average value of each illumination parameter by using a time domain sliding window to serve as the illumination parameter matched with the target video file; and
and filtering each illumination parameter by using a Kalman filter to obtain the illumination parameter matched with the target video file.
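For illustration only: minimal sketches of two of the data processing options of claim 9 — a time-domain sliding-window moving average and a scalar Kalman filter — applied to a sequence of per-frame estimates of a single illumination parameter; the window size, noise variances, and example values are hypothetical.

```python
import numpy as np

def sliding_window_mean(params, window=5):
    """Moving average over a time-domain sliding window (claim 9)."""
    kernel = np.ones(window) / window
    return np.convolve(params, kernel, mode="valid")

def kalman_smooth(params, process_var=1e-3, measure_var=1e-1):
    """Minimal scalar Kalman filter over per-frame estimates (claim 9)."""
    x, p = params[0], 1.0
    out = [x]
    for z in params[1:]:
        p += process_var                 # predict: state uncertainty grows
        k = p / (p + measure_var)        # Kalman gain
        x += k * (z - x)                 # update with the new frame estimate
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

# Hypothetical per-frame illumination-intensity estimates from a video:
frames = np.array([0.9, 1.1, 1.0, 1.3, 0.95, 1.05, 1.0])
print(sliding_window_mean(frames))
print(kalman_smooth(frames))
```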
10. An apparatus for determining an illumination parameter, comprising:
the sampling point identification module is used for identifying at least one face sampling point in a face area in a target picture;
the reflection coefficient set determining module is used for determining a reflection coefficient set matched with the face sampling points through a geometric method and/or a training set method;
the illumination coefficient set calculation module is used for calculating an illumination coefficient set corresponding to the face sampling point according to the pixel information respectively corresponding to the face sampling point and the reflection coefficient set;
and the illumination parameter determining module is used for determining the illumination parameters corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
11. A computer device, the device comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of determining illumination parameters of any one of claims 1-9.
12. A computer storage medium having stored thereon a computer program, characterized in that the program, when being executed by a processor, carries out the method of determining an illumination parameter according to any one of the claims 1-9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811108374.XA CN109214350B (en) | 2018-09-21 | 2018-09-21 | Method, device and equipment for determining illumination parameters and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811108374.XA CN109214350B (en) | 2018-09-21 | 2018-09-21 | Method, device and equipment for determining illumination parameters and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109214350A CN109214350A (en) | 2019-01-15 |
| CN109214350B (en) | 2020-12-22 |
Family
ID=64985375
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811108374.XA Active CN109214350B (en) | 2018-09-21 | 2018-09-21 | Method, device and equipment for determining illumination parameters and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109214350B (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109883414B (en) * | 2019-03-20 | 2021-08-27 | 百度在线网络技术(北京)有限公司 | Vehicle navigation method and device, electronic equipment and storage medium |
| CN112115747B (en) * | 2019-06-21 | 2024-11-19 | 阿里巴巴集团控股有限公司 | Liveness detection and data processing method, device, system and storage medium |
| CN110428491B (en) * | 2019-06-24 | 2021-05-04 | 北京大学 | 3D face reconstruction method, device, equipment and medium based on single frame image |
| CN110310224B (en) * | 2019-07-04 | 2023-05-30 | 北京字节跳动网络技术有限公司 | Light effect rendering method and device |
| WO2022011621A1 (en) * | 2020-07-15 | 2022-01-20 | 华为技术有限公司 | Face illumination image generation apparatus and method |
| CN114998092B (en) * | 2021-03-01 | 2025-07-15 | 阿里巴巴集团控股有限公司 | Image processing method, device and system |
| CN113409468B (en) * | 2021-05-10 | 2024-06-21 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8194072B2 (en) * | 2010-03-26 | 2012-06-05 | Mitsubishi Electric Research Laboratories, Inc. | Method for synthetically relighting images of objects |
| CN102346857B (en) * | 2011-09-14 | 2014-01-15 | 西安交通大学 | High-precision Simultaneous Estimation Method of Illumination Parameters and De-illumination Map of Face Image |
| US9928441B2 (en) * | 2015-11-20 | 2018-03-27 | Infinity Augmented Reality Israel Ltd. | Method and a system for determining radiation sources characteristics in a scene based on shadowing analysis |
| CN105894050A (en) * | 2016-06-01 | 2016-08-24 | 北京联合大学 | Multi-task learning based method for recognizing race and gender through human face image |
- 2018-09-21: CN application CN201811108374.XA filed; granted as CN109214350B (en); legal status: Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN109214350A (en) | 2019-01-15 |
Similar Documents
| Publication | Title |
|---|---|
| CN109214350B (en) | Method, device and equipment for determining illumination parameters and storage medium |
| JP6246757B2 (en) | Method and system for representing virtual objects in field of view of real environment |
| CN108895981B (en) | Three-dimensional measurement method, device, server and storage medium |
| Sarel et al. | Separating transparent layers through layer information exchange |
| Handa et al. | A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM |
| Meilland et al. | 3D high dynamic range dense visual SLAM and its application to real-time object re-lighting |
| US20100296724A1 | Method and System for Estimating 3D Pose of Specular Objects |
| EP3051793B1 (en) | Imaging apparatus, systems and methods |
| CN107408315A (en) | Process and method for real-time, physically accurate and realistic eyewear try-on |
| JP2017010562A (en) | Rapid 3D modeling |
| US11663775B2 | Generating physically-based material maps |
| EP2234064A1 (en) | Method for estimating 3D pose of specular objects |
| CN115039137B (en) | Method for rendering a virtual object based on a luminance estimation, and related products |
| EP3629303B1 (en) | Method and system for representing a virtual object in a view of a real environment |
| US12260591B2 | Method, apparatus and system for image processing |
| Alhakamy et al. | Real-time illumination and visual coherence for photorealistic augmented/mixed reality |
| US20240046561A1 | Method for assessing the physically based simulation quality of a glazed object |
| CN118570424B (en) | Virtual reality tour guide system |
| JPH04130587A (en) | Three-dimensional picture evaluation device |
| CN119251430A | Method, medium and device for constructing a virtual scene based on point cloud data |
| JP5441752B2 (en) | Method and apparatus for estimating a 3D pose of a 3D object in an environment |
| CN109166176B (en) | Three-dimensional face image generation method and device |
| Ackermann et al. | Removing the example from example-based photometric stereo |
| Koc et al. | Estimation of environmental lighting from known geometries for mobile augmented reality |
| Xing et al. | Online illumination estimation of outdoor scenes based on videos containing no shadow area |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 2023-08-21 | EE01 | Entry into force of recordation of patent licensing contract | Assignee: Beijing Intellectual Property Management Co.,Ltd.; Assignor: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.; Contract record no.: X2023110000095; Denomination of invention: A method, device, equipment, and storage medium for determining lighting parameters; Application publication date: 2019-01-15; Granted publication date: 2020-12-22; License type: Common License |