CN112395912B - A face segmentation method, electronic device and computer-readable storage medium - Google Patents
- Publication number
- CN112395912B (application CN201910750349.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- foreground
- depth
- face
- pixel point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0007—Image acquisition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field

Embodiments of the present disclosure relate to the field of image processing, and in particular to a face segmentation method, an electronic device, and a computer-readable storage medium.

Background

With the rapid development of modern society, biometric authentication technologies such as fingerprint recognition, iris recognition, and face recognition have attracted increasing attention as an important direction in security authentication. Among them, face segmentation, as the basis of face recognition, is widely used in that field.

Since a three-dimensional face contains much information that a two-dimensional face lacks, it can effectively address the problems of two-dimensional face recognition, which is susceptible to changes in pose, expression, and illumination, as well as to self-occlusion. Therefore, three-dimensional face segmentation is usually adopted to separate the face from an image. However, existing three-dimensional face segmentation must compute information for every pixel point in the image, so segmentation is slow, which in turn makes face recognition inefficient.
Summary of the Invention

Embodiments of the present disclosure aim to provide a face segmentation method, an electronic device, and a computer-readable storage medium capable of increasing the speed of three-dimensional face segmentation and thereby improving the efficiency of three-dimensional face recognition.

The technical solution of the embodiments of the present disclosure is implemented as follows:

In a first aspect, an embodiment of the present disclosure provides a face segmentation method applied to an electronic device provided with a camera. The method includes:

acquiring, through the camera, pixel point information of a depth image, where the pixel point information represents the depth information of each pixel point of the depth image and the plane coordinate information corresponding to that pixel point;

acquiring a foreground image in the depth image according to the pixel point information of the depth image, where the foreground image represents the image region in which the photographed subject is located;

acquiring the face height of the photographed subject through a face detection algorithm;

obtaining the face image in the foreground image according to a preset superimposed depth model, the pixel point information of the foreground image, and the face height, where the preset superimposed depth model is used to convert the foreground image into a foreground histogram; and

performing three-dimensional processing on the face image to obtain a three-dimensional face image of the photographed subject, and displaying the three-dimensional face image.
In a second aspect, an embodiment of the present disclosure provides an electronic device provided with a camera. The electronic device includes a collection unit, an acquisition unit, a segmentation unit, a three-dimensional processing unit, and a display unit, where:

the collection unit is configured to acquire, through the camera, pixel point information of a depth image, where the pixel point information represents the depth information of each pixel point of the depth image and the plane coordinate information corresponding to that pixel point;

the acquisition unit is configured to acquire the face height of a photographed subject through a face detection algorithm;

the segmentation unit is configured to acquire a foreground image in the depth image according to the pixel point information of the depth image, where the foreground image represents the image region in which the photographed subject is located;

the segmentation unit is further configured to obtain the face image in the foreground image according to a preset superimposed depth model, the pixel point information of the foreground image, and the face height, where the preset superimposed depth model is used to convert the foreground image into a foreground histogram;

the three-dimensional processing unit is configured to perform three-dimensional processing on the face image to obtain a three-dimensional face image of the photographed subject; and

the display unit is configured to display the three-dimensional face image.

In a third aspect, an embodiment of the present disclosure provides an electronic device provided with a camera. The electronic device includes at least a processor, a memory storing instructions executable by the processor, and a bus connecting the processor, the memory, and the camera. When the executable instructions are executed, the processor implements the steps of the above face segmentation method.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the steps of the above face segmentation method.
Embodiments of the present disclosure provide a face segmentation method, an electronic device, and a computer-readable storage medium. The method is applied to an electronic device provided with a camera and includes: acquiring, through the camera, pixel point information of a depth image, where the pixel point information represents the depth information of each pixel point and the corresponding plane coordinate information; acquiring a foreground image in the depth image according to the pixel point information, where the foreground image represents the image region in which the photographed subject is located; acquiring the face height of the subject through a face detection algorithm; obtaining the face image in the foreground image according to a preset superimposed depth model, the pixel point information of the foreground image, and the face height, where the preset superimposed depth model converts the foreground image into a foreground histogram; and performing three-dimensional processing on the face image to obtain and display a three-dimensional face image of the subject. In other words, on the one hand, the embodiments of the present disclosure first segment the depth image to obtain the foreground image and then segment the foreground image to obtain the face image, so face segmentation only requires computing the pixel point information of the foreground image, which increases the speed of three-dimensional face segmentation. On the other hand, converting the foreground image into a foreground histogram through the preset superimposed depth model and segmenting the converted histogram by the face height further increases the speed of face segmentation and thereby improves the efficiency of three-dimensional face recognition.
Brief Description of the Drawings

FIG. 1 is a structural block diagram of a face segmentation method provided by an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of an implementation of face segmentation provided by an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of depth image segmentation provided by an embodiment of the present disclosure;

FIG. 4 is a schematic diagram of foreground image segmentation provided by an embodiment of the present disclosure;

FIG. 5 is a first schematic diagram of the composition of an electronic device provided by an embodiment of the present disclosure;

FIG. 6 is a second schematic diagram of the composition of an electronic device provided by an embodiment of the present disclosure.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the embodiments are described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the embodiments of the present disclosure.
In face recognition, a three-dimensional face segmentation method is usually used to separate the face image from a complete image. Current mainstream three-dimensional face segmentation methods fall into two categories. The first is a segmentation algorithm based on geometric information: for each vertex involved in the image, related geometric information such as curvature, normals, and fitting surfaces is computed; a rough face region is then segmented from the image according to this geometric information, and the image is trimmed according to the computed curvature to complete the segmentation. The second is a segmentation algorithm based on the physiological characteristics of the face: a face model is constructed in advance and then adjusted according to key point information detected in the image to complete the segmentation.

However, for a real three-dimensional face model, the number of three-dimensional pixel points in the image, and of the faces they form, is large, so computing the geometric information of all three-dimensional pixel points is extremely expensive, and face segmentation is slow and inefficient. In addition, depth images are currently usually collected by scanning with equipment such as lasers, which is also costly.
Therefore, considering that in three-dimensional face segmentation, segmenting the face image from a whole depth image is only an intermediate step in obtaining the three-dimensional face image, and that the face image subsequently needs to be filtered, back-projected, registered, and displayed, an embodiment of the present disclosure proposes a face segmentation method. FIG. 1 is a structural block diagram of the face segmentation method provided by an embodiment of the present disclosure. As shown in FIG. 1, the block diagram includes a collection unit, a segmentation unit, a three-dimensional processing unit, and a display unit, where:

the collection unit is configured to collect a depth image; the segmentation unit is configured to separate the foreground image from the background image in the depth image and to separate the face image from the torso image in the foreground image; the three-dimensional processing unit is configured to perform three-dimensional processing such as filtering, back projection, and registration on the face image to obtain a three-dimensional face image; and the display unit is configured to display the three-dimensional face image.

From this block diagram it can be seen that the embodiment of the present disclosure no longer computes the information of all pixel points in the depth image; only the pixel point information in the foreground image needs to be computed, which increases the speed of face segmentation and thereby improves the efficiency of face recognition.
FIG. 2 is a schematic flowchart of an implementation of face segmentation provided by an embodiment of the present disclosure. As shown in FIG. 2, the face segmentation method is applied to an electronic device provided with a camera, and the electronic device may implement the method through the following steps:

S101. Acquire pixel point information of a depth image through the camera.

In the embodiment of the present disclosure, the electronic device is provided with a camera, through which it can acquire the pixel point information of a depth image. The pixel point information represents the depth information of each pixel point of the depth image and the plane coordinate information corresponding to that pixel point.

It should be noted that the depth image consists of at least one pixel point. Each pixel point corresponds to its depth information and its plane coordinate information, where the depth information represents the distance between the plane coordinates of the pixel point and the camera.

For example, to reduce the cost of collecting depth images, the electronic device may be a Kinect device, whose camera captures, at video rate, the subject to be displayed in three dimensions in order to obtain depth images. It should be noted that the depth image here is one of a sequence of depth images in the video collected by the Kinect device.

In the embodiment of the present disclosure, the electronic device is provided with at least one camera. When there is a single camera and it is monocular, the electronic device obtains the pixel point depth information as follows: collect a video of the subject through the monocular camera; extract depth image frames from the video; and perform stereo matching on the relationship between different frames of the video to obtain the pixel point depth information of the depth image.

When the electronic device is provided with two cameras, it obtains the pixel point depth information as follows: obtain the first and second distances from the two cameras, respectively, to a pixel point in the depth image; and determine the pixel point depth information according to the first distance, the second distance, and the preset distance between the two cameras.

For example, when the electronic device is a Kinect device provided with two infrared cameras, the Kinect device may obtain the pixel point depth information through the two infrared cameras; the embodiment of the present disclosure imposes no limitation here.
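The two-camera depth computation above can be illustrated with the standard rectified-stereo relation Z = f · B / d. The disclosure describes it in terms of two distances and the preset camera baseline; the disparity formulation below is one common, assumed realization, and all names and sample values are illustrative:

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth of a pixel point from a rectified stereo camera pair.

    Z = f * B / d, where f is the focal length in pixels, B the
    distance between the two cameras (the preset baseline), and d
    the horizontal disparity of the pixel point between the two
    views. Parameter names and sample values are assumptions.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at infinity
    return focal_length_px * baseline_m / disparity_px

# 500 px focal length, 7.5 cm baseline, 40 px disparity:
z = stereo_depth(500.0, 0.075, 40.0)  # 0.9375 m
```

Depth falls off as the reciprocal of disparity, so nearby subjects (large disparity) are resolved much more precisely than the background, which suits foreground-first segmentation.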
S102. Acquire a foreground image in the depth image according to the pixel point information of the depth image.

In the embodiment of the present disclosure, after acquiring the depth image, the electronic device acquires the foreground image in the depth image according to the pixel point information of the depth image.

It should be noted that the foreground image represents the image region of the depth image in which the subject is located, while the remaining part of the depth image may be regarded as the background image, which represents the environment. Therefore, to obtain the face image of the subject, the electronic device first needs to separate the foreground image from the background image in the depth image.

In the embodiment of the present disclosure, relative to the environment, the pixel points corresponding to the subject in the foreground image have the shortest distance to the camera, and the depth value of a pixel point represents the distance between that pixel point and the camera. Therefore, the foreground image in the depth image can be obtained through connected component analysis, that is, by comparing the depth values of two adjacent pixel points.

Specifically, the electronic device acquires the foreground image as follows: obtain the depth information of two adjacent pixel points from the pixel point depth information of the depth image; compute the depth difference between the two adjacent pixel points; and, when the depth difference is smaller than a preset difference threshold, take the image region in which the two adjacent pixel points are located as the foreground image of the depth image.

It should be noted that when the depth difference is smaller than the preset difference threshold, the image region containing the two adjacent pixel points belongs to the foreground image; when the depth difference is greater than or equal to the preset difference threshold, the region belongs to the background image and needs to be removed. In this way, the foreground image and the background image in the depth image can be separated to obtain the required foreground image.

For example, FIG. 3 is a schematic diagram of depth image segmentation provided by an embodiment of the present disclosure, in which the image labeled 30 is the depth image, the image labeled 31 is the background image to be removed, and the image labeled 32 is the foreground image, i.e., the part of the depth image other than the background image. By separating the foreground image from the background image and removing the latter, the face image can subsequently be segmented directly from the foreground image without computing the pixel point information of the background image, which increases the speed of face segmentation.
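The depth-difference foreground extraction described above can be sketched as connected-component region growing over the depth map. The seeding strategy (start from the pixel nearest the camera) and all names are illustrative assumptions rather than details fixed by the disclosure:

```python
from collections import deque

def extract_foreground(depth, diff_threshold):
    """Grow a foreground region over a depth map.

    depth: 2-D list of depth values (metres), None where no depth
    was measured. Two adjacent pixel points are merged into the
    same region when the difference of their depth values is below
    diff_threshold, as in the connected-component analysis of the
    disclosure. The region is seeded at the pixel closest to the
    camera, assuming the subject is nearest (an assumption here).
    Returns a boolean mask of the same shape marking the foreground.
    """
    rows, cols = len(depth), len(depth[0])
    seed = min(
        ((r, c) for r in range(rows) for c in range(cols)
         if depth[r][c] is not None),
        key=lambda rc: depth[rc[0]][rc[1]],
    )
    mask = [[False] * cols for _ in range(rows)]
    mask[seed[0]][seed[1]] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not mask[nr][nc]
                    and depth[nr][nc] is not None
                    and abs(depth[nr][nc] - depth[r][c]) < diff_threshold):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# Subject at ~1 m in the top-left corner, wall at ~3 m elsewhere:
depth_map = [[1.0, 1.1, 3.0],
             [1.0, 1.2, 3.1],
             [3.0, 3.0, 3.0]]
mask = extract_foreground(depth_map, diff_threshold=0.5)  # top-left 2x2 kept
```

Only pairs of adjacent pixels whose depth difference stays under the threshold are merged, so the far wall is never reached from the near subject and drops out as background.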
S103. Acquire the face height of the subject through a face detection algorithm.

In the embodiment of the present disclosure, during face segmentation the electronic device may acquire, through a face detection algorithm, the face height of the subject whose face is to be segmented, so that the segmentation can then be performed according to this face height.

It should be noted that the acquired face height is the real face height of the subject, i.e., the distance from the subject's forehead to the chin.

For example, the face detection algorithm may be the Deformable Part Model (DPM) algorithm, the Normalized Pixel Difference (NPD) algorithm, or the Viola-Jones face detection algorithm; the embodiment of the present disclosure imposes no limitation here. The process of detecting the face height with the Viola-Jones algorithm may be: detect the subject, and adjust a rectangular box so that it encloses the subject's face, thereby marking the height of the face.
S104. Obtain the face image in the foreground image according to the preset superimposed depth model, the pixel point information of the foreground image, and the face height.

In the embodiment of the present disclosure, after acquiring the foreground image and the face height, the electronic device obtains the face image in the foreground image according to the preset superimposed depth model, the pixel point information of the foreground image, and the face height.

It should be noted that S103 may be performed at any point before S104. The order in which S103 is written does not imply a strict execution order and does not limit the implementation of the embodiment of the present disclosure in any way; the specific execution order of the steps should be determined by their functions and possible internal logic.

In the embodiment of the present disclosure, the face image represents the region of the foreground image in which the subject's face is located, while the rest of the foreground image may be regarded as the torso image. Therefore, to obtain the face image of the subject, the electronic device needs to separate the face image from the torso image in the foreground image.

For example, FIG. 4 is a schematic diagram of foreground image segmentation provided by an embodiment of the present disclosure, in which the image labeled 40 is the foreground image, the image labeled 41 is the torso image to be removed, and the image labeled 42 is the face image in the foreground image. Thus, the embodiment of the present disclosure only needs to compute the pixel point information of the foreground image to segment the face image, rather than that of all pixel points in the depth image, which increases the speed of face segmentation.

It should be noted that in the embodiment of the present disclosure, the preset superimposed depth model is used to convert the foreground image into a foreground histogram.
For example, the preset superimposed depth model may be formula (1):

h(v) = Σ_u M_τ(u, v)    (1)

where h(v) is the superimposed depth feature value corresponding to the ordinate v of the pixel points of the foreground image, and M_τ(u, v) is the depth feature value corresponding to the pixel point with plane coordinates (u, v) of the foreground image. To obtain the face image in the foreground image, the electronic device may first convert the foreground image into a foreground histogram through the preset superimposed depth model, and then obtain the face image according to the face height of the subject and the foreground histogram.
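The superimposed depth model of formula (1), which sums the depth feature values of all foreground pixel points sharing an ordinate v, can be sketched as follows; the function name and data layout are assumptions for illustration:

```python
def foreground_histogram(depth_features):
    """Formula (1): h(v) = sum over u of M_tau(u, v).

    depth_features maps plane coordinates (u, v) of foreground
    pixel points to their depth feature values M_tau(u, v); pixel
    points without a depth value (feature value 0) may simply be
    omitted, since they do not participate in the computation.
    Returns a dict mapping each ordinate v to h(v).
    """
    h = {}
    for (u, v), m in depth_features.items():
        h[v] = h.get(v, 0) + m
    return h

# Worked example from the disclosure: four foreground pixel points,
# each with depth feature value 1.
features = {(1, 1): 1, (1, 2): 1, (2, 1): 1, (2, 2): 1}
h = foreground_histogram(features)  # h[2] == 2: pixels (1,2) and (2,2)
```

Each row of the foreground thus collapses to a single count, so the later face/torso split only has to examine one value per ordinate instead of every pixel point.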
Specifically, the electronic device obtains the face image as follows: obtain the foreground histogram corresponding to the foreground image according to the preset superimposed depth model and the pixel point information of the foreground image; and segment the foreground histogram by the face height to obtain the face image in the foreground image.

It can be understood that, by obtaining the foreground histogram through the preset superimposed depth model and the pixel point information of the foreground image, the electronic device converts the foreground image into a histogram representation that can be segmented by the horizontal line corresponding to the face height of the subject. Segmenting the foreground image directly by this horizontal line increases both the speed and the accuracy of face segmentation.
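The histogram cut at the face-height line can be sketched as below. This passage does not detail how the real face height is converted into image rows, so the sketch takes the row count as given, and it assumes the head is the topmost populated part of the foreground:

```python
def segment_face_rows(h, face_height_rows):
    """Cut the foreground histogram at the face-height line.

    h: dict mapping ordinate v to the superimposed depth feature
    value h(v), with v increasing downwards from the top of the
    image. face_height_rows: the detected face height, already
    expressed in image rows (the conversion from real height to
    rows is assumed to have been done elsewhere).
    Assumes the head is the topmost populated part of the
    foreground; returns the inclusive (top, bottom) ordinates of
    the face band, or None when the histogram is empty.
    """
    populated = sorted(v for v, count in h.items() if count > 0)
    if not populated:
        return None
    top = populated[0]
    return top, top + face_height_rows - 1

# A small foreground histogram; a 3-row face band starting at the
# first populated ordinate:
h = {3: 5, 4: 6, 5: 7, 6: 9, 7: 12}
band = segment_face_rows(h, 3)  # (3, 5): rows 6-7 are torso
```

Rows below the band are then discarded as the torso image, which is exactly the face/torso split of S104.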
In the embodiment of the present disclosure, the electronic device obtains the foreground histogram corresponding to the foreground image as follows: obtain the foreground depth feature value of each foreground pixel point from the depth value in that pixel point's depth information; obtain, from the foreground depth feature values and the preset superimposed depth model, the superimposed depth feature value corresponding to each foreground pixel point ordinate; and construct the foreground histogram from each foreground pixel point ordinate and its corresponding superimposed depth feature value.

It should be noted that, in the embodiment of the present disclosure, converting the foreground image into a foreground histogram, which the electronic device subsequently segments directly by the face height to obtain the face image, is an important step in increasing the segmentation speed.

For example, the electronic device may obtain the foreground depth feature values as follows: when a foreground pixel point has a depth value, its foreground depth feature value is 1; when it has no depth value, its foreground depth feature value is 0 and does not participate in the computation.
In addition, the process by which the electronic device obtains, according to the foreground depth feature values and the preset superimposed depth model, the superimposed depth feature value corresponding to at least one foreground pixel ordinate may be as follows: when the preset superimposed depth model is
h(v) = Σu Mτ(u,v) (1)
where Mτ(u,v) is the depth feature value of the foreground pixel at plane coordinates (u,v), the superimposed depth feature value h(v) corresponding to a foreground pixel ordinate v is obtained by superimposing the foreground depth feature values of the foreground pixels with that ordinate.
It can be understood that, in the embodiments of the present disclosure, to obtain the superimposed depth feature value corresponding to a foreground pixel ordinate v, it is necessary to first traverse all foreground pixel abscissas sharing the ordinate v so as to collect all foreground pixels with ordinate v, and then superimpose the depth feature values of those pixels to obtain the superimposed depth feature value corresponding to the ordinate v.
Exemplarily, if the foreground pixel coordinates are (1, 1), (1, 2), (2, 1) and (2, 2), and the depth feature values of all these foreground pixels are 1, then the foreground pixels with ordinate 2 are (1, 2) and (2, 2); by superimposing the depth feature value 1 of pixel (1, 2) and the depth feature value 1 of pixel (2, 2), the superimposed depth feature value corresponding to the ordinate 2 is obtained as 2.
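The row-wise superposition described above can be sketched as follows; the function name and the 0/1 mask encoding of the depth feature values are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def foreground_histogram(mask):
    # mask[v, u] holds the foreground depth feature value of the pixel at
    # abscissa u and ordinate v: 1 if its depth information has a valid
    # depth value, 0 otherwise (hypothetical encoding).
    # h[v] is the superimposed depth feature value for ordinate v, i.e.
    # the sum over all abscissas u sharing that ordinate.
    return mask.sum(axis=1)

# The worked example from the text: foreground pixels at
# (u, v) = (1, 1), (1, 2), (2, 1), (2, 2), each with feature value 1.
M = np.zeros((3, 3), dtype=int)
for u, v in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    M[v, u] = 1
h = foreground_histogram(M)
print(h[2])  # ordinate 2 receives contributions from (1, 2) and (2, 2): 2
```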
In the embodiments of the present disclosure, the process by which the electronic device segments the foreground histogram according to the face height to obtain the face image in the foreground image is as follows: when a foreground pixel ordinate in the foreground histogram is less than or equal to the face height of the subject, and the superimposed depth feature value corresponding to that ordinate is not equal to the preset superimposed depth feature value, the image formed by the foreground pixels with that ordinate is taken as the face image in the foreground image. In this way, by comparing the ordinates in the foreground histogram with the face height, the embodiments of the present disclosure obtain the face image directly, which improves both the speed and the accuracy of face image segmentation.
Exemplarily, the preset superimposed depth feature value may be set according to actual user needs; for example, it may be set to 0, which is not limited by the embodiments of the present disclosure. When the preset superimposed depth feature value is 0, the process of segmenting the foreground histogram according to the face height to obtain the face image in the foreground image may correspond to formula (2), where v is a foreground pixel ordinate, y is the face height, h(v) is the superimposed depth feature value corresponding to the ordinate v, and H is the region in which the face image is located.
H = {v | v ≤ y and h(v) ≠ 0} (2)
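Formula (2) reads as a direct filter over the histogram rows; a minimal sketch, with an illustrative function name:

```python
def face_rows(h, y):
    # H = {v | v <= y and h(v) != 0}  -- formula (2):
    # non-empty histogram rows whose ordinate does not exceed the
    # face height y belong to the face region.
    return [v for v, hv in enumerate(h) if v <= y and hv != 0]

# With the histogram h = [0, 2, 2] and a face height y = 1,
# only row 1 belongs to the face region.
print(face_rows([0, 2, 2], 1))  # [1]
```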
In the embodiments of the present disclosure, when segmenting the foreground histogram according to the face height, the electronic device can also obtain the torso image in the foreground image. The process of obtaining the torso image in the foreground image is as follows: when a foreground pixel ordinate in the foreground histogram is greater than the face height of the subject, and the superimposed depth feature value corresponding to that ordinate is not equal to the preset superimposed depth feature value, the image formed by the foreground pixels with that ordinate is taken as the torso image in the foreground image.
Exemplarily, when the preset superimposed depth feature value is 0, the process by which the electronic device segments the foreground histogram according to the face height to obtain the torso image in the foreground image may correspond to formula (3), where v is a foreground pixel ordinate, y is the face height, h(v) is the superimposed depth feature value corresponding to the ordinate v, and J is the region in which the torso image is located.
J = {v | v > y and h(v) ≠ 0} (3)
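Formula (3) is the complementary filter for the torso rows; again a minimal sketch with an illustrative name:

```python
def torso_rows(h, y):
    # J = {v | v > y and h(v) != 0}  -- formula (3):
    # non-empty histogram rows whose ordinate exceeds the face height
    # belong to the torso region.
    return [v for v, hv in enumerate(h) if v > y and hv != 0]

print(torso_rows([0, 2, 2, 3], 1))  # [2, 3]
```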
In other embodiments, after obtaining the foreground depth feature value of each foreground pixel from the depth value corresponding to its depth information, and before performing three-dimensional processing on the face image to obtain the three-dimensional face image of the subject, the electronic device may also obtain the face image of the subject according to the foreground depth feature values and the face height of the subject, specifically: when the foreground depth feature value of a foreground pixel equals the preset depth feature value, and the ordinate of the foreground pixel is less than or equal to the face height of the subject, the image formed by such foreground pixels is taken as the face image in the foreground image. In this way, based on the foreground pixel ordinates and the face height, the embodiments of the present disclosure also obtain the face image directly in another form, which improves the speed and accuracy of face image segmentation while allowing the face image to be obtained flexibly.
It should be noted that the electronic device may also determine the face image of the subject by comparing the foreground pixel ordinates with the face height and determining whether each foreground pixel has a depth feature value corresponding to its depth information.
Exemplarily, the preset depth feature value may be set according to actual user needs; for example, it may be set to 1, which is not limited by the embodiments of the present disclosure.
When the preset depth feature value is 1, the process by which the electronic device obtains the face image of the subject according to the foreground depth feature values and the face height of the subject may correspond to formula (4):
D'τ(u,v) = Dτ(u,v), if Mτ(u,v) = 1 and v ≤ y; D'τ(u,v) = 0, otherwise (4)
where Dτ(u,v) is the depth value corresponding to the pixel depth information of the foreground image, D'τ(u,v) is the depth value corresponding to the pixel depth information of the face image in the foreground image, Mτ(u,v) is the depth feature value corresponding to the foreground pixel plane coordinates (u,v), v is the ordinate of the foreground pixel, and y is the face height of the subject.
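The per-pixel selection of formula (4) can be sketched as a vectorized mask; array layout and names are illustrative assumptions:

```python
import numpy as np

def face_depth(D, M, y):
    # Sketch of formula (4): D'(u, v) keeps the foreground depth value
    # D(u, v) where the depth feature value M(u, v) equals the preset
    # value 1 and the ordinate v does not exceed the face height y;
    # all other pixels are cleared. Arrays are indexed [v, u].
    rows = np.arange(D.shape[0])[:, None]   # ordinate v of each row
    keep = (M == 1) & (rows <= y)
    return np.where(keep, D, 0)

D = np.array([[5, 5], [6, 6], [7, 7]])      # hypothetical foreground depths
M = np.array([[1, 0], [1, 1], [1, 1]])      # hypothetical feature values
print(face_depth(D, M, 1))                  # rows 0 and 1 survive where M == 1
```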
S105. Perform three-dimensional processing on the face image to obtain a three-dimensional face image of the subject, and display the three-dimensional face image.
In the embodiments of the present disclosure, after obtaining the face image, the electronic device may perform three-dimensional processing on the face image to obtain the three-dimensional face image of the subject.
It should be noted that the three-dimensional processing of the face image may include filtering, back-projection and registration. Specifically, the process by which the electronic device performs three-dimensional processing on the face image to obtain the three-dimensional face image of the subject is as follows: filtering the face image to obtain a filtered face image; back-projecting the filtered face image to obtain a three-dimensional point cloud of the filtered face image and the normal vectors of the point cloud; registering the three-dimensional point clouds corresponding to the filtered face images to obtain the three-dimensional point cloud of the registered face image; and obtaining the three-dimensional face image of the subject according to the three-dimensional point cloud of the registered face image and its corresponding normal vectors.
In the embodiments of the present disclosure, the face image obtained by the electronic device from the pixel information of the depth image may contain jagged artifacts. To obtain an accurate three-dimensional face image, the face image needs to be filtered. Exemplarily, the electronic device may filter the face image with a bilateral filter, so as to remove noise while preserving the discontinuous depth values in the face image.
The specific process of filtering the face image to obtain the filtered face image is as follows: obtaining, from the depth information of the face pixels in the face image, the depth variance value corresponding to the depth information of each face pixel; and when the depth variance value is less than a preset depth variance threshold, taking the image formed by such face pixels as the filtered face image.
Exemplarily, the process of filtering the face image to obtain the filtered face image may correspond to formula (5):
D''τ(u,v) = D'τ(u,v), if δ(u,v)2 < δ2max; D''τ(u,v) = 0, otherwise (5)
where D''τ(u,v) is the depth value corresponding to the pixel information of the filtered face image, D'τ(u,v) is the depth value corresponding to the pixel depth information of the face image in the foreground image, δ(u,v)2 is the depth variance value corresponding to the depth information of the face pixel, and δ2max is the preset depth variance threshold.
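The variance-threshold step of formula (5) can be sketched as follows; the per-pixel variance map is assumed to be precomputed, and all names are illustrative:

```python
import numpy as np

def variance_filter(D_face, var_map, var_max):
    # Sketch of formula (5): a face pixel keeps its depth value only if
    # its depth variance delta(u, v)^2 stays below the preset threshold
    # delta^2_max; otherwise the depth value is discarded.
    return np.where(var_map < var_max, D_face, 0)

D = np.array([[10.0, 12.0], [11.0, 13.0]])       # hypothetical face depths
var_map = np.array([[0.1, 5.0], [0.2, 0.3]])     # hypothetical variances
print(variance_filter(D, var_map, 1.0))          # pixel (1, 0) is rejected
```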
In the embodiments of the present disclosure, after obtaining the filtered face image, the electronic device also needs to back-project the filtered face image in order to obtain the three-dimensional point cloud of the face image. The process by which the electronic device back-projects the filtered face image to obtain the three-dimensional point cloud of the filtered face image and the normal vectors corresponding to the point cloud is as follows: obtaining a three-dimensional coordinate model and a normal vector model; obtaining the three-dimensional point cloud of the filtered face image according to the pixel depth information of the filtered face image, the pixel plane coordinate information of the filtered face image and the three-dimensional coordinate model; and obtaining the normal vectors corresponding to the three-dimensional point cloud according to the three-dimensional point cloud of the filtered face image and the normal vector model.
It should be noted that the three-dimensional coordinate model is used to represent the three-dimensional information corresponding to the pixels of the filtered face image; by inputting the pixel depth information and the pixel plane coordinate information of the filtered face image into the three-dimensional coordinate model, the three-dimensional point cloud of the filtered face image can be obtained.
Exemplarily, the formula mapping a three-dimensional point [X, Y, Z]T of the filtered face image to the pixel plane coordinates [u, v]T of the filtered face image is (6):
D''τ(u,v)*[u, v, 1]T = K*[X, Y, Z]T (6)
where D''τ(u,v) is the depth value corresponding to the pixel information of the filtered face image, fx and fy are the focal lengths of the camera, and cx and cy are the coordinates of the camera's projection center, the projection center realizing the projective mapping from three-dimensional points to the plane coordinates of two-dimensional pixels.
Here K is the intrinsic matrix of the camera, and K may be expression (7):
K = [fx 0 cx; 0 fy cy; 0 0 1] (7)
where the semicolons separate the rows of the matrix.
By expanding the first row of formula (6), expression (8) for the pixel plane coordinate u can be obtained.
u = X*fx/D''τ(u,v) + cx (8)
By expanding the second row of formula (6), expression (9) for the pixel plane coordinate v can be obtained.
v = Y*fy/D''τ(u,v) + cy (9)
From formula (6), formula (8) and formula (9), formula (10) for the three-dimensional point [X, Y, Z]T can be obtained:
[X, Y, Z]T = [(u-cx)*D''τ(u,v)/fx, (v-cy)*D''τ(u,v)/fy, D''τ(u,v)]T (10)
By rewriting formula (10), formula (11) for the three-dimensional point cloud Vτ(u,v) corresponding to the pixel plane coordinates (u,v) of the filtered face image can be obtained.
Vτ(u,v) = D''τ(u,v)*K-1*[u, v, 1]T (11)
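Formula (11) can be applied pixel by pixel; the intrinsic parameter values below are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

fx, fy, cx, cy = 500.0, 500.0, 160.0, 120.0   # illustrative intrinsics
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])               # intrinsic matrix, formula (7)
K_inv = np.linalg.inv(K)

def back_project(D):
    # Formula (11): V(u, v) = D''(u, v) * K^{-1} * [u, v, 1]^T.
    # D is indexed [v, u]; the result stores an (X, Y, Z) point per pixel.
    H, W = D.shape
    V = np.zeros((H, W, 3))
    for v in range(H):
        for u in range(W):
            V[v, u] = D[v, u] * (K_inv @ np.array([u, v, 1.0]))
    return V

# A pixel at the projection center back-projects onto the optical axis.
D = np.full((240, 320), 2.0)
P = back_project(D)[120, 160]
print(P)  # [0. 0. 2.]
```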
In the embodiments of the present disclosure, the normal vector model is used to represent the surface information of the three-dimensional face image. By inputting adjacent three-dimensional points of the face image's point cloud into the normal vector model, the normal vector of the plane formed by those adjacent points can be obtained, that is, the normal vector corresponding to the three-dimensional point cloud; the normal vector is the normal vector of the plane whose vertices are the adjacent three-dimensional points.
Exemplarily, the normal vector model may be formula (12), where Nτ(u,v) is the normal vector corresponding to the three-dimensional point cloud, and Vτ(u+1,v) and Vτ(u,v+1) are the three-dimensional points at the coordinates adjacent to the plane pixel (u,v) of the filtered face image.
Nτ(u,v) = [Vτ(u+1,v) - Vτ(u,v)] × [Vτ(u,v+1) - Vτ(u,v)] (12)
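Formula (12) can be evaluated for a whole point cloud with vectorized differences; the indexing convention (arrays stored [v, u, :]) is an assumption:

```python
import numpy as np

def normal_vectors(V):
    # Formula (12): N(u, v) = [V(u+1, v) - V(u, v)] x [V(u, v+1) - V(u, v)].
    # V is indexed [v, u, :]; normals are defined where both neighbours exist,
    # so the last row and column are left as zero vectors.
    a = V[:-1, 1:] - V[:-1, :-1]    # V(u+1, v) - V(u, v)
    b = V[1:, :-1] - V[:-1, :-1]    # V(u, v+1) - V(u, v)
    N = np.zeros_like(V)
    N[:-1, :-1] = np.cross(a, b)
    return N

# For a flat point cloud in the z = 0 plane the normal is the z axis.
V = np.zeros((4, 4, 3))
for v in range(4):
    for u in range(4):
        V[v, u] = [u, v, 0.0]
print(normal_vectors(V)[0, 0])  # [0. 0. 1.]
```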
In the embodiments of the present disclosure, the depth image acquired by the electronic device is at least one depth image. By first segmenting out the foreground image, then segmenting out the face image, and applying the three-dimensional processing of filtering and back-projection, the three-dimensional point cloud of at least one filtered face image can be obtained. To obtain a three-dimensional face image with high accuracy and clarity, the three-dimensional point clouds corresponding to multiple filtered face images may be registered to obtain the three-dimensional point cloud of the registered face image.
Exemplarily, the three-dimensional point clouds corresponding to the multiple filtered face images may be registered with a sparse iterative closest point algorithm, which is not limited by the embodiments of the present disclosure.
After obtaining the three-dimensional point cloud of the registered face image, the electronic device obtains the three-dimensional face image of the subject according to the three-dimensional point cloud of the registered face image and its corresponding normal vectors. For the three-dimensional point clouds of multiple registered face images, the point clouds may first be fused, and the three-dimensional face image of the subject then obtained according to the fused three-dimensional point cloud and its corresponding normal vectors.
Exemplarily, the three-dimensional point clouds of the multiple registered face images may be fused with a volumetric integration technique, which is not limited by the embodiments of the present disclosure.
In the embodiments of the present disclosure, after obtaining the three-dimensional face image of the subject, the electronic device renders the three-dimensional face image and displays it.
It should be noted that the rendered three-dimensional face image reflects the current state of the subject; the three-dimensional face image may be updated by acquiring depth images in real time, and the three-dimensional point cloud and the corresponding normal vectors of the face image may also be redefined from the current state of the subject so as to update the obtained three-dimensional face image, further improving robustness.
Exemplarily, the three-dimensional face image may be rendered with the Open Graphics Library (OpenGL), a cross-language, cross-platform professional graphics programming interface for rendering three-dimensional vector images; the three-dimensional face image may be drawn by invoking the underlying graphics library, which is not limited by the embodiments of the present disclosure.
Through the embodiments of the present disclosure, the electronic device first segments the depth image to obtain the foreground image, and then segments the foreground image to obtain the face image. In this way, on the one hand, face segmentation can be achieved by computing only the pixel information of the foreground image, which improves the speed and efficiency of three-dimensional face segmentation; on the other hand, the embodiments of the present disclosure convert the foreground image into a foreground histogram through the preset superimposed depth model and segment the converted foreground histogram according to the face height, which can further increase the speed of face segmentation and thus the efficiency of three-dimensional face recognition. Meanwhile, the embodiments of the present disclosure apply three-dimensional processing such as filtering, back-projection and registration to the face image, which removes noise and improves the accuracy of the three-dimensional face image.
In addition, the face segmentation method provided by the embodiments of the present disclosure can be applied in three-dimensional face recognition systems and three-dimensional face matching systems. By adopting the Compute Unified Device Architecture (CUDA), the running speed of the face segmentation method can be increased, making segmentation more efficient.
Based on the same inventive concept as the above embodiments, FIG. 5 is a first schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 1000 is provided with a camera and includes an acquisition unit 1001, an obtaining unit 1002, a segmentation unit 1003, a three-dimensional processing unit 1004 and a display unit 1005, wherein:
the acquisition unit 1001 is configured to acquire pixel point information of a depth image through the camera, the pixel point information of the depth image being used to represent the pixel depth information of the depth image and the plane coordinate information corresponding to the pixels;
the obtaining unit 1002 is configured to obtain the face height of the subject through a face detection algorithm;
the segmentation unit 1003 is configured to obtain, according to the pixel point information of the depth image, the foreground image in the depth image, the foreground image being used to represent the image in which the subject is located;
the segmentation unit 1003 is further configured to obtain the face image in the foreground image according to a preset superimposed depth model, the pixel point information of the foreground image and the face height, the preset superimposed depth model being used to convert the foreground image into a foreground histogram;
the three-dimensional processing unit 1004 is configured to perform three-dimensional processing on the face image to obtain a three-dimensional face image of the subject; and
the display unit 1005 is configured to display the three-dimensional face image.
In other embodiments, the segmentation unit 1003 further includes:
a first segmentation unit 1006, configured to obtain the foreground histogram corresponding to the foreground image according to the preset superimposed depth model and the pixel point information of the foreground image; and
a second segmentation unit 1007, configured to segment the foreground histogram according to the face height to obtain the face image in the foreground image.
In other embodiments, the first segmentation unit 1006 is specifically configured to: obtain, from the depth value corresponding to the depth information of each foreground pixel in the foreground image, the foreground depth feature value corresponding to each foreground pixel; obtain, according to the foreground depth feature values and the preset superimposed depth model, the superimposed depth feature value corresponding to at least one foreground pixel ordinate; and form the foreground histogram corresponding to the foreground image from each foreground pixel ordinate together with the superimposed depth feature value corresponding to that ordinate.
In other embodiments, the second segmentation unit 1007 is specifically configured to: when a foreground pixel ordinate in the foreground histogram is less than or equal to the face height of the subject, and the superimposed depth feature value corresponding to that ordinate is not equal to the preset superimposed depth feature value, take the image formed by the foreground pixels with that ordinate as the face image in the foreground image.
In other embodiments, the electronic device 1000 further includes:
a third segmentation unit 1008, configured to, after the foreground depth feature value of each foreground pixel is obtained from the depth value corresponding to its depth information, and before the face image is subjected to three-dimensional processing to obtain the three-dimensional face image of the subject, take the image formed by a foreground pixel as the face image in the foreground image when the foreground depth feature value of the foreground pixel equals the preset depth feature value and the ordinate of the foreground pixel is less than or equal to the face height of the subject.
In other embodiments, the three-dimensional processing unit 1004 includes:
a filtering unit 1009, configured to filter the face image to obtain a filtered face image;
a back-projection unit 1010, configured to back-project the filtered face image to obtain a three-dimensional point cloud of the filtered face image and the normal vectors of the point cloud;
a registration unit 1011, configured to register the three-dimensional point clouds corresponding to the filtered face images to obtain the three-dimensional point cloud of the registered face image; and
a first obtaining unit 1012, configured to obtain the three-dimensional face image of the subject according to the three-dimensional point cloud of the registered face image and its corresponding normal vectors.
In other embodiments, the filtering unit 1009 is specifically configured to: obtain, from the depth information of the face pixels of the face image, the depth variance value corresponding to the depth information of each face pixel; and when the depth variance value is less than the preset depth variance threshold, take the image formed by such face pixels as the filtered face image.
In other embodiments, the back-projection unit 1010 is specifically configured to: obtain a three-dimensional coordinate model and a normal vector model, the three-dimensional coordinate model being used to represent the three-dimensional information corresponding to the pixels of the filtered face image and the normal vector model being used to represent the surface information of the three-dimensional face image; obtain the three-dimensional point cloud of the filtered face image according to the pixel depth information of the filtered face image, the pixel plane coordinate information of the filtered face image and the three-dimensional coordinate model; and obtain the normal vectors corresponding to the three-dimensional point cloud according to the three-dimensional point cloud of the filtered face image and the normal vector model.
In other embodiments, the segmentation unit 1003 is specifically configured to: obtain the depth information of two adjacent pixels from the pixel depth information of the depth image; obtain the depth difference of the two adjacent pixels according to their depth information; and when the depth difference is less than a preset difference threshold, take the image in which the two adjacent pixels are located as the foreground image in the depth image.
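The adjacent-depth-difference rule used by the segmentation unit can be sketched as a region-growing pass; seeding from the pixel closest to the camera is an assumption not stated in the disclosure:

```python
from collections import deque

def foreground_mask(D, diff_max):
    # Grow a region from the pixel closest to the camera, adding any
    # 4-neighbour whose depth differs from the current pixel by less
    # than the preset difference threshold diff_max.
    H, W = len(D), len(D[0])
    seed = min(((v, u) for v in range(H) for u in range(W)),
               key=lambda p: D[p[0]][p[1]])
    mask = [[0] * W for _ in range(H)]
    mask[seed[0]][seed[1]] = 1
    q = deque([seed])
    while q:
        v, u = q.popleft()
        for dv, du in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nv, nu = v + dv, u + du
            if (0 <= nv < H and 0 <= nu < W and not mask[nv][nu]
                    and abs(D[nv][nu] - D[v][u]) < diff_max):
                mask[nv][nu] = 1
                q.append((nv, nu))
    return mask

# A near subject (depth about 1) against a far background (depth 9).
D = [[1, 1, 9],
     [1, 2, 9],
     [9, 9, 9]]
print(foreground_mask(D, 2))  # [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
```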
Through the embodiments of the present disclosure, on the one hand, the depth image is first segmented to obtain the foreground image, and the foreground image is then segmented to obtain the face image; in this way, face segmentation can be achieved by computing only the pixel information of the foreground image, which improves the speed of three-dimensional face segmentation. On the other hand, the embodiments of the present disclosure convert the foreground image into a foreground histogram through the preset superimposed depth model and segment the converted foreground histogram according to the face height, which can further increase the speed of face segmentation and thus the efficiency of three-dimensional face recognition. Meanwhile, the embodiments of the present disclosure apply three-dimensional processing such as filtering, back-projection and registration to the face image, which removes noise and improves the accuracy of the three-dimensional face image.
Based on the same inventive concept as the foregoing embodiments, an embodiment of the present disclosure provides an electronic device. FIG. 6 is a second schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in FIG. 6, the electronic device is provided with a camera 05 and further includes at least a processor 01, a memory 02, a communication interface 03 and a communication bus 04, wherein the communication bus 04 is used to implement connection and communication among the processor 01, the memory 02, the communication interface 03 and the camera 05; the communication interface 03 is used to acquire pixel point information of a depth image through the camera; and the processor 01 is used to execute executable instructions stored in the memory 02 to implement the steps of the face segmentation method provided by the above embodiments.
Through the embodiments of the present disclosure, on the one hand, the depth image is first segmented to obtain the foreground image, and the foreground image is then segmented to obtain the face image; in this way, face segmentation can be achieved by computing only the pixel information of the foreground image, which improves the speed of three-dimensional face segmentation. On the other hand, the embodiments of the present disclosure convert the foreground image into a foreground histogram through the preset superimposed depth model and segment the converted foreground histogram according to the face height, which can further increase the speed of face segmentation and thus the efficiency of three-dimensional face recognition. Meanwhile, the embodiments of the present disclosure apply three-dimensional processing such as filtering, back-projection and registration to the face image, which removes noise and improves the accuracy of the three-dimensional face image.
In addition, the components in this embodiment may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as ferromagnetic random access memory (FRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic surface memory, optical disc, or compact disc read-only memory (CD-ROM); the embodiments of the present disclosure are not limited in this regard.
Based on the foregoing embodiments, an embodiment of the present disclosure provides a computer-readable storage medium on which executable instructions are stored; when the executable instructions are executed by the above-described processor, the steps of the face segmentation method in the foregoing embodiments are implemented.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage and optical storage) containing computer-usable program code.
The embodiments of the present disclosure are described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above descriptions are merely preferred embodiments of the present disclosure and are not intended to limit the protection scope of the embodiments of the present disclosure.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910750349.XA CN112395912B (en) | 2019-08-14 | 2019-08-14 | A face segmentation method, electronic device and computer-readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910750349.XA CN112395912B (en) | 2019-08-14 | 2019-08-14 | A face segmentation method, electronic device and computer-readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112395912A CN112395912A (en) | 2021-02-23 |
| CN112395912B true CN112395912B (en) | 2022-12-13 |
Family
ID=74601442
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910750349.XA Active CN112395912B (en) | 2019-08-14 | 2019-08-14 | A face segmentation method, electronic device and computer-readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112395912B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119090902B (en) * | 2024-11-05 | 2025-02-11 | 慧诺瑞德(北京)科技有限公司 | Plant point cloud extraction method, device and equipment |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104427291A (en) * | 2013-08-19 | 2015-03-18 | 华为技术有限公司 | Image processing method and device |
| CN107370958A (en) * | 2017-08-29 | 2017-11-21 | 广东欧珀移动通信有限公司 | Image virtualization processing method, device and shooting terminal |
2019
- 2019-08-14: CN CN201910750349.XA patent CN112395912B (en) — status: Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104427291A (en) * | 2013-08-19 | 2015-03-18 | 华为技术有限公司 | Image processing method and device |
| CN107370958A (en) * | 2017-08-29 | 2017-11-21 | 广东欧珀移动通信有限公司 | Image virtualization processing method, device and shooting terminal |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112395912A (en) | 2021-02-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11928800B2 (en) | Image coordinate system transformation method and apparatus, device, and storage medium | |
| CN106503671B (en) | The method and apparatus for determining human face posture | |
| CN108549873A (en) | Three-dimensional face identification method and three-dimensional face recognition system | |
| TWI394093B (en) | An image synthesis method | |
| CN106981078B (en) | Sight line correction method and device, intelligent conference terminal and storage medium | |
| KR101510312B1 (en) | 3D face-modeling device, system and method using Multiple cameras | |
| CN114463832B (en) | Point cloud-based traffic scene line of sight tracking method and system | |
| EP3699865B1 (en) | Three-dimensional face shape derivation device, three-dimensional face shape deriving method, and non-transitory computer readable medium | |
| CN111325828B (en) | Three-dimensional face acquisition method and device based on three-dimensional camera | |
| CN110647782A (en) | Three-dimensional face reconstruction and multi-pose face recognition method and device | |
| CN103810475A (en) | Target object recognition method and apparatus | |
| WO2021218568A1 (en) | Image depth determination method, living body recognition method, circuit, device, and medium | |
| CN113362467B (en) | Point cloud preprocessing and ShuffleNet-based mobile terminal three-dimensional pose estimation method | |
| CN111881841B (en) | A face detection and recognition method based on binocular vision | |
| US20240242318A1 (en) | Face deformation compensating method for face depth image, imaging device, and storage medium | |
| CN115797451A (en) | Acupuncture point identification method, device and equipment and readable storage medium | |
| JP2001291108A (en) | Image processing apparatus and method, and program recording medium | |
| CN112395912B (en) | A face segmentation method, electronic device and computer-readable storage medium | |
| CN112800966B (en) | Sight tracking method and electronic equipment | |
| CN112073640B (en) | Panoramic information acquisition pose acquisition method, device and system | |
| CN111914790B (en) | Real-time human rotation angle recognition method in different scenarios based on dual cameras | |
| JP2008059108A (en) | Image processing apparatus, image processing method, program thereof, and human flow monitoring system | |
| Maninchedda et al. | Face reconstruction on mobile devices using a height map shape model and fast regularization | |
| Jiménez et al. | Face tracking and pose estimation with automatic three-dimensional model construction | |
| CN110706357B (en) | Navigation system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |