CN110472460A - Face image processing process and device - Google Patents
- Publication number
- CN110472460A (application CN201810453319.8A)
- Authority
- CN
- China
- Prior art keywords
- face image
- face
- group
- images
- center
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V40/168—Feature extraction; Face representation
- G06V40/172—Classification, e.g. identification
Abstract
The present disclosure provides a face image processing method and device. The method comprises: acquiring a plurality of face images, the plurality including face images of at least one person taken from different angles; clustering the plurality of face images to obtain a plurality of face image groups; and removing some face images from at least one face image group according to user input and/or preset rules, so that every face image in each of the at least one face image group corresponds to the same person. With the disclosed solution, multiple face image groups can be obtained in which the multiple different-angle face images of one group correspond to one person, so that the groups can be used to train a face recognition model capable of recognizing face images taken from different angles.
Description
Technical Field
The present disclosure relates to the field of computer technology and, more specifically, to a face image processing method and device.
Background
In offline unmanned retail and smart retail scenarios, pay-by-face authentication rests on high-precision detection and recognition of customers' faces. In these scenarios, face information is generally collected by surveillance cameras deployed in the store. This differs from standard face data captured in an experimental setting, such as ID photos, which are characterized by a frontal view, no depression angle, a uniform background, clear facial features, and no occlusion. Face images captured by surveillance cameras in such open environments are often extremely complex and fall far short of the ID-photo standard.
Existing face recognition algorithms are generally trained on standard face recognition data sets. In these data sets, the shooting conditions are far simpler than those of surveillance footage, so they fail to cover the multi-angle face images that surveillance cameras capture. As a result, a model trained directly on a standard face recognition data set achieves very low accuracy when recognizing face images captured by surveillance cameras.
In the course of realizing the concept of the present disclosure, the inventors found at least the following problem in the prior art: in scenarios such as smart retail, users' face images are captured at various angles, and a model trained on an existing standard face recognition data set recognizes surveillance-camera face images with very low accuracy.
Summary
In view of this, the present disclosure provides a face image processing method and device to address the low recognition accuracy of prior-art models, trained on standard face recognition data sets, in scenarios involving face images taken from multiple angles.
One aspect of the present disclosure provides a face image processing method, comprising: first acquiring a plurality of face images, which are required to include face images of at least one person taken from different angles so that multiple different-angle face images of each person are obtained; then clustering the acquired face images to obtain a plurality of face image groups. Because each person has distinctive facial features, the clustering gathers each person's different-angle face images into the same group, which in effect automatically labels the face images: for example, face image group 1 and the multiple different-angle face images corresponding to group 1. In addition, after the face image groups are obtained, some face images are removed from at least one group according to user input and/or preset rules, so that every face image in each of the at least one face image group corresponds to the same person, thereby correcting erroneous clustering results and improving labeling accuracy.
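The clustering step described above can be sketched as follows. This is a minimal illustration that assumes the face images have already been converted to fixed-length embedding vectors by some face feature extractor; the greedy threshold scheme, the cosine metric, and all function names are our own assumptions, since the patent does not fix a particular clustering algorithm:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def cluster_faces(embeddings, threshold=0.4):
    """Greedy clustering: assign each embedding to the first group whose
    running-mean centroid is within `threshold`, else start a new group.
    Returns a list of groups, each a list of indices into `embeddings`."""
    groups = []       # list of lists of indices
    centroids = []    # running mean embedding per group
    for i, emb in enumerate(embeddings):
        best, best_d = None, threshold
        for g, c in enumerate(centroids):
            d = cosine_distance(emb, c)
            if d < best_d:
                best, best_d = g, d
        if best is None:
            groups.append([i])
            centroids.append(list(emb))
        else:
            groups[best].append(i)
            n = len(groups[best])
            centroids[best] = [(c * (n - 1) + e) / n
                               for c, e in zip(centroids[best], emb)]
    return groups
```

In practice an off-the-shelf algorithm such as DBSCAN or hierarchical clustering over the same embeddings would serve the same role.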
According to an embodiment of the present disclosure, acquiring the plurality of face images may include the following operations: acquiring a video, which may be a video captured by a camera in a face recognition environment; and acquiring the face images in the video in units of frames. Because a person usually moves in a video, the captured frames often contain face images of that person taken from different angles. Specifically, acquiring the face images in units of frames may include: acquiring video frames either frame by frame or at an interval of a set number of frames, with the interval chosen based on experience, experimental results, and so on, so that the frames contain high-quality face images of the same person from different angles; and then extracting the face images from those frames.
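Frame-by-frame versus every-N-frames sampling reduces to a simple stride over the decoded frame sequence. The sketch below leaves out video decoding (in practice a reader such as OpenCV's VideoCapture would supply the frames); the function name and default step are illustrative only:

```python
def sample_frames(frames, step=5):
    """Keep every `step`-th frame (step=1 keeps all frames).
    `frames` is any iterable of decoded video frames."""
    if step < 1:
        raise ValueError("step must be >= 1")
    return [f for i, f in enumerate(frames) if i % step == 0]
```

A face detector would then be run on each sampled frame to crop out the face images.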
According to an embodiment of the present disclosure, the method may further include: after the face image groups are obtained, acquiring the group center of at least one face image group, and then using the group center as the identifier of the corresponding group. In this way, each face image group can be labeled with its group center, i.e. the label information of each group may be that group's center; for example, a face image group then includes a group center and the multiple different-angle face images corresponding to it.
According to an embodiment of the present disclosure, the group center of the at least one face image group may be obtained, for example, by the following operations. For each face image group of the at least one face image group: first compute the distance between each face image in the group and every other face image in the group, then take the face image whose sum of distances to the other face images is smallest as the group center.
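The group center defined above is the medoid of the group: the member minimizing the summed distance to all other members. A minimal sketch over embedding vectors, using Euclidean distance (the patent does not fix a particular distance metric, so that choice is ours):

```python
import math

def group_center(embeddings):
    """Return the index of the medoid: the embedding whose summed
    Euclidean distance to all other embeddings is smallest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    sums = [sum(dist(e, other) for other in embeddings) for e in embeddings]
    return min(range(len(embeddings)), key=sums.__getitem__)
```

Unlike a mean vector, the medoid is always an actual face image of the group, which is what makes it usable as the group's identifier.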
According to an embodiment of the present disclosure, the method may further include: after the group center is obtained, computing, for each face image in the group, its rotation angle relative to the group center, and then labeling the corresponding face image with the obtained angle. That is, the face image group in this embodiment may additionally carry rotation-angle label information, which can be used in scenarios that require such labels.
According to an embodiment of the present disclosure, a specific scheme for computing the rotation angle is also provided. Specifically, it may include the following operations for each face image: first acquire a specified number of key points of the face image and the same specified number of key points of the group center; then determine the positional relationship among the key points of the face image and the positional relationship among the key points of the group center; and finally determine the rotation angle of the face image relative to the group center from these two positional relationships.
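As one concrete, deliberately simplified instance of the key-point scheme above, the in-plane rotation of a face relative to the group center can be estimated from just two key points, the eye centers, by comparing the angle of the inter-eye line in each image. The two-point choice and the function names are our own assumptions; the patent allows any specified number of key points and positional relationships:

```python
import math

def roll_angle(left_eye, right_eye):
    """In-plane rotation (degrees) of the line joining two eye points."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotation_vs_center(face_eyes, center_eyes):
    """Rotation of a face image relative to the group center, taken as
    the difference between the two inter-eye angles."""
    return roll_angle(*face_eyes) - roll_angle(*center_eyes)
```

Estimating out-of-plane yaw or pitch would need more key points (e.g. nose tip and mouth corners) and a geometric model, which this sketch does not attempt.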
According to an embodiment of the present disclosure, the method may further include: for the face image group, sorting the face images in the group by their rotation angle and/or distance relative to the group center; and displaying the sorted face images.
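Sorting a group for display then reduces to a single keyed sort. Here each face record is assumed, purely for illustration, to be an (image_id, angle_deg, distance) tuple:

```python
def sort_group(faces):
    """Sort face records by absolute rotation angle, breaking ties by
    distance to the group center. Each record is (image_id, angle_deg,
    distance)."""
    return sorted(faces, key=lambda f: (abs(f[1]), f[2]))
```

Sorting this way puts the most center-like (near-frontal, nearby) images first, which makes erroneously clustered faces stand out at the end of the displayed list.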
According to an embodiment of the present disclosure, the method may further include: after the plurality of face image groups are obtained, training a face recognition model with them. Specifically, the face features of the different-angle face images of each group, together with the group each image belongs to, are input into the first face recognition model for training, yielding the model parameter values.
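The patent leaves the recognition model itself unspecified, so as a hedged stand-in the sketch below "trains" the simplest possible group classifier, a nearest-centroid model over the grouped face features; a real implementation would instead train a neural face recognition model on the same (feature, group label) pairs:

```python
import math

def train_nearest_centroid(features_by_group):
    """'Train' by storing the mean feature vector of each group.
    `features_by_group` maps group id -> list of feature vectors."""
    model = {}
    for gid, feats in features_by_group.items():
        n = len(feats)
        model[gid] = [sum(col) / n for col in zip(*feats)]
    return model

def predict(model, feature):
    """Classify a feature as the group with the nearest mean (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda gid: dist(model[gid], feature))
```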
Correspondingly, another aspect of the present disclosure provides a face image processing device, which may include the following modules: an acquisition module, configured to acquire a plurality of face images, the plurality including face images of at least one person taken from different angles;
a clustering module, configured to cluster the plurality of face images to obtain a plurality of face image groups; and
a correction module, configured to remove some face images from at least one face image group according to user input and/or preset rules, so that every face image in each of the at least one face image group corresponds to the same person.
According to an embodiment of the present disclosure, the acquisition module may include the following units:
a video acquisition unit, configured to acquire a video;
a video frame acquisition unit, configured to acquire video frames of the video either frame by frame or at an interval of a set number of frames; and
a face image acquisition unit, configured to acquire the face images of the video frames.
According to an embodiment of the present disclosure, the device may further include the following modules: a group center acquisition module connected to the clustering module, and an identification module. The group center acquisition module is configured to acquire the group center of at least one face image group; the identification module is configured to use the group center as the identifier of the corresponding face image group.
According to an embodiment of the present disclosure, the group center acquisition module may include the following units: a distance calculation unit and a center acquisition unit. The distance calculation unit is configured, for each face image group of the at least one face image group, to compute the distance between each face image in the group and every other face image in the group; the center acquisition unit is configured to take the face image whose sum of distances to the other face images in the group is smallest as the group center.
According to an embodiment of the present disclosure, the device may further include the following modules: a rotation angle calculation module connected to the group center acquisition module, and an angle labeling module. The rotation angle calculation module is configured, for a face image group, to compute the rotation angle of each face image in the group relative to the group center; the angle labeling module is configured to label the corresponding face images with the obtained rotation angles.
According to an embodiment of the present disclosure, the rotation angle calculation module may include the following units: a key point acquisition unit, a positional relationship acquisition unit, and a rotation angle calculation unit. The key point acquisition unit is configured, for each face image, to acquire a specified number of key points of the face image and the same specified number of key points of the group center; the positional relationship acquisition unit is configured to acquire the positional relationship among the key points of the face image and the positional relationship among the key points of the group center; the rotation angle calculation unit is configured to determine the rotation angle of the face image relative to the group center from these two positional relationships.
According to an embodiment of the present disclosure, the device may further include the following modules: a sorting module and a display module. The sorting module is configured, for a face image group, to sort the face images in the group by their rotation angle and/or distance relative to the group center; the display module is configured to display the sorted face images.
According to an embodiment of the present disclosure, the device may further include a face recognition model training module, configured to train a face recognition model with the plurality of face image groups after they are obtained.
Another aspect of the present disclosure provides a face image processing device, comprising:
one or more processors; and a storage device configured to store executable instructions which, when executed by the processors, implement any of the methods described above.
Another aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions which, when executed, implement any of the methods described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions which, when executed, implement the method described above.
According to the embodiments of the present disclosure, because each person has distinctive facial features, clustering gathers each person's different-angle face images into the same face image group, which in effect automatically labels the face images: for example, face image group 1 and the multiple different-angle face images corresponding to group 1. The face image groups obtained by clustering can then be used for face recognition model training: for example, the face features of one or more persons' multi-angle face images, together with the groups they belong to, can be input into a pre-built face recognition model for training. The resulting model can handle multi-angle face image recognition, which at least partially solves the problem that a model trained on existing standard face recognition data sets achieves very low accuracy when recognizing surveillance-camera face images taken at various angles, and thereby achieves the technical effect of improving multi-angle face recognition accuracy.
According to the embodiments of the present disclosure, because it is further checked whether the multi-angle face images assigned to each person are correct, and incorrect face images are removed, the correctness of the face images within each group is improved. This ensures, as far as possible, that the face images in a group correspond to the same person, avoids labeling errors in the training data, and thus achieves the technical effect of improving multi-angle face recognition accuracy.
According to the embodiments of the present disclosure, a specific method of acquiring the plurality of face images is further provided. This method can simulate the actual face recognition environment to collect video identical or similar to that environment, and thus obtain face images matching those of the actual environment. This increases the similarity between the face images in the labeled face image groups obtained by the present disclosure and the face images collected in the actual deployment environment, so that a face recognition model trained on these groups achieves higher recognition accuracy.
According to the embodiments of the present disclosure, because the group center of each face image group is also obtained after clustering, the group center can serve as the identifier of the corresponding group, which makes it convenient to check whether the clustered face image groups are correct.
According to the embodiments of the present disclosure, after the group center of a face image group is obtained, the rotation angle of each face image in the group relative to the group center is also computed, and the face images are labeled with the obtained angles. The resulting training set thus additionally carries the rotation-angle information of each face image. Therefore, in specific scenarios, for example a payment environment where the camera views the payer at a certain rotation angle, face images with that specific rotation angle can be selected from the training set to optimize the model, which helps improve recognition accuracy.
According to the embodiments of the present disclosure, several training methods for face recognition models are also provided. These methods can train a face recognition model simply and efficiently, and the trained model can recognize multi-angle face images with high accuracy.
Brief Description of the Drawings
The above and other objects, features and advantages of the present disclosure will become clearer from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
Fig. 1 schematically shows an exemplary system architecture to which face image processing can be applied according to an embodiment of the present disclosure;
Fig. 2 schematically shows face images taken from different angles according to an embodiment of the present disclosure;
Fig. 3A schematically shows a flowchart of a face image processing method according to an embodiment of the present disclosure;
Fig. 3B schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure;
Fig. 3C schematically shows a face image group according to an embodiment of the present disclosure;
Fig. 4A schematically shows a flowchart of a method of acquiring a plurality of face images according to an embodiment of the present disclosure;
Fig. 4B schematically shows a flowchart of a method of acquiring a plurality of face images according to another embodiment of the present disclosure;
Fig. 4C schematically shows a flowchart of a method of acquiring face images from video frames according to an embodiment of the present disclosure;
Fig. 5 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure;
Fig. 6 schematically shows a flowchart of a method of acquiring the group center of a face image group according to an embodiment of the present disclosure;
Fig. 7 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure;
Fig. 8 schematically shows a flowchart of a method of computing a rotation angle according to an embodiment of the present disclosure;
Fig. 9 schematically shows face images and key points in a video frame according to an embodiment of the present disclosure;
Fig. 10 schematically shows key points according to an embodiment of the present disclosure;
Fig. 11 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure;
Fig. 12 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure;
Fig. 13 schematically shows a block diagram of a face image processing device according to an embodiment of the present disclosure;
Fig. 14 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 15 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 16 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 17 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 18 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 19 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 20 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 21 schematically shows a block diagram of a face image processing device according to another embodiment of the present disclosure;
Fig. 22 schematically shows a block diagram of a computer system suitable for implementing the face image processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. In the following detailed description, numerous specific details are set forth for ease of explanation to provide a thorough understanding of the embodiments of the present disclosure. It will be evident, however, that one or more embodiments may be practiced without these specific details. In addition, descriptions of well-known structures and techniques are omitted below to avoid unnecessarily obscuring the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. The terms "comprise", "include", and the like used herein indicate the presence of the stated features, steps, operations and/or components, but do not preclude the presence or addition of one or more other features, steps, operations or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, rather than in an idealized or overly rigid manner.
Where an expression such as "at least one of A, B, and C" is used, it should generally be interpreted as those skilled in the art would commonly understand it (for example, "a system having at least one of A, B, and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C). Where an expression such as "at least one of A, B, or C" is used, it should likewise be interpreted as commonly understood (for example, "a system having at least one of A, B, or C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C). Those skilled in the art will also understand that virtually any disjunctive word and/or phrase presenting two or more alternative items, whether in the specification, the claims or the drawings, should be understood to contemplate the possibilities of including one of the items, either of the items, or both. For example, the phrase "A or B" should be understood to include the possibilities of "A", "B", or "A and B".
Embodiments of the present disclosure provide a method and a device for face image processing, which can automatically cluster multiple face images to obtain multiple face image groups corresponding to the multiple face images. Because each person has unique facial features, the facial features of face images of the same person captured at different angles remain strongly correlated. The multiple face images can therefore be clustered on this basis, so that multiple face images of the same person at different angles are gathered into the same face image group. In this way, the multiple face images can be labeled automatically, that is, each face image is assigned to a face image group, where the face images in each group correspond to the same person. The resulting face image groups can be used to train a face recognition model. Because each group contains face images of the same person at different angles, a face recognition model trained on these groups achieves higher recognition accuracy in multi-angle face recognition scenarios than existing models trained on standard face images.
FIG. 1 schematically shows an exemplary system architecture 100 to which the face image processing of an embodiment of the present disclosure can be applied. It should be noted that FIG. 1 is merely an example of a system architecture to which embodiments of the present disclosure can be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not mean that embodiments of the present disclosure cannot be used in other devices, systems, environments, or scenarios.
As shown in FIG. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables. The system architecture 100 can be applied to company face-based attendance systems, community management, building access control systems, and the like. Because it supports multi-angle face recognition, users do not have to face the camera directly, which is more convenient and improves the user experience. It can also be applied to offline store systems, such as member identification and face-swiping payment: for example, when a user enters a store, a surveillance camera captures the user's face to identify whether the user is a store member, or identifies a payer to confirm the payer's identity information, thereby enabling face-swiping payment.
Users may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as image capture applications, payment applications, shopping applications, web browser applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices with a display screen that support web browsing, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers. The terminal devices 101, 102, 103 may also include information capture devices, such as image capture devices and sound capture devices. Specifically, an image capture device may be an external camera, such as a surveillance camera, or a built-in camera, such as the camera of a laptop computer; a sound capture device may be a microphone or the like.
The server 105 may be a server providing various services, for example: clustering the face images captured by users with the terminal devices 101, 102, 103; splitting captured video into frames; recognizing and extracting face images from video frames; obtaining the group center of a face image group; computing the rotation angle of each face image in a face image group relative to the group center; building a face recognition model; training the face recognition model; and performing face recognition with the trained model. A back-end management server may analyze and process received data such as user requests, and feed the processing results (for example, web pages, information, or data obtained or generated according to the user requests) back to the terminal devices.
It should be noted that the face image processing method provided by embodiments of the present disclosure may generally be executed by the server 105. Accordingly, the clustering module of the face image processing device provided by embodiments of the present disclosure may generally be arranged in the server 105. The face image processing method provided by embodiments of the present disclosure may also be executed by a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the face image processing device provided by embodiments of the present disclosure may also be arranged in a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers, as required by the implementation.
FIG. 2 schematically shows face images at different angles according to an embodiment of the present disclosure. As can be seen from the figure, the five face images in FIG. 2 correspond to the same person, but the angle of the face differs in each image. A face recognition model trained with the prior art may judge these five face images to be five different people. This is because prior-art training data are generally standard face recognition data sets, such as those provided by ALFW, CASIA-WebFace, and MS-Celeb-1M, and the face images in standard face recognition data sets usually have one or more of the following characteristics: frontal face, no depression angle, uniform background, and clear or unoccluded facial features (for example, passport photos). Face recognition models trained on these standard data sets perform well when recognizing standard faces. However, the demand for face recognition is not limited to standard face images. For example, in offline unmanned retail and smart retail scenarios, face-swiping payment requires high-precision recognition and detection of customers' faces as its technical basis. In such scenarios, face information is generally collected by surveillance cameras installed in the store. Unlike standard face images captured in a standard collection environment, face images captured by surveillance cameras in such an open environment often fail to meet the collection requirements of standard face images: the captured face may be frontal, in profile, tilted, and so on, and the camera is often mounted on a wall, introducing a shooting depression angle. In summary, face recognition models trained on existing standard face recognition data sets cannot meet the face recognition requirements of scenarios such as offline unmanned retail and smart retail.
Embodiments of the present disclosure provide a face image processing method and device. Considering that each person has distinctive facial features, even when a face image deviates from the frontal view by some angle, the facial features of face images of the same person at different angles remain strongly correlated. Clustering can therefore gather each person's face images at different angles into the same face image group, which in effect automatically labels the multiple face images, for example, face image group 1 and the multiple face images at different angles corresponding to face image group 1. The face image groups obtained by clustering can then be used to train a face recognition model: for example, the facial features of the multi-angle face images of one or more persons, together with the face image groups to which they belong, can be input into a pre-built face recognition model for training. The trained model can handle multi-angle face image recognition, and can at least partially solve the problem that a model trained on existing standard face recognition data sets has very low recognition accuracy on face images captured by surveillance cameras, because such images contain faces at different angles. The technical effect of improving the recognition accuracy of multi-angle face images can thereby be achieved.
FIG. 3A schematically shows a flowchart of a face image processing method according to an embodiment of the present disclosure.
As shown in FIG. 3A, the method includes operation S301: acquiring multiple face images, where the multiple face images include face images of at least one person at different angles.
Specifically, the multiple face images may be multiple photos of at least one person taken at different angles by a photographing device, or face images at different angles extracted from the frames of a video of at least one person recorded at different angles by a video recording device. They may also be multiple pictures of at least one person crawled from the Internet: for example, first acquiring multiple face images and the corresponding identity information (such as a mobile phone number, name, or identification number) from the Internet, selecting a specified number of identities as candidates, then obtaining multiple pictures containing face images of at least one person at different angles through one or more operations such as comparing identity information, removing duplicate pictures, and selecting face images at different angles, and finally extracting the face images from the pictures.
Then, in operation S302, the multiple face images are clustered to obtain multiple face image groups.
Specifically, because each person has distinctive facial features, the facial features of the same person's face images at different angles are correlated. The clustering result therefore gathers each person's face images at different angles into the same face image group, which in effect automatically labels the multiple face images, for example, face image group 1 and the multiple face images at different angles corresponding to face image group 1, ..., face image group n and the multiple face images at different angles corresponding to face image group n, where n is a positive integer ≥ 1. The clustering method may be the same as in the prior art: for example, first extract facial features from each face image, and then cluster the facial features to obtain multiple face image groups, where the face images in each group correspond to the same person.
It should be noted that FIG. 3B schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure. In this embodiment, the face image processing method may further include the following operation:
In operation S303, some face images are removed from at least one face image group according to user input and/or preset rules, so that the face images in each of the at least one face image group correspond to the same person.
Because the clustering result may contain errors, for example, face images of different people gathered into the same face image group, the face images in each group can be checked and corrected manually and/or according to certain rules to improve accuracy, ensuring that all face images in each group correspond to the same person. FIG. 3C schematically shows a face image group according to an embodiment of the present disclosure. Specifically, the face image groups may be displayed by the terminal devices 101, 102, 103 described above, arranged in multiple rows or columns in a candidate area. The identifier of a face image group may be the group name, for example, face image group 1, ..., face image group n, or a particular face image in the group may serve as the group identifier, for example, the face image closest to the frontal view, or the face image with the smallest sum of distances to the other face images in the group. The displayed content may further include a labeling result area, which is used to display one or more specific face image groups, and may also be used to correct and review face images that were labeled incorrectly by the automatic process, thereby further improving the correctness of the face image groups.
The face image processing method provided by embodiments of the present disclosure can use the face image groups obtained by clustering to train a face recognition model. For example, the facial features of the multi-angle face images of one or more persons, together with the face image groups to which they belong, can be input into a pre-built face recognition model for training. The trained model can handle multi-angle face image recognition, and can at least partially solve the problem that a model trained on existing standard face recognition data sets has very low recognition accuracy on face images captured by surveillance cameras, because such images contain faces at different angles. The technical effect of improving the recognition accuracy of multi-angle face images can thereby be achieved.
The method shown in FIG. 3A is further described below with reference to FIG. 4A to FIG. 4C in conjunction with specific embodiments. FIG. 4A schematically shows a flowchart of a method for acquiring multiple face images according to an embodiment of the present disclosure.
In this embodiment, acquiring multiple face images may include the following operations:
In operation S401, a video is acquired. A camera may rotate around a person to obtain a video containing multiple face images of that person at different angles; alternatively, the camera may be fixed while the person rotates through a certain angle in front of it, so that the camera captures a video containing multiple face images of the person at different angles. In addition, to make the face images in the resulting face image groups more similar to those acquired in the actual usage environment, when the layout of the specific usage environment is known, the same camera arrangement as the actual environment, or the cameras of the actual environment themselves, can be used to record multiple videos containing at least one person at different angles under a similar flow of people, such as the flow of people in face-swiping payment. The face images in the groups obtained in this way better match those acquired in the actual application scenario, and a face recognition model trained on them achieves higher recognition accuracy.
In operation S402, face images in the video are acquired in units of frames.
In this embodiment, a face image can be extracted from each of at least two frames. Because different moments during video capture correspond to faces at different angles, the angles of the face images in different frames can differ, and face images at different angles can be extracted from different frames. In addition, FIG. 4B schematically shows a flowchart of a method for acquiring multiple face images according to another embodiment of the present disclosure.
In operation S401, a video is acquired, as in the previous embodiment; the details are not repeated here.
In operation S403, video frames of the video are acquired frame by frame or at intervals of a set number of frames. In this embodiment, frames can be extracted one by one or at intervals, obtaining video frames frame1, frame2, ..., frameN, where N is a positive integer ≥ 2. In the flow of people in face-swiping payment, the payer appears in the video only briefly; to acquire as many face images of the payer at different angles as possible, the video frames can be acquired frame by frame.
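As an illustration only (the disclosure does not prescribe an implementation), the frame-sampling rule of operation S403 can be sketched as follows; the function name and parameters are assumptions introduced for this sketch:

```python
def sample_frame_indices(total_frames, step=1):
    """Return the indices of the frames to extract.

    step=1 extracts every frame (as suggested above for the
    face-swiping payment scenario); step=k+1 skips k frames
    between consecutive samples.
    """
    if total_frames < 2 or step < 1:
        raise ValueError("need at least 2 frames and step >= 1")
    return list(range(0, total_frames, step))

# Every frame of a 6-frame clip, and every 3rd frame of a 10-frame clip.
print(sample_frame_indices(6))      # [0, 1, 2, 3, 4, 5]
print(sample_frame_indices(10, 3))  # [0, 3, 6, 9]
```

In practice the selected indices would be used to read frames from the video stream, for example with a video I/O library.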
In operation S404, face images are acquired from the video frames. In this embodiment, an existing method for extracting a face image from a picture may be used.
FIG. 4C schematically shows a flowchart of a method for acquiring a face image from a video frame according to an embodiment of the present disclosure. In this embodiment, the following operations may be included:
In operation S4041, a face detection frame is set. Specifically, the size of the face detection frame can be determined according to the distance between the camera and the face, or according to other rules, so that the window of the face detection frame is a face window if and only if it exactly frames a face, that is, the size of the window matches the size of the face and the window essentially fits the outer contour of the face. In addition, when multiple faces appear in the video and each person is at a different distance from the camera, the face images in a frame will differ in size, and a face detection frame of a single size cannot detect them all. In this case, face detection frames of different sizes can be used, or a face detection frame of a single size can be used while the frame image is scaled, so as to detect each face image in the frame.
In operation S4042, facial features are extracted from the part of the video frame covered by the face detection frame. In this embodiment, facial feature extraction is performed. A facial feature here corresponds to the facial characteristics described above and is their vector representation as computed from the face image, that is, a feature vector such as X = (x1, x2, ..., xd), where d is a positive integer ≥ 1. Each dimension is a numerical value computed from the input (an image region), for example by summation, subtraction, or comparison. The feature extraction process transforms the original input data (a matrix of the color values of the image region) into the corresponding feature vector.
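A minimal sketch of one such sum-and-subtract feature (a hypothetical two-rectangle difference over a grayscale region; the layout and names are illustrative assumptions, not the feature set used by the disclosure):

```python
def region_sum(img, top, left, bottom, right):
    """Sum of pixel values in rows [top, bottom) and columns [left, right)."""
    return sum(sum(row[left:right]) for row in img[top:bottom])

def two_rect_feature(img):
    """Difference between the sums of the left and right halves of a region.

    This mimics the kind of computation described above: the region's
    color values (a matrix) are reduced to one number, which becomes
    one dimension of the feature vector X.
    """
    h, w = len(img), len(img[0])
    left = region_sum(img, 0, 0, h, w // 2)
    right = region_sum(img, 0, w // 2, h, w)
    return left - right

# A 2x4 "image region": bright on the left, dark on the right.
region = [[9, 9, 1, 1],
          [9, 9, 1, 1]]
print(two_rect_feature(region))  # 32
```

Several such values, computed over different sub-regions, would be concatenated to form the feature vector X.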
In operation S4043, the facial features are input into a pre-trained face model to obtain a recognition result. The face model may be a classifier, a neural network, a random forest model, and so on; a classifier is taken as an example here. The classifier takes a feature vector as input and, through mathematical computation, outputs a category. Each category corresponds to a numerical code, called the label of that category; for example, the face-window category is coded as 1 and the non-face-window category as -1. The classifier is thus a function that transforms a feature vector into a category label, as shown in formula (1):
The parameter t can be obtained by model training: for example, correct face images are labeled 1 and incorrect face images are labeled -1, and the labeled face images are input into the model until the agreement between the model output and the labels reaches a set threshold.
In operation S4044, a face image is determined according to the recognition result. Specifically, the image inside a face detection frame for which the model output indicates a face image is taken as a face image.
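The body of formula (1) is not reproduced in this text. As a hedged sketch of one common form it could take, a score is thresholded by the trained parameter t; the scoring value and names below are assumptions about the omitted formula, not its confirmed content:

```python
def classify(score, t):
    """Map a window's score to a label: 1 for face window, -1 otherwise.

    `score` stands for the value the model computes from the feature
    vector X; `t` is the threshold learned during training, as described
    for formula (1). Both are assumptions about the omitted formula.
    """
    return 1 if score >= t else -1

print(classify(0.8, 0.5))  # 1  (face window)
print(classify(0.2, 0.5))  # -1 (non-face window)
```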
In a specific embodiment, clustering the multiple face images includes: clustering the facial features of the face images based on K-means or hierarchical clustering, obtaining at least one face image group and the face images corresponding to each group. In this way, the multiple face images can be clustered into at least one face image group.
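As an illustrative sketch only (real facial features are high-dimensional, and the disclosure does not fix an implementation), the K-means variant mentioned above can be outlined as follows; the 2-D "features" and the explicitly given initial centers are assumptions made so the run is deterministic:

```python
def kmeans(points, centers, iters=10):
    """Minimal K-means: assign each point to its nearest center, then
    move each center to the mean of its assigned points. `points` and
    `centers` are lists of equal-length tuples."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else c
            for g, c in zip(groups, centers)
        ]
    return centers, groups

# Two well-separated "feature" clusters, standing in for two people.
pts = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
centers, groups = kmeans(pts, centers=[(0.0, 0.0), (5.0, 5.0)])
print(groups[0])  # [(0.0, 0.1), (0.1, 0.0)]
print(groups[1])  # [(5.0, 5.1), (5.1, 5.0)]
```

Each resulting group plays the role of one face image group: the features (and hence the face images) assigned to it are taken to correspond to the same person.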
FIG. 5 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure. In this embodiment, after the multiple face image groups are obtained, the method may further include the following operations:
In operation S501, the group center of at least one face image group is obtained. Specifically, the group center may be the face image with the smallest sum of distances to the other face images in the group, where the distance may be, for example, the cosine distance between the facial features (feature vectors) of two face images. The larger the distance, the greater the difference between the two face images; the smaller the distance, the more similar they are.
In operation S502, the group center is used as the identifier of the corresponding face image group.
Because the group center is the face image with the smallest sum of distances to the other face images in the group, it represents the other face images in the group well; using it as the identifier of the face image group is therefore more representative.
FIG. 6 schematically shows a flowchart of a method for obtaining the group center of a face image group according to an embodiment of the present disclosure. In this embodiment, for each of the at least one face image group, the following operations may be included:
In operation S601, the distance between each face image in the face image group and the other face images is calculated. Specifically, suppose a face image group contains m face images, corresponding to m facial features X1, X2, ..., Xm, where X1 is the facial feature of the first face image, an n-dimensional feature vector X1 = (a1, a2, ..., an), and X2 is the facial feature of the second face image, X2 = (b1, b2, ..., bn). The distance between the first face image (face 1) and the second face image (face 2) is then given by formula (2):

L(face 1, face 2) = 1 - (Σ ai·bi) / (sqrt(Σ ai²) · sqrt(Σ bi²))  (2)
where i is a positive integer ≥ 1 indexing the dimensions of the feature vectors.
In operation S602, the face image with the smallest sum of distances to the other face images in the group is taken as the group center.
First, the average distance from each face image in the group to the other face images in the group is calculated. Taking a face image group containing only three face images as an example, the average distances can be obtained by formula (3):

L_face 1 = (L(face 1, face 2) + L(face 1, face 3)) / 2
L_face 2 = (L(face 2, face 1) + L(face 2, face 3)) / 2  (3)
L_face 3 = (L(face 3, face 1) + L(face 3, face 2)) / 2
where L_face 1 is the average distance from face image 1 to the other face images in its group, L_face 2 is the average distance from face image 2 to the other face images in its group, and L_face 3 is the average distance from face image 3 to the other face images in its group.
Accordingly, the center face can be determined by formula (4):
center face = min(L_face 1, L_face 2, L_face 3)  (4)
The group center is obtained through the above operations.
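Operations S601 and S602 can be sketched as follows, assuming the cosine distance suggested in operation S501 (the feature vectors below are illustrative; summed and averaged distances rank the faces identically):

```python
import math

def cosine_distance(x, y):
    """1 minus the cosine similarity of two feature vectors, as in formula (2)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

def group_center(features):
    """Index of the face whose summed distance to the others is smallest
    (formulas (3) and (4))."""
    sums = [
        sum(cosine_distance(f, g) for j, g in enumerate(features) if j != i)
        for i, f in enumerate(features)
    ]
    return sums.index(min(sums))

# Three illustrative feature vectors; the first two point in nearly
# the same direction, so one of them should be selected as the center.
feats = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
print(group_center(feats))  # 1 (the vector (0.9, 0.1))
```

The second vector wins because it is close to the first and slightly less far from the third, so its summed distance is smallest.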
FIG. 7 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure. In this embodiment, after the group center is obtained, the method may further include the following operations:
In operation S701, for each face image group, the rotation angle of each face image in the group relative to the group center is calculated. Specifically, the rotation angle of a face image relative to the group center can be determined from the positional relationship between key points in the face image and the positional relationship between the corresponding key points of the group center; for example, the rotation angle of the face image relative to the group center in the direction of the line of the eyes can be determined from the distance between the eyes in the group center and the distance between the eyes in the face image. Of course, the rotation angles of the face image relative to the group center in both the horizontal and vertical directions can be determined from the positional relationships among more key points.
在操作S702中,利用得到的旋转角度对相应人脸图像进行角度标注。具体地,通过上一操作可以得到人脸图像组中各人脸图像相对于组中心的旋转角度,进而可以为人脸图像中每个人脸图像标注旋转角度信息。该旋转角度可以在特定的应用场景作为筛选训练数据的依据等。例如,在刷脸支付的场景中,由于摄像头通常采用监控摄像头,而监控摄像头通常与支付者的正面人脸呈一定角度,且角度基本处于一个大致范围内,如果采用具有各种不同角度的人脸图像的人脸图像组作为人脸识别模型的训练数据集,则该人脸识别模型可识别的人脸图像的角度范围较大,但是,相应的,由于没有针对特定角度的人脸图像进行优化,无法进一步提升该特定角度的人脸识别的精确度。而本实施例中由于在人脸图像组中还标注了人脸图像的旋转角度信息,使得可以从多个人脸图像组中挑选出具有特定旋转角度的人脸图像作为人脸识别模型的训练数据集,这样可以有助于提升人脸识别模型在特定旋转角度上的识别精确度,或者根据特定的应用场景训练用在特定旋转角度的人脸识别模型。In operation S702, use the obtained rotation angle to perform angle labeling on the corresponding face image. Specifically, through the above operation, the rotation angle of each face image in the face image group relative to the group center can be obtained, and then the rotation angle information can be marked for each face image in the face image. The rotation angle can be used as a basis for screening training data in specific application scenarios. For example, in the face-swiping payment scenario, since the camera usually uses a surveillance camera, and the surveillance camera is usually at a certain angle to the payer's frontal face, and the angle is basically within a general range, if people with various angles are used If the face image group of the face image is used as the training data set of the face recognition model, the angle range of the face images that can be recognized by the face recognition model is relatively large. Optimization, the accuracy of face recognition at this specific angle cannot be further improved. In this embodiment, since the rotation angle information of the face image is also marked in the face image group, the face image with a specific rotation angle can be selected from a plurality of face image groups as the training data of the face recognition model This can help improve the recognition accuracy of the face recognition model at a specific rotation angle, or train the face recognition model for a specific rotation angle according to a specific application scenario.
如图8所示,示意性示出了根据本公开实施例的用于计算旋转角度的方法的流程图。在本实施例中,对于一个每个人脸图像,可以包括如下操作:As shown in FIG. 8 , it schematically shows a flowchart of a method for calculating a rotation angle according to an embodiment of the present disclosure. In this embodiment, for each face image, the following operations may be included:
In operation S801, a specified number of key points of the face image are obtained, and a specified number of key points of the group center are obtained. The key points include, but are not limited to, one or more of: eye center points, mouth corner points, the nose tip, cheekbone points, or brow points. FIG. 9 schematically shows face images and key points in a video frame according to an embodiment of the present disclosure, in which the boxed image is a face image recognized from the video frame. The rectangular region is represented as [x, y, w, h], where (x, y) is the position of the box relative to the top-left corner of the picture and (w, h) are the width and height of the box. The bright dots in the face image are the key points, which can be denoted kp_1, kp_2, ..., kp_N, where N is the number of key points. Each kp is a two-dimensional datum giving a position coordinate relative to the top-left corner of the picture, and N key points are extracted for each face.
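The [x, y, w, h] box and kp_1 ... kp_N representation just described can be captured in a small data structure. The following is a hypothetical sketch with N = 5 (matching the five landmarks of FIG. 10); the class and field names are illustrative, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FaceDetection:
    # Detection box [x, y, w, h]: (x, y) is the top-left corner of the
    # box relative to the top-left corner of the picture; (w, h) are the
    # box width and height.
    box: Tuple[float, float, float, float]
    # N keypoints kp_1 ... kp_N, each a 2-D coordinate relative to the
    # top-left corner of the picture.
    keypoints: List[Tuple[float, float]]

# Example with N = 5 keypoints: two eye centers, nose tip, two mouth corners.
face = FaceDetection(box=(120, 80, 64, 64),
                     keypoints=[(140, 100), (170, 100), (155, 118),
                                (142, 130), (168, 130)])
```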
The key point detection technique may be SDM-based and/or MTCNN-based face key point detection. That is, face detection may be performed first and key point detection then run separately on each detected face box (e.g., with SDM), or face detection and key point recognition may be performed jointly (e.g., with MTCNN).

For SDM, the operations may include first building a model whose input is a face image and whose output is the key points in the face image. The training process may include feeding face images with manual annotation information into the model and adjusting the model parameters so that the feature points output by the model approach the manually annotated key points of the face, where the manual annotation information consists of the manually labeled key points.

For the multi-task convolutional neural network (MTCNN) technique, three neural networks can make up the MTCNN: the Proposal Network (P-Net), the Refine Network (R-Net), and the Output Network (O-Net). P-Net mainly produces candidate windows for face images and the regression vectors of their bounding boxes; it uses these bounding boxes for regression to calibrate the candidate windows, and then merges highly overlapping candidate windows through non-maximum suppression (NMS). R-Net again removes false-positive regions via bounding-box regression and NMS, but its structure differs from P-Net's by an additional fully connected layer, so it suppresses false-positive regions more effectively. O-Net has one more convolutional layer than R-Net, so its results are more refined. O-Net plays the same role as R-Net, but it applies more supervision to the face image and additionally outputs five landmarks, i.e., key points. FIG. 10 schematically shows key points according to an embodiment of the present disclosure, where each face image has five key points; of course, more or fewer key points can be configured according to actual needs.
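The NMS step used to merge P-Net's highly overlapping candidate windows can be sketched as follows. This is a generic greedy NMS, not MTCNN's exact implementation; the (x, y, w, h) box format and the 0.5 threshold are illustrative assumptions.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x, y, w, h).
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop candidates that
    overlap it by more than `threshold`, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return keep
```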
In operation S802, the positional relationship between the key points of the face image is obtained, and the positional relationship between the key points of the group center is obtained. The positional relationship includes, but is not limited to, any one or more of: the coordinates of key points relative to the top-left corner of the face image, the relative coordinates between key points of the same type (e.g., between the two eye centers), and the distances between key points.

In operation S803, the rotation angle of the face image relative to the group center is determined according to the positional relationship between the key points of the face image and the positional relationship between the key points of the group center. Specifically, the horizontal and/or vertical rotation angle of the face image relative to the group center is obtained by calculation.
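The disclosure leaves the exact angle formula open. One simple way to realize the eye-direction angle of operation S803, assuming the group center is roughly frontal and that the projected inter-eye distance shrinks approximately as the cosine of the head's turn, is the following sketch; the arccos model, the keypoint ordering, and the function names are assumptions for illustration, not the disclosure's prescribed method.

```python
import math

def eye_distance(keypoints):
    # keypoints[0] and keypoints[1] are assumed to be the left and right
    # eye centers, each an (x, y) coordinate.
    (lx, ly), (rx, ry) = keypoints[0], keypoints[1]
    return math.hypot(rx - lx, ry - ly)

def yaw_relative_to_center(face_kps, center_kps):
    """Approximate rotation in the eye direction: as the head turns, the
    projected inter-eye distance shrinks by roughly cos(angle), so
    angle ~ arccos(d_face / d_center). Returns degrees."""
    ratio = eye_distance(face_kps) / eye_distance(center_kps)
    ratio = max(-1.0, min(1.0, ratio))  # guard against scale noise
    return math.degrees(math.acos(ratio))
```

A vertical angle could be estimated the same way from, e.g., the eye-to-mouth distances, per the remark that more key points allow both horizontal and vertical angles.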
FIG. 11 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure. In this embodiment, the method may further include the following operations:

In operation S111, for a face image group, the face images in the group are sorted according to their rotation angle and/or distance relative to the group center. The rotation angle is the rotation angle of each face image in the group relative to the group center, and the distance may be a cosine distance, i.e., a measure of the similarity between each face image and the group center; specifically, it may be the cosine distance between the face features of the face image and the face features of the group center.

In operation S112, the sorted face images are displayed. In this embodiment, the face images in each face image group may need to be checked manually to ensure that the face images in each group correspond to the same person. When a group contains many face images, or some face images have a large rotation angle relative to the group center, it is difficult to verify by directly comparing each face image with the group center by hand. Sorting the face images in the group by similarity and/or rotation angle reveals a gradual progression; when a direct comparison between the group center and a face image cannot easily determine whether the clustering is correct, the displayed progression can assist that judgment. This makes manual verification more convenient and less prone to misjudgments or omissions.
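The ordering of operation S111 can be sketched as a simple composite sort key: images closest to the group center come first, with rotation angle breaking ties, so the display drifts gradually away from the center. The dict fields are hypothetical names for the quantities described above.

```python
def sort_group(faces):
    """Sort a face image group for manual verification.

    `faces` is a list of dicts with:
      'cos_dist' - cosine distance between the face's features and the
                   group center's features (smaller = more similar);
      'angle'    - rotation angle relative to the group center, degrees.
    """
    return sorted(faces, key=lambda f: (f["cos_dist"], abs(f["angle"])))
```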
FIG. 12 schematically shows a flowchart of a face image processing method according to another embodiment of the present disclosure. In this embodiment, the method may further include the following operations:

In operation S121, a face recognition model is trained using multiple face image groups. The face recognition model may be a newly constructed model, or it may be a model already trained on an existing standard training data set that is then retrained with the multiple face image groups; that is, the trained model is fine-tuned with the face image groups. This can greatly improve the face recognition accuracy of prior-art face recognition models in multi-angle face recognition scenarios.

In a specific embodiment, a first face recognition model is built in advance. After the multiple face image groups are obtained, the face features of the face images at different angles in each face image group, together with the face image group each image belongs to, are input into the first face recognition model for training, yielding the model parameter values. The trained first face recognition model can then perform face recognition on multi-angle face images.

In another specific embodiment, for a second face recognition model trained in advance on standard face data, after the multiple face image groups are obtained, the face features of the face images at different angles in each face image group, together with the face image group each image belongs to, are input into the second face recognition model for training, yielding adjusted model parameter values. The trained second face recognition model can perform face recognition on multi-angle face images with high recognition accuracy.

In yet another specific embodiment, a third face recognition model is built in advance. After the multiple face image groups, and the rotation angle of each face image relative to the group center of its face image group, are obtained, the face rotation angle of the application scenario is first acquired; then the model parameter values of the third face recognition model are obtained using the face images with angle annotation information, the face image group each image belongs to, and the face rotation angle. It should be noted that this can be implemented in either of the following two ways, separately or in combination. In the first, face images at the specific angle are first filtered out of the face image groups, and the angle-annotated face images and the face image groups they belong to are then input into the third face recognition model for training to obtain the model parameters. In the second, the angle-annotated face images, the face image groups they belong to, and the face rotation angle are input directly into the third face recognition model for training to obtain the model parameters.

In other specific embodiments, for a fourth face recognition model trained in advance on standard face data, after the multiple face image groups, and the rotation angle of each face image relative to the group center of its face image group, are obtained, the face rotation angle of the application scenario is first acquired; then the adjusted model parameter values of the fourth face recognition model are obtained using the face images with angle annotation information, the face image group each image belongs to, and the face rotation angle.
Correspondingly, the present disclosure further provides a face image processing apparatus 1300 corresponding to the above face image processing method. FIG. 13 schematically shows a block diagram of a face image processing apparatus according to an embodiment of the present disclosure.

As shown in FIG. 13, the face image processing apparatus includes an acquisition module 131 and a clustering module 132.

The acquisition module 131 is configured to acquire multiple face images, the multiple face images including face images of at least one person at different angles.

The clustering module 132 is configured to cluster the multiple face images to obtain multiple face image groups.
In addition, FIG. 14 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include:

a correction module 141, configured to remove some of the face images in at least one face image group according to user input and/or preset rules, so that the face images in each of the at least one face image group correspond to the same person.
The acquisition module 131 includes:

a video acquisition unit, configured to acquire a video;

a video frame acquisition unit, configured to acquire video frames of the video frame by frame or at intervals of a set number of frames; and

a face image acquisition unit, configured to acquire face images from the video frames.
In a specific embodiment, the face image acquisition unit includes:

a setting subunit, configured to set a face detection frame;

a feature extraction subunit, configured to extract the face features of the portion of the video frame covered by the face detection frame;

a model training subunit, configured to input the face features into a pre-trained face model to obtain a recognition result; and

an image recognition unit, configured to determine the face image according to the recognition result.
The clustering module 132 is specifically configured to cluster the face features of the face images based on K-means or hierarchical clustering, obtaining at least one face image group and the face images corresponding to each face image group.
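As an illustration of the K-means alternative named here, the clustering step can be sketched in a minimal pure-Python form. In practice the embodiment would operate on high-dimensional face feature vectors; the two-dimensional vectors, the `kmeans` name, and the squared-Euclidean assignment are illustrative assumptions.

```python
import random

def kmeans(features, k, iters=20, seed=0):
    """Minimal K-means over face feature vectors: returns one label per
    feature, assigning it to one of k face image groups."""
    rng = random.Random(seed)
    centroids = [list(f) for f in rng.sample(features, k)]
    labels = [0] * len(features)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, f in enumerate(features):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])))
        # Update step: recompute each centroid as the mean of its members.
        for c in range(k):
            members = [features[i] for i in range(len(features)) if labels[i] == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels
```

Each resulting label corresponds to one face image group; the group center described earlier can then be chosen within each group.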
In another embodiment, FIG. 15 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include the following modules:

a group center acquisition module 151 connected to the clustering module 132, configured to acquire the group center of each of at least one face image group; and

an identification module 152, configured to use the group center as the identifier of the corresponding face image group.

The group center acquisition module 151 includes:

a distance calculation unit, configured to calculate, for each face image group in the at least one face image group, the distance between each face image in the group and the other face images; and

a center acquisition unit, configured to take the face image whose sum of distances to the other face images in the group is smallest as the group center.
In yet another embodiment, FIG. 16 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include the following modules:

a rotation angle calculation module 161 connected to the group center acquisition module 151, configured to calculate, for each face image group, the rotation angle of each face image in the group relative to the group center; and

an angle annotation module 162, configured to annotate the corresponding face images with the obtained rotation angles.

The rotation angle calculation module 161 includes:

a key point acquisition unit, configured to obtain, for each face image, a specified number of key points of the face image and a specified number of key points of the group center;

a positional relationship acquisition unit, configured to obtain the positional relationship between the key points of the face image and the positional relationship between the key points of the group center; and

a rotation angle calculation unit, configured to determine the rotation angle of the face image relative to the group center according to the positional relationship between the key points of the face image and the positional relationship between the key points of the group center.

Specifically, the key point acquisition unit is configured to obtain the key points through SDM-based and/or MTCNN-based face key point detection.
In other embodiments, FIG. 17 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include the following modules:

a sorting module 171, configured to sort the face images in a face image group according to the rotation angle and/or distance between the face images and the group center; and

a display module 172, configured to display the sorted face images.

In yet another embodiment, the apparatus may further include a face recognition model training module, configured to train a face recognition model using the multiple face image groups after the multiple face image groups are obtained.
In a specific embodiment, FIG. 18 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include the following modules:

a first face recognition model construction module 181, configured to build the first face recognition model in advance; and

a first face recognition model training module 182, configured to, after the multiple face image groups are obtained, input the face features of the face images at different angles in each face image group, together with the face image group each image belongs to, into the first face recognition model to obtain the model parameter values.

In yet another specific embodiment, FIG. 19 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include the following modules:

a second face recognition model construction module 191, configured to build the second face recognition model in advance;

a second face recognition model training module 192, configured to train the pre-built second face recognition model with standard face data to obtain the second face recognition model; and

a second face recognition model adjustment module 193, configured to, after the multiple face image groups are obtained, input the face features of the face images at different angles in each face image group, together with the face image group each image belongs to, into the second face recognition model to obtain adjusted model parameter values.
In yet another specific embodiment, FIG. 20 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include the following modules:

a third face recognition model construction module 201, configured to build the third face recognition model in advance;

a rotation angle acquisition module 202, configured to acquire the face rotation angle of the application scenario; and

a third face recognition model parameter acquisition module 203, configured to, after the multiple face image groups and the rotation angle of each face image relative to the group center of its face image group are obtained, obtain the model parameter values of the third face recognition model using the face images with angle annotation information, the face image group each image belongs to, and the face rotation angle.

In yet another specific embodiment, FIG. 21 schematically shows a block diagram of a face image processing apparatus according to another embodiment of the present disclosure. The apparatus may further include the following modules:

a fourth face recognition model construction module 211, configured to build the fourth face recognition model in advance;

a fourth face recognition model training module 212, configured to train the pre-built fourth face recognition model with standard face data to obtain the fourth face recognition model;

a rotation angle acquisition module 213, configured to acquire the face rotation angle of the application scenario; and

a fourth face recognition model parameter acquisition module 214, configured to, after the multiple face image groups and the rotation angle of each face image relative to the group center of its face image group are obtained, obtain the adjusted model parameter values of the fourth face recognition model using the face images with angle annotation information, the face image group each image belongs to, and the face rotation angle.
Since the face image processing apparatus corresponds to the face image processing method, the apparatus is not described in further detail here; for specifics, refer to the relevant parts of the description of the method.
Any number of the modules, submodules, units, and subunits according to embodiments of the present disclosure, or at least part of the functions of any number of them, may be implemented in one module. Any one or more of the modules, submodules, units, and subunits according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, submodules, units, and subunits according to embodiments of the present disclosure may be implemented at least partially as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system on package, or an application-specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of the three implementation manners of software, hardware, and firmware, or in an appropriate combination of any of them. Alternatively, one or more of the modules, submodules, units, and subunits according to embodiments of the present disclosure may be implemented at least partially as a computer program module which, when run, can perform the corresponding functions.
For example, any number of the acquisition module 131, the clustering module 132, the correction module 141, the group center acquisition module 151, the identification module 152, the sorting module 171, and the angle annotation module 162 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functions of one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the acquisition module 131, the clustering module 132, the correction module 141, the group center acquisition module 151, the identification module 152, the sorting module 171, and the angle annotation module 162 may be implemented at least partially as a hardware circuit, such as an FPGA, a PLA, a system on chip, a system on substrate, a system on package, or an ASIC, or by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of the three implementation manners of software, hardware, and firmware, or in an appropriate combination of any of them. Alternatively, at least one of the acquisition module 131, the clustering module 132, the correction module 141, the group center acquisition module 151, the identification module 152, the sorting module 171, and the angle annotation module 162 may be implemented at least partially as a computer program module which, when run, can perform the corresponding functions.
FIG. 22 schematically shows a block diagram of a computer system suitable for implementing the methods described above, according to an embodiment of the present disclosure. The computer system shown in FIG. 22 is only an example and should not impose any limitation on the functions or scope of use of embodiments of the present disclosure.

As shown in FIG. 22, a computer system 2200 according to an embodiment of the present disclosure includes a processor 2201, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 2202 or a program loaded from a storage section 2208 into a random access memory (RAM) 2203. The processor 2201 may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or related chipsets, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), and so on. The processor 2201 may also include on-board memory for caching. The processor 2201 may include a single processing unit, or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
The RAM 2203 stores various programs and data required for the operation of the system 2200, for example executable instructions. The processor 2201, the ROM 2202, and the RAM 2203 are connected to one another via a bus 2204. The processor 2201 performs the various operations of the method flows according to embodiments of the present disclosure by executing programs in the ROM 2202 and/or the RAM 2203. Note that the programs may also be stored in one or more memories other than the ROM 2202 and the RAM 2203. The processor 2201 may also perform the various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories, for example causing the one or more processors to: acquire multiple face images, the multiple face images including face images of at least one person at different angles; cluster the multiple face images to obtain multiple face image groups; and further remove some of the face images in at least one face image group according to user input and/or preset rules, so that the face images in each of the at least one face image group correspond to the same person.
According to an embodiment of the present disclosure, the system 2200 may further include an input/output (I/O) interface 2205, which is also connected to the bus 2204. The system 2200 may further include one or more of the following components connected to the I/O interface 2205: an input section 2206 including a keyboard, a mouse, and the like; an output section 2207 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 2208 including a hard disk and the like; and a communication section 2209 including a network interface card such as a LAN card or a modem. The communication section 2209 performs communication processing via a network such as the Internet. A drive 2210 is also connected to the I/O interface 2205 as needed. A removable medium 2211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 2210 as needed, so that a computer program read from it can be installed into the storage section 2208 as needed.
According to the embodiments of the present disclosure, the method flows according to the embodiments of the present disclosure may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 2209, and/or installed from the removable medium 2211. When the computer program is executed by the processor 2201, the above-described functions defined in the system of the embodiments of the present disclosure are performed. According to the embodiments of the present disclosure, the systems, devices, apparatuses, modules, units, and the like described above may be implemented by computer program modules.
The present disclosure also provides a computer-readable medium. The computer-readable medium may be included in the device/apparatus/system described in the above embodiments, or may exist separately without being assembled into that device/apparatus/system. The computer-readable medium carries one or more programs which, when executed, implement the various methods according to the embodiments of the present disclosure.
According to the embodiments of the present disclosure, the computer-readable medium may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code carried on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wireline, optical cable, or radio frequency signals, or any suitable combination of the above.
For example, according to an embodiment of the present disclosure, the computer-readable medium may include the ROM 2202 and/or the RAM 2203 described above, and/or one or more memories other than the ROM 2202 and the RAM 2203.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions. Those skilled in the art will understand that the features described in the various embodiments and/or claims of the present disclosure can be combined in various ways, even if such combinations are not explicitly recorded in the present disclosure. In particular, without departing from the spirit and teachings of the present disclosure, the features described in the various embodiments and/or claims of the present disclosure can be combined in various ways. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments have been described separately above, this does not mean that the measures in the various embodiments cannot be advantageously used in combination. The scope of the present disclosure is defined by the appended claims and their equivalents. Those skilled in the art can make various substitutions and modifications without departing from the scope of the present disclosure, and all such substitutions and modifications fall within the scope of the present disclosure.
Claims (18)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810453319.8A CN110472460B (en) | 2018-05-11 | 2018-05-11 | Face image processing method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110472460A true CN110472460A (en) | 2019-11-19 |
| CN110472460B CN110472460B (en) | 2024-11-19 |
Family
ID=68504556
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810453319.8A Active CN110472460B (en) | 2018-05-11 | 2018-05-11 | Face image processing method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110472460B (en) |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111428652A (en) * | 2020-03-27 | 2020-07-17 | 恒睿(重庆)人工智能技术研究院有限公司 | Biological characteristic management method, system, equipment and medium |
| CN111507232A (en) * | 2020-04-10 | 2020-08-07 | 三一重工股份有限公司 | Multi-mode multi-strategy fused stranger identification method and system |
| CN111627125A (en) * | 2020-06-02 | 2020-09-04 | 上海商汤智能科技有限公司 | Sign-in method, device, computer equipment and storage medium |
| CN111832499A (en) * | 2020-07-17 | 2020-10-27 | 东华理工大学 | A Simple Face Recognition Classification System |
| CN112666843A (en) * | 2020-12-30 | 2021-04-16 | 杭州雅观科技有限公司 | Smart home access system, access method, computer device and storage medium |
| CN112835876A (en) * | 2019-11-25 | 2021-05-25 | 深圳云天励飞技术有限公司 | A kind of face file deduplication method and related equipment |
| CN112836635A (en) * | 2021-02-02 | 2021-05-25 | 京东数字科技控股股份有限公司 | Image processing method, device and equipment |
| CN113127668A (en) * | 2019-12-31 | 2021-07-16 | 深圳云天励飞技术有限公司 | Data annotation method and related product |
| CN113569874A (en) * | 2021-08-31 | 2021-10-29 | 广联达科技股份有限公司 | License plate number re-identification method, device, computer equipment and storage medium |
| CN118430112A (en) * | 2024-07-03 | 2024-08-02 | 一站发展(北京)云计算科技有限公司 | Human resource authority management device based on face image processing |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102945366A (en) * | 2012-11-23 | 2013-02-27 | 海信集团有限公司 | Method and device for face recognition |
| CN105427421A (en) * | 2015-11-16 | 2016-03-23 | 苏州市公安局虎丘分局 | Entrance guard control method based on face recognition |
| WO2016145940A1 (en) * | 2015-03-19 | 2016-09-22 | 北京天诚盛业科技有限公司 | Face authentication method and device |
| CN106503687A (en) * | 2016-11-09 | 2017-03-15 | 合肥工业大学 | The monitor video system for identifying figures of fusion face multi-angle feature and its method |
| WO2017088434A1 (en) * | 2015-11-26 | 2017-06-01 | 腾讯科技(深圳)有限公司 | Human face model matrix training method and apparatus, and storage medium |
| CN107038400A (en) * | 2016-02-04 | 2017-08-11 | 索尼公司 | Face identification device and method and utilize its target person tracks of device and method |
| CN107066966A (en) * | 2017-04-17 | 2017-08-18 | 宜宾学院 | A kind of face identification method based on key point area image |
- 2018-05-11: Application CN201810453319.8A filed in China (CN); granted as CN110472460B (status: Active)
Cited By (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112835876A (en) * | 2019-11-25 | 2021-05-25 | 深圳云天励飞技术有限公司 | A kind of face file deduplication method and related equipment |
| CN112835876B (en) * | 2019-11-25 | 2023-07-28 | 深圳云天励飞技术有限公司 | Face file duplication removing method and related equipment |
| CN113127668A (en) * | 2019-12-31 | 2021-07-16 | 深圳云天励飞技术有限公司 | Data annotation method and related product |
| CN111428652A (en) * | 2020-03-27 | 2020-07-17 | 恒睿(重庆)人工智能技术研究院有限公司 | Biological characteristic management method, system, equipment and medium |
| CN111428652B (en) * | 2020-03-27 | 2021-06-08 | 恒睿(重庆)人工智能技术研究院有限公司 | Biological characteristic management method, system, equipment and medium |
| CN111507232B (en) * | 2020-04-10 | 2023-07-21 | 盛景智能科技(嘉兴)有限公司 | Stranger identification method and system based on multi-mode multi-strategy fusion |
| CN111507232A (en) * | 2020-04-10 | 2020-08-07 | 三一重工股份有限公司 | Multi-mode multi-strategy fused stranger identification method and system |
| CN111627125B (en) * | 2020-06-02 | 2022-09-27 | 上海商汤智能科技有限公司 | Sign-in method, device, computer equipment and storage medium |
| CN111627125A (en) * | 2020-06-02 | 2020-09-04 | 上海商汤智能科技有限公司 | Sign-in method, device, computer equipment and storage medium |
| CN111832499A (en) * | 2020-07-17 | 2020-10-27 | 东华理工大学 | A Simple Face Recognition Classification System |
| CN112666843A (en) * | 2020-12-30 | 2021-04-16 | 杭州雅观科技有限公司 | Smart home access system, access method, computer device and storage medium |
| CN112836635A (en) * | 2021-02-02 | 2021-05-25 | 京东数字科技控股股份有限公司 | Image processing method, device and equipment |
| CN112836635B (en) * | 2021-02-02 | 2022-11-08 | 京东科技控股股份有限公司 | Image processing method, device and equipment |
| CN113569874A (en) * | 2021-08-31 | 2021-10-29 | 广联达科技股份有限公司 | License plate number re-identification method, device, computer equipment and storage medium |
| CN113569874B (en) * | 2021-08-31 | 2024-11-12 | 广联达科技股份有限公司 | License plate number re-identification method, device, computer equipment and storage medium |
| CN118430112A (en) * | 2024-07-03 | 2024-08-02 | 一站发展(北京)云计算科技有限公司 | Human resource authority management device based on face image processing |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110472460B (en) | 2024-11-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110472460A (en) | Face image processing process and device | |
| TWI766201B (en) | Methods and devices for biological testing and storage medium thereof | |
| US11669607B2 (en) | ID verification with a mobile device | |
| CN108629168B (en) | Facial verification methods, devices and computing devices | |
| CN106203242B (en) | Similar image identification method and equipment | |
| US8879803B2 (en) | Method, apparatus, and computer program product for image clustering | |
| US8792722B2 (en) | Hand gesture detection | |
| US8750573B2 (en) | Hand gesture detection | |
| US9767363B2 (en) | System and method for automatic detection of spherical video content | |
| WO2021139324A1 (en) | Image recognition method and apparatus, computer-readable storage medium and electronic device | |
| US8971591B2 (en) | 3D image estimation for 2D image recognition | |
| WO2019218621A1 (en) | Detection method for living being, device, electronic apparatus, and storage medium | |
| WO2019218824A1 (en) | Method for acquiring motion track and device thereof, storage medium, and terminal | |
| CN112954450B (en) | Video processing method, apparatus, electronic device and storage medium | |
| US12347239B2 (en) | Face liveness detection method, system, apparatus, computer device, and storage medium | |
| CN112149615B (en) | Face living body detection method, device, medium and electronic equipment | |
| CN113228626B (en) | Video monitoring system and method | |
| CN108229375B (en) | Method and device for detecting face image | |
| CN112052832A (en) | Face detection method, device and computer storage medium | |
| CN111783593A (en) | Artificial intelligence-based face recognition method, device, electronic device and medium | |
| CN111738199B (en) | Image information verification method, device, computing device and medium | |
| CN108280388A (en) | The method and apparatus and type of face detection method and device of training face detection model | |
| CN114743277A (en) | Liveness detection method, device, electronic device, storage medium and program product | |
| CN114140839A (en) | Image sending method, device and equipment for face recognition and storage medium | |
| CN113298747B (en) | Image and video detection method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |