CN110991307B - Face recognition methods, devices, equipment and storage media - Google Patents
- Publication number: CN110991307B (application CN201911185272.2A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/161 — Human faces: detection; localisation; normalisation
- G06N3/045 — Neural networks: combinations of networks
- G06V40/168 — Human faces: feature extraction; face representation
- G06V40/45 — Spoof detection: detection of the body part being alive
Abstract
Embodiments of the present invention disclose a face recognition method, device, equipment and storage medium. The method includes: performing face recognition on a video to be detected; and inputting the recognized face video into a liveness detection model to determine whether the face in the face video is a live face, where the liveness detection model is trained on live face videos and non-live face videos, and the non-live face videos include videos of printed faces or videos of faces displayed on screens. By performing face recognition on video, the technical solution prevents recognition errors caused by presenting a photograph as a face, as can happen with still-image recognition. Feeding the face video into the liveness detection model for live face detection reduces the number of external devices, reduces the burden of user cooperation, improves the accuracy of liveness detection, and keeps the model simple.
Description
Technical field
Embodiments of the present invention relate to the field of image processing, and in particular to a face recognition method, device, equipment and storage medium.
Background
As cities become increasingly digitalized, face recognition has become an important technology of the digital city. To prevent users from deceiving a face recognition system with photos or videos in security-critical settings, distinguishing live faces from video faces has gradually become an important technology for protecting property and personal safety.
Existing liveness detection mostly requires the user to perform prescribed actions following system prompts, after which the completed actions are recognized in the background. The existing liveness detection models with relatively accurate recognition mostly rely on such action-cooperation detection, for example blink detection or nod detection in video, while traditional methods also include detection assisted by infrared equipment or by 3D modeling.
However, action-cooperation detection is difficult to recognize reliably and unfriendly to user interaction, and for small and medium-sized applications, adding infrared detection equipment or 3D modeling computation greatly increases cost and reduces product competitiveness. A face recognition method is therefore needed that reduces the burden of user cooperation and improves the accuracy and convenience of face recognition without increasing peripheral cost.
Summary of the invention
The present invention provides a face recognition method, device, equipment and storage medium, so as to improve the accuracy and convenience of live face recognition.
In a first aspect, embodiments of the present invention provide a face recognition method, including:
performing face recognition on a video to be detected; and
inputting the recognized face video into a liveness detection model to determine whether the face in the face video is a live face, where the liveness detection model is trained on live face videos and non-live face videos, and the non-live face videos include videos of printed faces or videos of faces displayed on screens.
Further, before performing face recognition on the video to be detected, the method also includes:
training a convolutional neural network on live face videos and non-live face videos to obtain the liveness detection model.
Further, the number of live face videos is smaller than the number of non-live face videos.
Further, training the convolutional neural network on the live and non-live face videos to obtain the liveness detection model includes:
collecting live face videos and non-live face videos;
scaling the live and non-live face videos to a preset size; and
training the convolutional neural network on the scaled live and non-live face videos to obtain the liveness detection model.
Further, scaling the live and non-live face videos to the preset size includes:
extracting a preset number of pictures from each video at a preset time interval, and scaling the extracted pictures to the preset size; and
linearly stretching the dense optical flow direction field over the extraction period, scaling the direction field picture to the preset size, and combining the scaled extracted pictures with the scaled direction field picture into a training picture for the convolutional neural network.
Further, inputting the recognized face video into the liveness detection model to determine whether the face in the face video is a live face includes:
for a preset number of rounds, extracting a preset number of pictures from the video at a preset time interval, and scaling the extracted pictures to the preset size; and
concluding that the face in the face video is a live face if the number of rounds in which the liveness detection model, given the scaled extracted pictures, determines a live face reaches a count threshold.
In a second aspect, embodiments of the present invention also provide a face recognition device, including:
a video face recognition module, configured to perform face recognition on a video to be detected; and
a live face determination module, configured to input the recognized face video into a liveness detection model to determine whether the face in the face video is a live face, where the liveness detection model is trained on live face videos and non-live face videos, and the non-live face videos include videos of printed faces or videos of faces displayed on screens.
Further, the device also includes:
a model training module, configured to train a convolutional neural network on live and non-live face videos to obtain the liveness detection model.
In a third aspect, embodiments of the present invention also provide equipment, including:
one or more processors; and
a storage apparatus configured to store one or more programs,
where, when the one or more programs are executed by the one or more processors, the one or more processors implement the face recognition method provided in any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the face recognition method provided in any embodiment of the present invention.
In embodiments of the present invention, face recognition is performed on the video to be detected, and the recognized face video is input into a liveness detection model, trained on live and non-live face videos (the latter including videos of printed faces or of faces displayed on screens), to determine whether the face in the face video is a live face. Performing face recognition on video prevents recognition errors caused by presenting a photograph as a face. Feeding the face video into the liveness detection model for live face detection reduces the number of external devices, reduces the burden of user cooperation, improves the accuracy of liveness detection, and keeps the model simple.
Brief description of the drawings
Figure 1 is a flow chart of a face recognition method in Embodiment 1 of the present invention;
Figure 2 is a flow chart of a face recognition method in Embodiment 2 of the present invention;
Figure 3 is a flow chart of a liveness detection model training process in Embodiment 2 of the present invention;
Figure 4 is a flow chart of scaling live and non-live face videos in Embodiment 2 of the present invention;
Figure 5 is a schematic structural diagram of a face recognition device in Embodiment 3 of the present invention;
Figure 6 is a schematic structural diagram of equipment in Embodiment 4 of the present invention.
Detailed description
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it. It should also be noted that, for convenience of description, the drawings show only the parts related to the present invention rather than the complete structure.
Embodiment 1
Figure 1 is a flow chart of a face recognition method provided in Embodiment 1 of the present invention. This embodiment is applicable to live face recognition within face recognition. The method may be executed by a face recognition device, which may be implemented in software and/or hardware and configured on a computing device. The method includes the following steps.
Step 11: perform face recognition on the video to be detected.
Here, the video to be detected can be understood as video continuously captured by a camera filming the area to be detected.
Face recognition here means recognizing face-like contours in the captured video, so as to extract the face video containing the face to be recognized for liveness detection.
Specifically, the camera continuously films the area to be detected, pictures are extracted from the captured video at a fixed time interval, and contour recognition is performed on them. When a face contour is recognized, the video from the moment before that picture until the moment after the last consecutively extracted picture in which the face contour is still recognized is taken as the recognized face video.
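The windowing rule described above can be sketched in pure Python; the fixed sampling interval and the per-picture boolean detector output are assumptions standing in for the camera pipeline and the face detector, which the patent leaves open:

```python
def face_video_window(timestamps, detections, interval):
    """Return the (start, end) time span of the recognized face video:
    from the moment before the first picture in which a face contour is
    detected until the moment after the last consecutively detected one.
    `timestamps` are the sampling times of the extracted pictures and
    `detections` the per-picture face-contour results."""
    start = end = None
    for t, hit in zip(timestamps, detections):
        if hit:
            if start is None:
                start = t - interval   # one sampling step before the first hit
            end = t + interval         # one sampling step after the latest hit
        elif start is not None:
            break                      # the consecutive detection run has ended
    return (start, end)
```

With pictures sampled every 20 ms and detections at 20 ms and 40 ms, the window runs from 0 ms to 60 ms; later, non-consecutive detections are ignored.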
Optionally, face detection may use SSD, MTCNN, S3FD or similar face detection techniques.
The camera may be any device capable of capturing video, which the embodiments of the present invention do not limit; the fixed time interval may be set manually or determined by the properties of the camera itself.
Step 12: input the recognized face video into the liveness detection model to determine whether the face in the face video is a live face, where the liveness detection model is trained on live face videos and non-live face videos, and the non-live face videos include videos of printed faces or videos of faces displayed on screens.
The liveness detection model can be understood as a model trained on many live and non-live face videos to distinguish live faces from non-live ones. Optionally, the training method may be a convolutional neural network or a method such as big-data statistics; the embodiments of the present invention do not limit this.
Specifically, the recognized face video is input into the trained liveness detection model: pictures are extracted from it a preset number of times at a preset time interval, the live-face features in the extracted pictures are compared with the live-face features of the model, and whether the face in the recognized face video is a live face is judged from the number of matching features.
In the embodiments of the present invention, face recognition is performed on the video to be detected, and the recognized face video is input into the liveness detection model, trained on live and non-live face videos (the latter including videos of printed faces or of faces displayed on screens), to determine whether the face is live. Performing face recognition on video prevents recognition errors caused by presenting a photograph as a face; feeding the face video into the liveness detection model reduces the number of external devices, reduces the burden of user cooperation, improves the accuracy of liveness detection, and keeps the model simple.
Embodiment 2
Figure 2 is a flow chart of a face recognition method provided in Embodiment 2 of the present invention. The technical solution of this embodiment refines the solution above and includes the following steps.
Step 21: train a convolutional neural network on live face videos and non-live face videos to obtain the liveness detection model.
A convolutional neural network is a class of feed-forward neural network with convolution operations and a deep structure, one of the representative algorithms of deep learning.
Optionally, the number of live face videos should be smaller than the number of non-live face videos. The numbers required vary with the training method and are not specifically limited by the embodiments of the present invention.
Specifically, Figure 3 shows a flow chart of the liveness detection model training process, which includes the following steps.
Step 211: collect live face videos and non-live face videos.
The non-live face videos may include videos of printed faces or of faces displayed on screens.
Optionally, there should be no fewer than 5,000 live face videos and no fewer than 10,000 non-live face videos.
Step 212: scale the live and non-live face videos to a preset size.
Optionally, the preset size may be 32 × 32 pixels.
Specifically, Figure 4 shows a flow chart of scaling the live and non-live face videos, which includes the following steps.
Step 2121: extract a preset number of pictures from each video at a preset time interval, and scale the extracted pictures to the preset size.
Specifically, pictures are extracted from each video at the preset time interval until their number reaches the preset number, and the width and height of the extracted pictures are scaled to the preset size, which may be set manually or according to the needs of the training model.
Optionally, the preset time interval may be 20 ms and the preset number of pictures may be 8; the embodiments of the present invention do not specifically limit these.
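With the optional values above (20 ms interval, 8 pictures), the sampling times of one extraction period can be written out directly; the start time of 0 ms is an arbitrary assumption for illustration:

```python
def sample_times(start_ms, interval_ms, count):
    """Timestamps (ms) of the pictures extracted from one video: `count`
    pictures spaced `interval_ms` apart, starting at `start_ms`."""
    return [start_ms + i * interval_ms for i in range(count)]

times = sample_times(0, 20, 8)    # [0, 20, 40, 60, 80, 100, 120, 140]
span_ms = times[-1] - times[0]    # the extraction period spans 140 ms
```

This 140 ms span is the extraction period over which the dense optical flow direction field in the next step is computed.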
Step 2122: linearly stretch the dense optical flow direction field over the extraction period, scale the direction field picture to the preset size, and combine the scaled extracted pictures with the scaled direction field picture into the training picture of the convolutional neural network.
Here, the extraction period is the time span between the first and the last extracted picture.
The dense optical flow direction field can be understood as a point-by-point image registration: it consists of the offsets of all points in the image over the preset time span, and this collection is called the dense optical flow direction field.
Optionally, the dense optical flow direction field may be linearly stretched from [0, 360] to [0, 255].
For example, the dense optical flow direction field computed over the extraction period is linearly stretched from [0, 360] to [0, 255] and the direction field picture is scaled to 32 × 32 pixels; the scaled direction field picture together with the 8 pictures extracted in the same period forms one 96 × 96-pixel training picture for the convolutional neural network.
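A minimal sketch of this composition step in pure Python. The 3 × 3 row-major tiling of the eight 32 × 32 frames plus the direction field picture into one 96 × 96 image is an assumption consistent with the stated sizes; the patent does not spell out the tile order:

```python
def stretch_direction(deg):
    """Linearly stretch a flow direction in [0, 360] to a byte in [0, 255]."""
    return int(round(deg / 360.0 * 255.0))

def compose_training_image(frames, flow_field, tile=32):
    """Tile the eight scaled 32x32 frames plus the one scaled 32x32
    direction-field picture (nine tiles in all) into a single 96x96
    training image.  Images are lists of rows of pixel values; tiles
    are placed row-major, the flow field last (assumed order)."""
    tiles = frames + [flow_field]
    assert len(tiles) == 9
    side = tile * 3                                # 96 when tile == 32
    big = [[0] * side for _ in range(side)]
    for k, img in enumerate(tiles):
        r0, c0 = (k // 3) * tile, (k % 3) * tile   # top-left corner of tile k
        for r in range(tile):
            for c in range(tile):
                big[r0 + r][c0 + c] = img[r][c]
    return big

# Example: eight dummy frames filled with their index, flow field filled with 8.
frames = [[[k] * 32 for _ in range(32)] for k in range(8)]
flow = [[8] * 32 for _ in range(32)]
big = compose_training_image(frames, flow)         # 96 x 96 training picture
```

A real pipeline would compute the direction field with a dense optical flow method (e.g. Farnebäck) and use actual pixel data; the dummy arrays only demonstrate the layout.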
Step 213: train the convolutional neural network on the scaled live and non-live face videos to obtain the liveness detection model.
For example, the liveness detection model may be trained with the Caffe framework. The model consists of a base network and a custom head: the base network uses the MobileNetV2 model, and the custom head may be designed as
Dense(240) -> Activation(relu) -> BatchNormalization() -> Dropout(0.25) -> Dense(2)
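As a rough sanity check, the parameter count of this custom head can be computed from the layer shapes. The 1280-dimensional feature vector assumed to feed the head is the usual MobileNetV2 output width, not something the patent states:

```python
def dense_params(n_in, n_out):
    """Weights plus biases of a fully connected (Dense) layer."""
    return n_in * n_out + n_out

def head_params(n_in=1280):
    """Parameter count of the head
    Dense(240) -> Activation(relu) -> BatchNormalization() -> Dropout(0.25) -> Dense(2)."""
    d1 = dense_params(n_in, 240)   # Dense(240)
    bn = 4 * 240                   # BatchNorm: gamma, beta, moving mean and variance
    d2 = dense_params(240, 2)      # Dense(2): live vs. non-live logits
    return d1 + bn + d2            # relu and dropout carry no parameters

total = head_params()
```

The final Dense(2) layer matches the binary live/non-live decision the model has to make.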
Optionally, the Adam optimizer may be used.
Step 22: for a preset number of rounds, extract a preset number of pictures from the video at a preset time interval, and scale the extracted pictures to the preset size.
Specifically, pictures are extracted from the recognized face video at the preset time interval until the number of extracted pictures reaches the preset number; this is repeated for the preset number of rounds, yielding (preset rounds) × (preset pictures) extracted pictures. For each round, the dense optical flow direction field picture over the extraction period and the extracted pictures are scaled to the preset size and combined into one large picture.
Optionally, the preset time interval may be 20 ms, the preset number of pictures 8, the preset size 32 × 32 pixels, and the preset number of rounds 3.
For example, a picture is extracted from the recognized face video every 20 ms; each group of 8 extracted pictures is scaled to 32 × 32 pixels; for each such group, the dense optical flow direction field over that period is computed and its picture scaled to 32 × 32 pixels; finally, each group of 8 extracted pictures is combined with its corresponding direction field picture into one large picture.
Step 23: if the number of rounds in which the liveness detection model, given the scaled extracted pictures, determines that the face in the face video is a live face reaches the count threshold, the face in the face video is a live face.
Optionally, the count threshold may be 2/3 of the preset number of rounds, or another proportion or value preset by the user; the embodiments of the present invention do not specifically limit this.
Specifically, when the combinations of scaled extracted pictures and their corresponding dense optical flow direction field pictures, input into the liveness detection model, are judged to show a live face in at least the preset number or proportion of rounds, the face in the recognized face video is considered a live face; otherwise it is considered a non-live face.
For example, when the preset number of rounds is 3, the picture combinations obtained in 3 consecutive rounds are input into the liveness detection model; if 2 or 3 of the 3 detections judge the face to be live, the face in the recognized face video is considered a live face.
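The voting rule in this example can be written as a small pure-Python function; the booleans stand in for the per-round outputs of the liveness detection model, which is not reproduced here:

```python
def is_live_face(round_results, threshold=(2, 3)):
    """Decide liveness by voting over the per-round model outputs.
    `round_results` holds one boolean per detection round; the face is
    live when the live verdicts reach ceil(rounds * 2/3), matching the
    example of at least 2 out of 3 rounds."""
    num, den = threshold
    needed = -(-len(round_results) * num // den)   # exact ceiling division
    return sum(round_results) >= needed
```

With 3 rounds, `is_live_face([True, True, False])` accepts the face, while a single live verdict out of 3 rejects it.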
In the technical solution of this embodiment, live and non-live face videos are collected and used to train a convolutional neural network, building the liveness detection model; pictures are extracted from the face video to be recognized at a fixed time interval, scaled, combined with the dense optical flow direction field picture of the corresponding period, and input into the liveness detection model; and whether the video shows a live face is judged against the count threshold. This solves the recognition errors caused by presenting photographs as faces and the high cooperation burden of interactive recognition, making live face recognition more convenient and accurate.
Embodiment 3
Figure 5 is a schematic structural diagram of a face recognition device provided in Embodiment 3 of the present invention. The face recognition device includes a video face recognition module 31 and a live face determination module 32.
The video face recognition module 31 is configured to perform face recognition on the video to be detected. The live face determination module 32 is configured to input the recognized face video into the liveness detection model to determine whether the face in the face video is a live face, where the liveness detection model is trained on live face videos and non-live face videos, and the non-live face videos include videos of printed faces or videos of faces displayed on screens.
The technical solution of this embodiment solves the recognition errors caused by presenting photographs as faces, the high cooperation burden of interactive recognition, and the need for many external devices, making live face recognition more convenient and accurate and the model simpler and easier to deploy.
Optionally, the device also includes:
a model training module, configured to train a convolutional neural network on live and non-live face videos to obtain the liveness detection model.
Optionally, the model training module includes:
a face video collection unit, configured to collect live face videos and non-live face videos;
a video scaling unit, configured to scale the live and non-live face videos to a preset size; and
a model training unit, configured to train a convolutional neural network on the scaled live and non-live face videos to obtain the liveness detection model.
Optionally, the video scaling unit is specifically configured to: extract a preset number of pictures from each video at a preset time interval and scale them to the preset size; and linearly stretch the dense optical flow direction field over the extraction period and scale the direction field picture to the preset size, the scaled extracted pictures and the scaled direction field picture forming the training picture of the convolutional neural network.
Optionally, the live face determination module 32 includes:
a picture processing unit, configured to extract, for a preset number of rounds, a preset number of pictures from the video at a preset time interval, and to scale the extracted pictures to the preset size; and
a face determination unit, configured to determine that the face in the face video is a live face if the number of rounds in which the liveness detection model, given the scaled extracted pictures, determines a live face reaches the count threshold.
The face recognition device provided in the embodiments of the present invention can execute the face recognition method provided in any embodiment of the present invention, and has the corresponding functional modules and beneficial effects.
Embodiment 4
Figure 6 is a schematic structural diagram of a device provided in Embodiment 4 of the present invention. As shown in Figure 6, the device includes a processor 41, a memory 42, an input device 43, and an output device 44. The number of processors 41 in the device may be one or more; one processor 41 is taken as an example in Figure 6. The processor 41, memory 42, input device 43, and output device 44 in the device may be connected by a bus or by other means; connection by a bus is taken as an example in Figure 6.
As a computer-readable storage medium, the memory 42 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the face recognition method in the embodiments of the present invention (for example, the video face recognition module 31 and the living face determination module 32). The processor 41 runs the software programs, instructions, and modules stored in the memory 42 to perform the various functional applications and data processing of the device, thereby implementing the face recognition method described above.
The memory 42 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and at least one application program required for a function; the data storage area may store data created through use of the terminal, and the like. In addition, the memory 42 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 42 may further include memory located remotely from the processor 41, and such remote memory may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 43 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the device, and may include a touch screen, a keyboard, a mouse, and the like. The output device 44 may include a display device such as a display screen.
Embodiment 5
Embodiment 5 of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a face recognition method. The method includes:
performing face recognition on a video to be detected; and
inputting the recognized face video into a liveness detection model to determine whether the face in the face video is a live face, where the liveness detection model is trained on live face videos and non-live face videos, and the non-live face videos include videos of printed faces or videos of faces displayed on screens.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations of the face recognition method provided by any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software together with the necessary general-purpose hardware, or by hardware alone, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk, and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
It is worth noting that, in the above apparatus embodiments, the units and modules included are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only for ease of distinguishing them from one another and are not intended to limit the protection scope of the present invention.
Note that the above are only the preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them; without departing from the concept of the present invention, it may include further equivalent embodiments, and its scope is determined by the appended claims.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911185272.2A CN110991307B (en) | 2019-11-27 | 2019-11-27 | Face recognition methods, devices, equipment and storage media |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911185272.2A CN110991307B (en) | 2019-11-27 | 2019-11-27 | Face recognition methods, devices, equipment and storage media |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110991307A CN110991307A (en) | 2020-04-10 |
| CN110991307B true CN110991307B (en) | 2023-09-26 |
Family
ID=70087526
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911185272.2A Active CN110991307B (en) | 2019-11-27 | 2019-11-27 | Face recognition methods, devices, equipment and storage media |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110991307B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113486853B (en) * | 2021-07-29 | 2024-02-27 | 北京百度网讯科技有限公司 | Video detection method and device, electronic equipment and medium |
| CN114565112A (en) * | 2022-01-28 | 2022-05-31 | 珠海华发金融科技研究院有限公司 | A travel management system |
Citations (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW200809700A (en) * | 2006-08-15 | 2008-02-16 | Compal Electronics Inc | Method for recognizing face area |
| KR20090050199A (en) * | 2007-11-15 | 2009-05-20 | 주식회사 휴민텍 | Real-Time Facial Expression Recognition Using Optical Flow and Hidden Markov Models |
| EP2843621A1 (en) * | 2013-08-26 | 2015-03-04 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | Human pose calculation from optical flow data |
| CN104504366A (en) * | 2014-11-24 | 2015-04-08 | 上海闻泰电子科技有限公司 | System and method for smiling face recognition based on optical flow features |
| CN105389567A (en) * | 2015-11-16 | 2016-03-09 | 上海交通大学 | Group anomaly detection method based on a dense optical flow histogram |
| CN105488519A (en) * | 2015-11-13 | 2016-04-13 | 同济大学 | Video classification method based on video scale information |
| CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of living body faces recognition methods and system |
| CN108108676A (en) * | 2017-12-12 | 2018-06-01 | 北京小米移动软件有限公司 | Face identification method, convolutional neural networks generation method and device |
| CN108596041A (en) * | 2018-03-28 | 2018-09-28 | 中科博宏(北京)科技有限公司 | A kind of human face in-vivo detection method based on video |
| CN109255322A (en) * | 2018-09-03 | 2019-01-22 | 北京诚志重科海图科技有限公司 | A kind of human face in-vivo detection method and device |
| CN109598242A (en) * | 2018-12-06 | 2019-04-09 | 中科视拓(北京)科技有限公司 | A kind of novel biopsy method |
| CN109784215A (en) * | 2018-12-27 | 2019-05-21 | 金现代信息产业股份有限公司 | A kind of in-vivo detection method and system based on improved optical flow method |
| CN109977846A (en) * | 2019-03-22 | 2019-07-05 | 中国科学院重庆绿色智能技术研究院 | A kind of in-vivo detection method and system based on the camera shooting of near-infrared monocular |
| CN110458063A (en) * | 2019-07-30 | 2019-11-15 | 西安建筑科技大学 | Face liveness detection method for anti-video and photo spoofing |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| AUPQ896000A0 (en) * | 2000-07-24 | 2000-08-17 | Seeing Machines Pty Ltd | Facial image processing system |
| TWI430185B (en) * | 2010-06-17 | 2014-03-11 | Inst Information Industry | Facial expression recognition systems and methods and computer program products thereof |
Non-Patent Citations (2)
| Title |
|---|
| Ren C. Luo, "Alignment and tracking of facial features with component-based active appearance models and optical flow", 2011 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 1058-1063 * |
| Zhang Xuange, "Micro-expression recognition based on optical flow combined with LBP-TOP features", Journal of Jilin University (Information Science Edition), 2015, vol. 33, no. 5, pp. 516-523 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110532984B (en) | Key point detection method, gesture recognition method, device and system | |
| CN108805047B (en) | Living body detection method and device, electronic equipment and computer readable medium | |
| CN105335722B (en) | Detection system and method based on depth image information | |
| Qiang et al. | SqueezeNet and fusion network-based accurate fast fully convolutional network for hand detection and gesture recognition | |
| CN111989689A (en) | Method for recognizing objects in images and mobile device for performing the method | |
| WO2019196308A1 (en) | Device and method for generating face recognition model, and computer-readable storage medium | |
| WO2019095571A1 (en) | Human-figure emotion analysis method, apparatus, and storage medium | |
| WO2021012494A1 (en) | Deep learning-based face recognition method and apparatus, and computer-readable storage medium | |
| CN108764133A (en) | Image-recognizing method, apparatus and system | |
| WO2019033525A1 (en) | Au feature recognition method, device and storage medium | |
| CN106570497A (en) | Text detection method and device for scene image | |
| WO2019011073A1 (en) | Human face live detection method and related product | |
| CN108664843B (en) | Living object recognition method, living object recognition apparatus, and computer-readable storage medium | |
| CN104751153B (en) | A kind of method and device of identification scene word | |
| CN108781252B (en) | Image shooting method and device | |
| CN110717407A (en) | Human face recognition method, device and storage medium based on lip language password | |
| CN111160288A (en) | Gesture key point detection method and device, computer equipment and storage medium | |
| WO2014180108A1 (en) | Systems and methods for matching face shapes | |
| CN113449726B (en) | Text matching and recognition method and device | |
| CN114627534B (en) | Living body discriminating method, electronic apparatus, and storage medium | |
| CN115082994A (en) | Face liveness detection method, training method and device of liveness detection network model | |
| CN110991307B (en) | Face recognition methods, devices, equipment and storage media | |
| CN112949689A (en) | Image recognition method and device, electronic equipment and storage medium | |
| CN111488732A (en) | Deformed keyword detection method, system and related equipment | |
| CN116052225A (en) | Palmprint recognition method, electronic device, storage medium and computer program product |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
Denomination of invention: Method, device, equipment, and storage medium for facial recognition Granted publication date: 20230926 Pledgee: Bank of Shanghai Co.,Ltd. Beijing Branch Pledgor: RUN TECHNOLOGIES Co.,Ltd. BEIJING Registration number: Y2024980059997 |