CN112001932B - Face recognition method, device, computer equipment and storage medium
- Publication number
- CN112001932B (application CN202010902551.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- gesture
- pose
- image
- recognition
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/223—Analysis of motion using block-matching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The application relates to image recognition in artificial intelligence, and provides a face recognition method, a face recognition device, computer equipment and a storage medium. The method comprises the following steps: acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree; performing image segmentation on the face image to be recognized to obtain a face region; recognizing a face pose corresponding to the face region to obtain face pose information; determining a target face pose sub-interval matched with the face pose information, and acquiring a corresponding face pose matching degree according to the target face pose sub-interval; and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree. By adopting the method, the accuracy of face recognition can be improved.
Description
Technical Field
The present application relates to the field of internet technologies, and in particular, to a face recognition method, apparatus, computer device, and storage medium.
Background
With the development of artificial intelligence, face recognition technology has emerged. Face recognition is a biometric technology that identifies a person based on facial feature information. It is usually grouped with image recognition as a family of related technologies in which a camera or video camera captures images or video streams containing faces, the faces in the images are automatically detected and tracked, and recognition is then performed on the detected faces. At present, face recognition is carried out by extracting face image features and comparing them with stored face features to obtain a face image recognition result. However, recognition that relies only on extracting face image features and comparing them with stored face features has low accuracy.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a face recognition method, apparatus, computer device, and storage medium that can improve face recognition accuracy.
A face recognition method, the method comprising:
acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree;
performing image segmentation on the face image to be recognized to obtain a face region;
recognizing a face pose corresponding to the face region to obtain face pose information;
determining a target face pose sub-interval matched with the face pose information, and acquiring a corresponding face pose matching degree according to the target face pose sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree.
In one embodiment, obtaining the corresponding face identity matching degree according to the face image to be recognized includes:
inputting the face image to be recognized into a face recognition model for face recognition to obtain the face identity matching degree; the face recognition model is obtained by taking a training face image as input, taking face identity labels corresponding to the training face image as labels, and training by using a convolutional neural network.
In one embodiment, determining the face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree includes:
acquiring a face identity weight corresponding to the face identity matching degree and a face pose weight corresponding to the face pose matching degree;
performing weighted calculation according to the face identity weight and the face identity matching degree to obtain a face identity weighted matching degree;
performing weighted calculation according to the face pose weight and the face pose matching degree to obtain a face pose weighted matching degree;
and obtaining a target face matching degree according to the face identity weighted matching degree and the face pose weighted matching degree, and when the target face matching degree exceeds a preset threshold, obtaining the face recognition result corresponding to the face image to be recognized as face recognition passing.
A face recognition device, the device comprising:
the identity matching module is used for acquiring a face image to be recognized and obtaining a corresponding face identity matching degree according to the face image to be recognized;
the image segmentation module is used for performing image segmentation on the face image to be recognized to obtain a face region;
the pose recognition module is used for recognizing a face pose corresponding to the face region to obtain face pose information;
the pose matching module is used for determining a target face pose sub-interval matched with the face pose information and acquiring a corresponding face pose matching degree according to the target face pose sub-interval;
the result determining module is used for determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree;
performing image segmentation on the face image to be recognized to obtain a face region;
recognizing a face pose corresponding to the face region to obtain face pose information;
determining a target face pose sub-interval matched with the face pose information, and acquiring a corresponding face pose matching degree according to the target face pose sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree;
performing image segmentation on the face image to be recognized to obtain a face region;
recognizing a face pose corresponding to the face region to obtain face pose information;
determining a target face pose sub-interval matched with the face pose information, and acquiring a corresponding face pose matching degree according to the target face pose sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree.
According to the face recognition method, the device, the computer equipment and the storage medium, the face identity matching degree is obtained by recognizing the face image to be recognized, the face pose information of the face image to be recognized is then recognized, and the face pose matching degree is obtained according to the face pose information; the face recognition result corresponding to the face image to be recognized is then determined according to the face identity matching degree and the face pose matching degree. That is, the face identity matching degree and the face pose matching degree are used together to determine the face recognition result, so that the result is determined from different kinds of information and the accuracy of the obtained face recognition result is improved.
Drawings
FIG. 1 is a flow chart of a face recognition method in one embodiment;
FIG. 2 is a flow chart of determining a face region in one embodiment;
FIG. 3 is a schematic diagram of an image segmentation model in an embodiment;
FIG. 4 is a diagram illustrating the result of image segmentation in one embodiment;
FIG. 5 is a flow diagram of training an image segmentation model in one embodiment;
FIG. 6 is a schematic flow chart of face region pose recognition in one embodiment;
FIG. 7 is a schematic diagram of a face pose recognition model in an embodiment;
FIG. 8 is a schematic diagram illustrating the result of face pose recognition in one embodiment;
FIG. 9 is a flow chart of training a face pose recognition model in one embodiment;
FIG. 10 is a flow chart of determining a face pose matching degree in one embodiment;
FIG. 11 is a flow chart of obtaining a distribution diagram in one embodiment;
FIG. 12 is a flow chart of establishing a distribution diagram in one embodiment;
FIG. 13 is a schematic diagram of a distribution diagram in one embodiment;
FIG. 14 is a flow chart of obtaining a face pose matching degree in an embodiment;
FIG. 15 is a flowchart of determining a target face pose sub-interval in an embodiment;
FIG. 16 is a flowchart of obtaining a face recognition result in an embodiment;
FIG. 17 is a flowchart of a face recognition method in an embodiment;
FIG. 18 is a schematic diagram of a face recognition unlocking scenario in one embodiment;
FIG. 19 is an interface schematic diagram of successful face recognition unlocking in the embodiment of FIG. 18;
FIG. 20 is a schematic diagram of a face recognition unlocking scenario in another embodiment;
FIG. 21 is a schematic diagram of an application scenario of a face recognition method in a specific embodiment;
FIG. 22 is a schematic diagram of a face recognition scenario in the embodiment of FIG. 21;
FIG. 23 is a block diagram of a face recognition device in one embodiment;
FIG. 24 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Computer Vision (CV) is the science of studying how to make machines "see"; more specifically, it uses cameras and computers in place of human eyes to perform machine vision tasks such as recognition, tracking and measurement on a target, and further performs graphic processing so that the result is an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies and attempts to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-domain interdisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied throughout the various fields of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from demonstrations.
The scheme provided by the embodiment of the application relates to technologies such as image recognition, deep learning and the like of artificial intelligence, and is specifically described by the following embodiments:
In one embodiment, as shown in fig. 1, a face recognition method is provided. The method is described here as applied to a terminal; it is understood that the method may also be applied to a server, or to a system including the terminal and the server and implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
Step 102, obtaining a face image to be recognized, and obtaining a corresponding face identity matching degree according to the face image to be recognized.
A face image is an image that includes a face, and the face image to be recognized is a face image on which recognition needs to be performed. The face identity is preset identity information corresponding to the face; the identity information can include a face image, a face identifier, a face age, a face gender, and the like. The face identifier is used for uniquely identifying the corresponding face, and may be a name, an identification card number, a passport number, and so on. The face identity matching degree refers to the degree to which the face image to be recognized matches the face identity, obtained by performing face recognition on the face image to be recognized. Face recognition is a biometric technology that identifies a person based on facial feature information: a camera or video camera captures images or video streams containing faces, the faces in the images are automatically detected and tracked, and recognition is then performed on the detected faces.
Specifically, the terminal may acquire the face image to be recognized through a camera device, where the camera device is a device for image acquisition, such as a camera or a video camera. The terminal may also acquire a face image to be recognized stored in a memory. The terminal may also acquire the image to be recognized through an instant messaging application, that is, software for online chat and communication based on instant messaging technology, such as the QQ application, the WeChat application, the DingTalk application, the ICQ application, the MSN Messenger application, and so on. The terminal may also obtain the face image to be recognized from the internet, for example by obtaining a video image from a video website and extracting the face image from it, or by obtaining a face image from the internet directly. When the terminal obtains the face image to be recognized, face recognition is performed on it to obtain the corresponding face identity matching degree. Various face recognition algorithms can be used: after the face is detected and the key facial feature points are located, the main face region can be cropped out and, after preprocessing, fed to the recognition algorithm at the back end. The recognition algorithm extracts the face features and compares them with the stored face images of known face identities to complete the final classification. Face recognition algorithms include, but are not limited to, algorithms based on facial feature points, algorithms based on the whole face image, template-based algorithms, algorithms using neural networks, and algorithms using support vector machines. For example, a neural network model based on a CNN (Convolutional Neural Network) may be used to recognize the face image to be recognized to obtain the corresponding face identity matching degree. The terminal includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, camera devices, and other devices capable of performing face recognition.
Step 104, performing image segmentation on the face image to be recognized to obtain a face region.
Image segmentation refers to dividing the face image to be recognized into regions by using an image segmentation algorithm. Image segmentation algorithms include, but are not limited to, threshold-based methods, region-based methods, edge-based methods, methods based on specific theories, methods based on depth information, methods based on prior information, methods based on neural networks, and the like. The face region is the partial region of the face image to be recognized that contains the face.
Specifically, the terminal uses an image segmentation algorithm to segment the face image to be recognized and crops out the face region according to the segmentation result. For example, the terminal may use a convolutional neural network model to perform image segmentation on the face image to be recognized to obtain the face region.
Step 106, recognizing a face pose corresponding to the face region to obtain face pose information.
The face pose refers to the position of the face in the face image to be recognized relative to the camera device. The face pose includes, but is not limited to, an angle pose and a distance pose: the angle pose is the angular position between the face in the face image to be recognized and the camera device, and the distance pose is the distance between the face in the face image to be recognized and the camera device. Face pose information refers to the specific values of the face pose, including but not limited to distance pose information and angle pose information, where distance pose information is the distance between the face and the camera device and angle pose information is the angular position between the face and the camera device.
Specifically, the terminal uses a pre-trained face pose recognition model to recognize the face pose corresponding to the face region to obtain the face pose information, where the face pose recognition model is obtained by training a deep neural network according to training face region images and corresponding face pose labels. The face pose recognition model can be trained in advance in a server and then deployed to the terminal for use, or it can be trained directly in the terminal and deployed there.
Step 108, determining a target face pose sub-interval matched with the face pose information, and acquiring a corresponding face pose matching degree according to the target face pose sub-interval.
A face pose sub-interval is obtained by dividing an interval established from all the face pose information corresponding to the historical user, that is, an interval established from the positions, relative to the camera device, of the faces in all the face images corresponding to the historical user. For example, when the face pose is a distance pose, a distance pose interval is obtained from the maximum and minimum distances between the face and the camera device in the face images; when the face pose is an angle pose, an angle pose interval is obtained from the maximum and minimum angles between the face and the camera device in the face images. The established pose interval is then divided to obtain pose sub-intervals, namely the face pose sub-intervals: a distance pose interval is divided into distance pose sub-intervals, and an angle pose interval is divided into angle pose sub-intervals. The target face pose sub-interval is the face pose sub-interval in which the face pose information corresponding to the face image to be recognized falls. The face pose matching degree refers to the degree of matching with the face identity obtained from the target face pose sub-interval; a correspondence between each face pose sub-interval and its face pose matching degree is established in advance according to the face identities corresponding to the historical users.
Specifically, the terminal matches the face pose information against the face pose sub-intervals to obtain the target face pose sub-interval. For example, distance pose information is matched against the distance pose sub-intervals, and angle pose information is matched against the angle pose sub-intervals. The terminal then acquires the face pose matching degree corresponding to the target face pose sub-interval according to the stored correspondence between face pose sub-intervals and face pose matching degrees, that is, the target face pose sub-interval is matched against each stored face pose sub-interval and the matching degree associated with the matching sub-interval is obtained.
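For illustration only, the following minimal Python sketch (the function name and the stored sub-intervals are hypothetical, not taken from the embodiment) shows one way the matching of distance pose information against stored face pose sub-intervals could be implemented.

```python
# Illustrative sketch: look up the target face pose sub-interval for a distance pose
# value and return the stored face pose matching degree.
from typing import List, Tuple

# Each entry: ((interval_start, interval_end), face_pose_matching_degree),
# built in advance from the user's historical face pose information.
DistanceSubIntervals = List[Tuple[Tuple[float, float], float]]

def lookup_pose_matching_degree(distance_pose: float,
                                sub_intervals: DistanceSubIntervals) -> float:
    """Return the face pose matching degree of the sub-interval containing the value.

    If the value falls outside every stored sub-interval, return 0.0, meaning the
    observed pose never occurred in the user's history.
    """
    for (low, high), matching_degree in sub_intervals:
        if low <= distance_pose <= high:   # target face pose sub-interval found
            return matching_degree
    return 0.0

# Hypothetical stored association for one user (distances in centimetres).
stored = [((20.0, 30.0), 0.6), ((30.0, 40.0), 0.3), ((40.0, 50.0), 0.1)]
print(lookup_pose_matching_degree(32.0, stored))  # -> 0.3
```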
Step 110, determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree.
The face recognition result indicates whether the face in the face image to be recognized is consistent with the face identity. When the face in the face image to be recognized is consistent with the face identity, the face recognition result is that face recognition passes; when it is not consistent, the face recognition result is that face recognition fails.
Specifically, when the face identity matching degree exceeds a preset face identity matching degree threshold and the face pose matching degree exceeds a preset face pose matching degree threshold, the terminal judges that the face in the face image to be recognized is consistent with the face identity, and the face recognition result is that face recognition passes. When the face identity matching degree does not exceed the preset face identity matching degree threshold, or the face pose matching degree does not exceed the preset face pose matching degree threshold, the terminal judges that the face in the face image to be recognized is not consistent with the face identity, and the face recognition result is that face recognition fails. The preset face identity matching degree threshold and the preset face pose matching degree threshold are thresholds set in advance for judging whether the face in the face image to be recognized is consistent with the face identity.
In one embodiment, a weighted summation can be performed on the face identity matching degree and the face pose matching degree to obtain a weighted summation result. When the weighted summation result exceeds a preset threshold, the face in the face image to be recognized is consistent with the face identity and the face recognition result is that face recognition passes; when the weighted summation result does not exceed the preset threshold, the face in the face image to be recognized is not consistent with the face identity and the face recognition result is that face recognition fails.
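For illustration only, the following sketch shows the weighted-summation decision described above with assumed weights and an assumed preset threshold; the actual weights and threshold are not specified in this embodiment.

```python
# Illustrative sketch: fuse the face identity matching degree and the face pose
# matching degree by weighted summation and compare the result with a preset threshold.
def face_recognition_result(identity_matching: float,
                            pose_matching: float,
                            identity_weight: float = 0.7,   # assumed weight
                            pose_weight: float = 0.3,       # assumed weight
                            threshold: float = 0.75) -> bool:
    """Return True when face recognition passes, False otherwise."""
    weighted_identity = identity_weight * identity_matching  # face identity weighted matching degree
    weighted_pose = pose_weight * pose_matching              # face pose weighted matching degree
    target_matching = weighted_identity + weighted_pose      # target face matching degree
    return target_matching > threshold

print(face_recognition_result(0.95, 0.6))   # 0.665 + 0.18 = 0.845 -> True (passes)
print(face_recognition_result(0.80, 0.1))   # 0.56 + 0.03 = 0.59  -> False (fails)
```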
In one embodiment, the terminal may collect the face image to be recognized and send it to a server, where the server may be a server providing face recognition services for the terminal, for example a cloud server. The server receives the face image to be recognized, obtains the corresponding face identity matching degree according to the face image to be recognized, performs image segmentation on the face image to be recognized to obtain a face region, and recognizes the face pose corresponding to the face region to obtain face pose information. The server then determines a target face pose sub-interval matched with the face pose information, acquires the corresponding face pose matching degree according to the target face pose sub-interval, and determines the face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree. The server returns the face recognition result to the terminal, and the terminal displays it; obtaining the face recognition result through the server can improve efficiency.
According to the face recognition method, the face identity matching degree is obtained by recognizing the face image to be recognized, the face pose information of the face image to be recognized is then recognized, and the face pose matching degree is obtained according to the face pose information; the face recognition result corresponding to the face image to be recognized is then determined according to the face identity matching degree and the face pose matching degree. That is, the face identity matching degree and the face pose matching degree are used together to determine the face recognition result, so that the result is determined from different kinds of information and the accuracy of the obtained face recognition result is improved.
In one embodiment, after step 110, that is, after determining the face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree, the method further includes the following step:
and when the face recognition result is that the face recognition passes, executing corresponding target operation according to the face recognition passing result.
The target operation refers to an operation executed by the terminal according to the result of face recognition. For example, the target operation may be an unlocking operation, a payment operation, a login operation, a shutdown operation, an early warning operation, a terminal opening operation, a recording operation, and the like.
Specifically, when the terminal detects that the face recognition result is that face recognition passes, it executes a corresponding unlocking operation according to that result. When the terminal is a smartphone, the unlocking operation switches the terminal screen from a locked state, in which the smartphone cannot be used normally, to an unlocked state, in which it can. In one embodiment, the unlocking operation may also unlock an intelligent electronic access control system: when the face recognition result is that face recognition passes, an unlocking instruction is sent to the access control system according to that result, and the access control system receives the unlocking instruction and executes the unlocking operation to open the door. In one embodiment, the payment operation refers to making a payment by face: when the face recognition result is that face recognition passes, the electronic payment operation is executed. In one embodiment, when the face recognition result is that face recognition passes, the login operation is executed to enter the corresponding website or application that requires login; the login operation is an operation for performing application login. In one embodiment, when the face recognition result is that face recognition passes, the terminal executes a shutdown operation to shut down. In one embodiment, when the face recognition result is that face recognition passes, the terminal executes an early warning operation to give a face identity early warning prompt; the early warning operation refers to an early warning performed by face recognition monitoring equipment. In one embodiment, when the face recognition result is that face recognition passes, the terminal performs a recording operation to record the current time point and the corresponding face identity.
In one embodiment, when the face recognition result is that face recognition fails, the face recognition result corresponding to the face image to be recognized is recorded as a failure.
In the above embodiment, when the face recognition result is that face recognition passes, the corresponding target operation is executed according to that result. Since the obtained face recognition result is more accurate, the corresponding target operation can be determined and executed more accurately, which facilitates subsequent use.
In one embodiment, step 102, obtaining the corresponding face identity matching degree according to the face image to be recognized, includes:
inputting the face image to be recognized into a face recognition model for face recognition to obtain the face identity matching degree; the face recognition model is obtained by taking a training face image as input, taking face identity labels corresponding to the training face image as labels, and training by using a convolutional neural network.
A training face image is a face image used for training the face recognition model. The training face images may come from face image datasets available on the internet for face recognition model training, such as the PubFig (Public Figures Face Database, Columbia University public figures face database) dataset, the CelebA (CelebFaces Attributes Dataset, a large-scale face attributes dataset) dataset, the Color FERET dataset, and the FDDB (Face Detection Data Set and Benchmark) dataset. A training face image may also be a face image obtained directly from a server database, or a face image acquired by a camera device. The face identity label uniquely identifies the face identity of a training face image. A convolutional neural network is a class of feedforward neural networks that includes convolutional computations and has a deep structure, comprising an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. Activation functions of convolutional neural networks include, but are not limited to, the rectified linear unit (ReLU), leaky ReLU (LReLU), parametric ReLU (PReLU), randomized ReLU (RReLU), exponential linear unit (ELU), sigmoid function, and hyperbolic tangent function. The loss function of the convolutional neural network may be a mean square error (MSE) loss function, an SVM (support vector machine) hinge loss function, a cross entropy loss function, and so on.
Specifically, a face recognition model is deployed in the terminal, and the face recognition model can be obtained by training a training face image in a server in advance as input of a convolutional neural network and a face identity label corresponding to the training face image as a label of the convolutional neural network. When the terminal acquires the face image to be recognized, the face image to be recognized is input into a face recognition model for face recognition, and the face identity matching degree is obtained, wherein the face image to be recognized can be normalized, and the face image after normalization is input into the face recognition model for recognition.
In the embodiment, the face recognition is performed through the deployed face recognition model, so that the efficiency and accuracy of the face recognition are improved.
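For illustration only, the following PyTorch sketch shows a CNN-based face recognition model of the kind mentioned above; the architecture and layer sizes are assumptions rather than the embodiment's exact network, and the face identity matching degree is read off as the probability the classifier assigns to the claimed identity.

```python
# A minimal sketch (assumed architecture): a CNN classifier over known face identities.
import torch
import torch.nn as nn

class FaceRecognitionModel(nn.Module):
    def __init__(self, num_identities: int):
        super().__init__()
        self.features = nn.Sequential(            # convolution + pooling backbone
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 28 * 28, num_identities)  # fully connected output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FaceRecognitionModel(num_identities=100)
image = torch.rand(1, 3, 112, 112)                # normalized face image to be recognized
probs = model(image).softmax(dim=1)
claimed_identity = 7                               # hypothetical face identity label index
identity_matching_degree = probs[0, claimed_identity].item()
print(identity_matching_degree)
```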
In one embodiment, as shown in fig. 2, step 104, performing image segmentation on the face image to be recognized to obtain a face region, includes:
step 202, inputting the face image to be recognized into a segmentation feature extraction network of an image segmentation model to obtain image segmentation features.
The image segmentation model is used for segmenting a face image to obtain a face region, and is obtained by training a deep neural network model according to a training face image with face boundary labels, wherein a cross entropy loss function is used as a loss function, and a ReLU function is used as an activation function. The segmentation feature extraction network refers to a network for extracting image features of an image to be identified. The image segmentation feature is a feature image obtained by convolving the face image to be identified through a segmentation feature extraction network.
Specifically, the terminal inputs the face image to be identified into a segmentation feature extraction network of an image segmentation model, and the segmentation feature extraction network carries out convolution operation on the face image to be identified to obtain image segmentation features.
Step 204, inputting the image segmentation features into an image classification network of the image segmentation model to obtain face pixel points and non-face pixel points, and determining a face area according to the face pixel points and the non-face pixel points.
The image classification network is used for classifying the image and may be a classification network obtained by using an SVM algorithm. The face pixel points are the pixel points that belong to the face region in the face image to be recognized, and the non-face pixel points are the pixel points that do not belong to the face region in the face image to be recognized.
Specifically, the server inputs the image segmentation features into an image classification network of an image segmentation model to classify, so as to obtain each face pixel point and each non-face pixel point, and the area formed by each face pixel point is determined as a face area.
In a specific embodiment, as shown in fig. 3, which is a structural diagram of the image segmentation model, the terminal inputs the face image to be recognized into a CNN (convolutional neural network) for feature extraction: the CNN convolves the input face image through the convolutional layers, pools the result through the pooling layers, and finally outputs a feature vector through the fully connected layer, which is classified by the SVM to obtain the classification result, namely the face region. For example, fig. 4 is a schematic view of the face region obtained after the face image to be recognized is segmented by the image segmentation model.
In the above embodiment, the image segmentation features are extracted by the segmentation feature extraction network of the image segmentation model and classified by the image classification network to obtain the face pixel points, and the face region is determined according to the face pixel points, so that the obtained face region is more accurate.
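For illustration only, the following sketch shows a segmentation model of the general shape described above. It substitutes a convolutional per-pixel classifier for the SVM classification network and uses an assumed backbone, so it is a sketch of the idea rather than the embodiment's exact network.

```python
# A minimal sketch: a backbone extracts image segmentation features, a per-pixel head
# labels each pixel as face / non-face, and the face region is cropped from the mask.
import torch
import torch.nn as nn

class ImageSegmentationModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_extractor = nn.Sequential(     # segmentation feature extraction network
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.pixel_classifier = nn.Conv2d(16, 1, 1)  # per-pixel face / non-face score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.pixel_classifier(self.feature_extractor(x)))

def crop_face_region(image: torch.Tensor, face_prob: torch.Tensor, threshold: float = 0.5):
    """Crop the bounding box of all pixels classified as face pixel points."""
    mask = face_prob[0, 0] > threshold
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if ys.numel() == 0:
        return None                                  # no face pixel points found
    y0, y1 = int(ys.min()), int(ys.max()) + 1
    x0, x1 = int(xs.min()), int(xs.max()) + 1
    return image[:, :, y0:y1, x0:x1]

model = ImageSegmentationModel()
image = torch.rand(1, 3, 112, 112)                   # face image to be recognized
face_region = crop_face_region(image, model(image))
```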
In one embodiment, as shown in FIG. 5, the training of the image segmentation model includes the steps of:
step 502, a training face image with face boundary labels is obtained.
The face boundary labels are used for identifying face parts in the training face images.
Specifically, the terminal can acquire a training face image with face boundary labels from a third-party provider of training face images, acquire a training face image and label the face boundary itself to obtain the training face image with face boundary labels, or acquire a stored training face image with face boundary labels from the server database.
Step 504, inputting the training face image into an initial segmentation feature extraction network of an initial image segmentation model to obtain initial image segmentation features.
The initial image segmentation model refers to an image segmentation model initialized by model parameters. The initial segmentation feature extraction network refers to a network parameter initialized segmentation feature extraction network. The initial image segmentation feature refers to an image segmentation feature calculated using the initialized network parameters.
Specifically, the server establishes an initial image segmentation model, inputs the training face image into an initial segmentation feature extraction network of the initial image segmentation model for feature extraction, and obtains initial image segmentation features.
Step 506, inputting the initial image segmentation feature into an initial image classification network of the initial image segmentation model to obtain an initial face pixel point and an initial non-face pixel point, and determining an initial face region image according to the initial face pixel point and the initial non-face pixel point.
The initial image classification network refers to an image classification network initialized by network parameters. The initial face pixel points are face pixel points obtained by using an initial image classification network for classification and identification. The initial non-face pixel points are non-face pixel points obtained through classification and identification by using an initial image classification network. The initial face region image is a face region image obtained by classifying and identifying the initial image segmentation model.
Specifically, the server inputs the initial image segmentation features output by the initial segmentation feature extraction network into the initial image classification network of the initial image segmentation model for classification to obtain the initial face pixel points and the initial non-face pixel points, and an initial face region image is segmented from the training face image according to the initial face pixel points and the initial non-face pixel points.
And step 508, calculating the region error information of the initial face region and the face boundary label until the region error information obtained by training meets the preset training completion condition, and obtaining an image segmentation model.
The region error information is used for representing the error between the initial face region and the face boundary label. The preset training completion condition refers to a preset training completion condition of the image segmentation model, and may be that the region error information is smaller than a preset threshold value, or that the training iteration number reaches the maximum iteration at this time.
Specifically, the terminal calculates the region error information of the initial face region and the face boundary label by using the loss function, judges whether the region error information accords with the preset training completion condition, and when the region error information does not accord with the preset training completion condition, updates the model parameters of the initial image segmentation model by using the region error information, namely, carries out back propagation calculation on the initial image segmentation model by using a back propagation algorithm to obtain an updated image segmentation model, and then carries out iterative training again by using the updated image segmentation model, namely, returns to step 502 to continue execution until the region error information obtained by training accords with the preset training completion condition, and takes the image segmentation model updated last time as the image segmentation model obtained by completing training. And the terminal deploys and uses the trained image segmentation model.
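For illustration only, the following training-loop sketch mirrors the procedure above under stated assumptions: a binary cross entropy loss stands in for the region error between the predicted face pixels and the face boundary labels, Adam is an assumed optimizer, the training pair is synthetic, and training stops when a preset completion condition is met.

```python
# Training sketch for a stand-in initial image segmentation model.
import torch
import torch.nn as nn

model = nn.Sequential(                                  # stand-in initial image segmentation model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()                                  # cross entropy style region error

# Hypothetical training pair: image and its face boundary label as a 0/1 mask.
train_image = torch.rand(1, 3, 112, 112)
boundary_label = (torch.rand(1, 1, 112, 112) > 0.5).float()

max_iterations, error_threshold = 1000, 0.05            # preset training completion condition
for step in range(max_iterations):
    region_error = loss_fn(model(train_image), boundary_label)
    if region_error.item() < error_threshold:           # completion condition met
        break
    optimizer.zero_grad()
    region_error.backward()                              # back propagation to update parameters
    optimizer.step()
```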
In one embodiment, the terminal may obtain a training face image with face boundary labels and send it to the server. The server receives the training face image with face boundary labels, inputs it into the initial segmentation feature extraction network of the initial image segmentation model to obtain initial image segmentation features, inputs the initial image segmentation features into the initial image classification network of the initial image segmentation model to obtain initial face pixel points and initial non-face pixel points, and determines an initial face region image according to the initial face pixel points and the initial non-face pixel points. The region error information between the initial face region and the face boundary labels is calculated until the region error information obtained by training meets the preset training completion condition, yielding the image segmentation model, which is then deployed to the terminal for use. Training the image segmentation model in the server can improve training efficiency. In one embodiment, the terminal may invoke an image segmentation model deployed in the server through a call interface.
In the embodiment, the training is performed by using the training face image with the face boundary label to obtain the image segmentation model, and then the image segmentation model is deployed and used, so that the efficiency of obtaining the face recognition result is improved.
In one embodiment, step 106, recognizing the face pose corresponding to the face region to obtain face pose information, includes:
inputting the face region image into a face pose recognition model for recognition to obtain the face pose information, where the face pose recognition model is obtained by training a multi-task regression model with training face region images and the corresponding face pose labels.
A training face region image is a face region image used for training the face pose recognition model, and may be an image cut out from a training face image. The face pose label identifies the face pose information corresponding to the face region image. The multi-task regression model is a model obtained by multi-task learning with a deep neural network; multi-task learning is a machine learning method that learns multiple related tasks together based on a shared representation.
Specifically, a face pose recognition model is deployed in the terminal. The face pose recognition model may be trained in advance by the server using a multi-task regression model according to the training face region images and the corresponding face pose labels and then deployed to the terminal, or it may be trained by the terminal itself in the same way; the loss function used during training is a cross entropy loss function and the activation function is the ReLU function. The terminal can also call a face pose recognition model deployed in the server through a call interface. The terminal inputs the face region image into the face pose recognition model for recognition to obtain the face pose information.
In this embodiment, face pose recognition is performed through the face pose recognition model to obtain the face pose information, which improves the efficiency of obtaining the face pose information.
In one embodiment, the face pose information includes distance pose information and angle pose information, and the face pose recognition model includes a pose feature extraction network, a distance pose recognition network, and an angle pose recognition network.
As shown in fig. 6, inputting the face region image into the face pose recognition model for recognition to obtain the face pose information includes:
step 602, inputting the face region image into a gesture feature extraction network to extract features, and obtaining face gesture features.
The distance posture information refers to distance position information between a face in a face image to be recognized and the camera device. The angle posture information refers to the intersection position information between the face and the camera device in the face image to be recognized. The pose feature extraction network is a convolutional neural network for extracting image pose features from a face region image. The distance gesture recognition network is a full-connected neural network for recognizing distance gesture information. The angle gesture recognition network is used for recognizing the full-connected neural network of the angle gesture information.
Specifically, the terminal inputs the face region image into a gesture feature extraction network to extract features, so as to obtain face gesture features.
Step 604, inputting the face gesture features into a distance gesture recognition network for recognition to obtain distance gesture information, and simultaneously inputting the face gesture features into an angle gesture recognition network for recognition to obtain angle gesture information.
Specifically, the terminal inputs the face gesture features extracted through the gesture feature extraction network into the distance gesture recognition network for recognition to obtain distance gesture information, and simultaneously inputs the face gesture features into the angle gesture recognition network for recognition to obtain angle gesture information.
In a specific embodiment, as shown in fig. 7, which is a schematic structural diagram of the face pose recognition model: the terminal inputs the face region image into a CNN network, and the face pose features are obtained through the convolutional layers, the pooling layers and the fully connected layer. The face pose features are then fed into a fully connected distance pose recognition network and a fully connected angle pose recognition network respectively; each branch produces an output through a fully connected layer, a Dropout layer (to prevent the model from overfitting) and a ReLU layer (a nonlinear activation layer), and the output is regressed by a multi-layer perceptron to obtain the corresponding result, namely the distance pose information and the angle pose information. For example, as shown in fig. 8, which is a schematic diagram of the obtained face pose information, after face pose recognition is performed on the face region image, the obtained distance pose information is 30 cm and the angle pose information is 30 degrees.
In one embodiment, the face pose information includes, but is not limited to, distance pose information, angle pose information, and three-dimensional coordinate pose information, where the three-dimensional coordinate pose information refers to the orientation of the face in three-dimensional space, including pitch angle, yaw angle, and roll angle coordinate information. The face pose recognition model then includes a pose feature extraction network, a distance pose recognition network, an angle pose recognition network, and a three-dimensional coordinate pose recognition network. The terminal inputs the face pose features into the distance pose recognition network to obtain distance pose information, into the angle pose recognition network to obtain angle pose information, and into the three-dimensional coordinate pose recognition network to obtain three-dimensional coordinate pose information, where the three-dimensional coordinate pose recognition network is used for recognizing the three-dimensional coordinate pose information in the face region image.
In this embodiment, feature extraction is performed by the pose feature extraction network to obtain the face pose features, which are then recognized by the distance pose recognition network and the angle pose recognition network to obtain the distance pose information and the angle pose information, improving the accuracy and efficiency of the obtained distance pose information and angle pose information.
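For illustration only, the following sketch shows the multi-task structure of fig. 7 with an assumed backbone and assumed head sizes: a shared pose feature extraction network feeding a distance pose head and an angle pose head, each built from fully connected, Dropout and ReLU layers.

```python
# A minimal multi-task sketch: shared pose features with two regression heads.
import torch
import torch.nn as nn

class FacePoseRecognitionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.pose_features = nn.Sequential(             # pose feature extraction network
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 56 * 56, 128), nn.ReLU(),
        )
        def head() -> nn.Sequential:                     # fully connected recognition branch
            return nn.Sequential(nn.Linear(128, 64), nn.Dropout(0.5), nn.ReLU(), nn.Linear(64, 1))
        self.distance_head = head()                      # distance pose recognition network
        self.angle_head = head()                         # angle pose recognition network

    def forward(self, face_region: torch.Tensor):
        feats = self.pose_features(face_region)
        return self.distance_head(feats), self.angle_head(feats)

model = FacePoseRecognitionModel()
face_region = torch.rand(1, 3, 112, 112)
distance_cm, angle_deg = model(face_region)              # e.g. ~30 cm and ~30 degrees after training
```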
In one embodiment, as shown in FIG. 9, the training of the face pose recognition model includes the steps of:
step 902, obtaining training data, wherein the training data comprises face region images and corresponding face pose labels.
Specifically, the terminal may acquire the training data directly from the server. The terminal may also collect face images to obtain the face region images while recording the face pose information at collection time as the face pose labels, or obtain the training data from a training data provider over the internet.
Step 904, inputting the face area image into an initial pose feature extraction network of an initial face pose recognition model to extract features, and obtaining initial face pose features.
The initial face pose recognition model is a face pose recognition model with initialized model parameters, and the initial pose feature extraction network is a pose feature extraction network with initialized network parameters. The initial face pose features are the features obtained by operating on the face region image with the initialized network parameters of the initial pose feature extraction network.
Specifically, the terminal establishes an initial face pose recognition model, and then inputs the face region image into the initial pose feature extraction network of the initial face pose recognition model for feature extraction to obtain the initial face pose features.
Step 906, inputting the initial face pose features into the initial distance pose recognition network of the initial face pose recognition model for recognition to obtain initial distance pose information, at the same time inputting the initial face pose features into the initial angle pose recognition network of the initial face pose recognition model for recognition to obtain initial angle pose information, and obtaining the initial face pose information according to the initial distance pose information and the initial angle pose information.
The initial distance pose recognition network is a distance pose recognition network with initialized network parameters, and the initial distance pose information is the distance pose information computed with the initialized network parameters of the initial distance pose recognition network. Likewise, the initial angle pose recognition network is an angle pose recognition network with initialized network parameters, and the initial angle pose information is the angle pose information computed with the initialized network parameters of the initial angle pose recognition network.
Specifically, the server performs multi-task learning on the initial face pose features, that is, the initial distance pose information and the initial angle pose information are obtained through the initial distance pose recognition network and the initial angle pose recognition network, and the initial distance pose information and the initial angle pose information are then taken as the initial face pose information.
Step 908, calculating pose error information between the initial face pose information and the face pose label until the pose error information obtained by training meets the preset pose error condition, obtaining the face pose recognition model.
The pose error information is the error between the initial face pose information and the face pose label. The preset pose error condition is that the pose error information is smaller than a preset pose error threshold or that the maximum number of training iterations is reached.
Specifically, the terminal calculates the pose error information between the initial face pose information and the face pose label using a preset loss function and judges whether the pose error information meets the preset pose error condition. When it does not, the terminal updates the model parameters of the initial face pose recognition model using a back propagation algorithm to obtain an updated face pose recognition model, and continues iterative training with the updated model, that is, returns to step 904, until the pose error information obtained by training meets the preset pose error condition; the face pose recognition model updated in the last iteration is taken as the final face pose recognition model.
In this embodiment, training is performed with the training data in advance to obtain the face pose recognition model, which is then deployed so that it can be used directly, improving efficiency.
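For illustration only, the following sketch shows joint training of the two pose heads under stated assumptions: mean squared error is used here as a stand-in regression loss for the pose error, the training sample is synthetic, and the preset pose error condition is an assumed threshold or a maximum number of iterations.

```python
# Training sketch for a stand-in two-head pose model with a combined pose error.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 64), nn.ReLU())
distance_head, angle_head = nn.Linear(64, 1), nn.Linear(64, 1)
params = list(backbone.parameters()) + list(distance_head.parameters()) + list(angle_head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical training sample: face region image with distance (cm) and angle (deg) labels.
face_region = torch.rand(1, 3, 112, 112)
distance_label, angle_label = torch.tensor([[30.0]]), torch.tensor([[30.0]])

max_iterations, pose_error_threshold = 1000, 0.01        # assumed preset pose error condition
for step in range(max_iterations):
    feats = backbone(face_region)
    pose_error = loss_fn(distance_head(feats), distance_label) + loss_fn(angle_head(feats), angle_label)
    if pose_error.item() < pose_error_threshold:
        break
    optimizer.zero_grad()
    pose_error.backward()                                 # back propagation over both tasks
    optimizer.step()
```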
In one embodiment, as shown in fig. 10, before step 102, that is, before acquiring the face image to be recognized and obtaining the corresponding face identity matching degree according to the face image to be recognized, the method further includes:
step 1002, obtaining each piece of historical face pose information corresponding to the user identifier when the target operation was performed.
The user identifier is used to uniquely identify the terminal corresponding to the user. Historical face pose information refers to the face pose information recorded when the terminal corresponding to the user identifier performed the target operation in the past.
Specifically, the terminal may obtain each piece of historical face pose information corresponding to the user identifier from the server database, or read the stored information from its own memory; that is, every time the terminal performs the target operation, it collects and stores the face pose information. The terminal may also obtain the information over the Internet from a service that provides historical face pose information. Each execution of the target operation under the user identifier corresponds to one piece of historical face pose information.
Step 1004, determining a face pose total interval corresponding to the user identifier according to each piece of historical face pose information.
The face pose total interval represents the range covered by the historical face pose information.
Specifically, when the historical face pose information is historical distance pose information, the terminal compares the values to find the largest and smallest historical distance pose information and obtains the face distance total interval from them. When the historical face pose information is historical angle pose information, the terminal compares the values to find the largest and smallest historical angle pose information and obtains the face angle total interval from them.
In one embodiment, when the historical face pose information is three-dimensional coordinate pose information, the terminal compares the three-dimensional coordinates and determines the maximum and minimum values of each coordinate dimension to obtain the face three-dimensional coordinate total interval.
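As an illustration, the total intervals can be derived from the minimum and maximum of the historical values. The sketch below assumes each historical record is a (distance in cm, angle in degrees) pair; the function and variable names are illustrative only.

```python
# Sketch: derive face pose total intervals from historical pose records.
def total_intervals(history):
    distances = [d for d, _ in history]
    angles = [a for _, a in history]
    # Total interval = [minimum, maximum] of the observed historical values.
    face_distance_total = (min(distances), max(distances))
    face_angle_total = (min(angles), max(angles))
    return face_distance_total, face_angle_total

history = [(15.0, 32.0), (22.5, 45.0), (48.0, 58.5)]
print(total_intervals(history))  # ((15.0, 48.0), (32.0, 58.5))
```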
Step 1006, dividing the face pose total interval to obtain each face pose sub-interval.
A face pose sub-interval is a range obtained by dividing the face pose total interval and is therefore narrower than the total interval.
Specifically, the terminal divides the face pose total interval according to a preset pose dividing condition to obtain each face pose sub-interval, where the preset pose dividing condition is configured in advance, for example a fixed sub-interval size.
Step 1008, determining the face pose matching degree corresponding to each face pose sub-interval according to each piece of historical face pose information, and storing each face pose sub-interval in association with its face pose matching degree.
Specifically, each piece of historical face pose information is matched against the face pose sub-intervals to determine the sub-interval it belongs to. The face pose matching degree of each sub-interval is then determined from the total number of pieces of historical face pose information and the number falling into that sub-interval, and each face pose sub-interval is stored in association with its matching degree.
In the above embodiment, the face pose total interval corresponding to the user identifier is determined from the historical face pose information, the total interval is divided into face pose sub-intervals, and the face pose matching degree of each sub-interval is determined from the historical face pose information, which makes the resulting matching degrees more accurate. Each face pose sub-interval is then stored in association with its matching degree, which facilitates subsequent use.
In one embodiment, the face pose total interval includes a face distance total interval and a face angle total interval.
As shown in fig. 11, step 1006, dividing the face pose total interval to obtain each face pose sub-interval includes:
step 1102, dividing the total face distance interval to obtain each face distance subinterval.
The human face distance subinterval is an interval obtained by dividing a human face distance total interval.
Specifically, the face distance total interval may be divided according to a preset distance division size. For example, with a division size of 4 cm, a face distance total interval from 10 cm to 50 cm is divided into 10 sub-intervals.
Step 1104, dividing the total face angle interval to obtain each face angle subinterval.
The face angle subinterval is an interval obtained by dividing a total face angle interval.
Specifically, the face angle total interval may be divided according to a preset angle division size. For example, with a division size of 3 degrees, a face angle total interval from 30 degrees to 60 degrees is divided into 10 sub-intervals.
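As an illustration of fixed-size division, the sketch below produces the sub-interval edges for the 4 cm and 3 degree examples above; numpy is assumed and the helper name is illustrative.

```python
# Sketch: divide a total interval into fixed-size sub-intervals (bin edges).
import numpy as np

def sub_interval_edges(lo, hi, step):
    return np.arange(lo, hi + step, step)

print(sub_interval_edges(10, 50, 4))  # 10 cm to 50 cm in 4 cm steps: 10 sub-intervals
print(sub_interval_edges(30, 60, 3))  # 30 to 60 degrees in 3 degree steps: 10 sub-intervals
```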
In one embodiment, when the terminal divides the total three-dimensional coordinate section of the face, the terminal may divide the section according to preset coordinate values, for example, may divide the section according to 4 coordinate values.
Step 1106, combining each face distance subinterval and each face angle subinterval to obtain a distribution diagram of the face gesture subinterval.
The distribution map is used for representing the distribution state of the face gesture subinterval.
Specifically, the terminal combines each face distance sub-interval with each face angle sub-interval, that is, the face distance sub-intervals and the face angle sub-intervals together span a planar area, which yields the distribution map of the face pose sub-intervals.
In the above embodiment, the face distance total interval and the face angle total interval are divided into face distance sub-intervals and face angle sub-intervals, which are then combined into the distribution map of the face pose sub-intervals, so the resulting face pose sub-intervals are more accurate.
In one embodiment, as shown in fig. 12, step 1106, combining each face distance subinterval and each face angle subinterval to obtain a distribution map of the face pose subinterval includes:
Step 1202, a plane area is established according to the total face distance interval and the total face angle interval.
The plane area refers to a plane formed by a total face distance interval and a total face angle interval.
Specifically, the terminal may set the face distance total section as a plane abscissa range, and the face angle total section as a plane ordinate range, so as to establish a plane area. The terminal can also set up a plane area by taking the total face angle interval as a plane abscissa range and the total face distance interval as a plane ordinate range.
In step 1204, area division is performed in the planar area according to each face distance subinterval and each face angle subinterval, so as to obtain a planar sub-area corresponding to each face pose subinterval.
Step 1206, composing a distribution map of the face pose subinterval according to each plane subzone.
The plane subarea refers to an area corresponding to the face gesture subarea, and is part of the plane area.
Specifically, the terminal may divide the plane abscissa range of the plane area according to the face distance sub-intervals and the plane ordinate range according to the face angle sub-intervals, obtaining a plane sub-area for each face pose sub-interval. Alternatively, the terminal may divide the abscissa range according to the face angle sub-intervals and the ordinate range according to the face distance sub-intervals. Finally, the terminal composes the distribution map of the face pose sub-intervals from the plane sub-areas.
In a specific embodiment, as shown in fig. 13, a distribution map of face pose sub-intervals is created as follows. With a face distance total interval of 10 cm to 60 cm and a face angle total interval of 30 degrees to 60 degrees, a plane area is established with the face distance total interval as the ordinate and the face angle total interval as the abscissa. The face distance total interval is divided in 10 cm steps and the face angle total interval in 6 degree steps, giving the plane sub-area corresponding to each face pose sub-interval, namely each square in the figure, and thus the distribution map of the face pose sub-intervals. When the face pose sub-interval corresponding to a piece of face pose information is to be determined, the corresponding intervals are determined from the face distance information and the face angle information, the corresponding square in the distribution map is located, and the corresponding face pose sub-interval is thereby obtained.
In this embodiment, the distribution map of the face pose sub-intervals is obtained by establishing a plane area and then dividing it into regions, which improves the accuracy of the distribution map.
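As an illustration, the distribution map can be represented as a grid in which each cell (plane sub-area) pairs one face distance sub-interval with one face angle sub-interval. The sketch below uses the fig. 13 example (10 cm to 60 cm in 10 cm steps, 30 to 60 degrees in 6 degree steps); the dictionary representation is an assumption for illustration.

```python
# Sketch: build the distribution map as a grid of plane sub-areas.
def build_distribution_map(dist_edges, angle_edges):
    grid = []
    for i in range(len(dist_edges) - 1):          # distance along one axis
        row = []
        for j in range(len(angle_edges) - 1):     # angle along the other axis
            row.append({"distance": (dist_edges[i], dist_edges[i + 1]),
                        "angle": (angle_edges[j], angle_edges[j + 1])})
        grid.append(row)
    return grid

# Example matching fig. 13.
dist_edges = [10, 20, 30, 40, 50, 60]
angle_edges = [30, 36, 42, 48, 54, 60]
distribution_map = build_distribution_map(dist_edges, angle_edges)
print(len(distribution_map), len(distribution_map[0]))  # 5 x 5 plane sub-areas
```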
In one embodiment, as shown in fig. 14, step 1008, determining the face pose matching degree corresponding to each face pose subinterval according to each historical face pose information includes:
Step 1402, counting the total number of pieces of historical face pose information.
Step 1406, determining face pose sub-intervals matched with the historical face pose information, and counting the number of the historical face pose information in each face pose sub-interval.
Specifically, the terminal counts the total number of pieces of historical face pose information corresponding to the user identifier, then determines the face pose sub-interval corresponding to each piece, that is, assigns each piece of historical face pose information to its sub-interval, and counts the number of pieces falling into each face pose sub-interval.
Step 1408, a face pose matching degree corresponding to each face pose sub-section is calculated according to the number and total number of the historical face pose information in each face pose sub-section.
Specifically, for each face pose sub-interval the terminal calculates the ratio of the number of pieces of historical face pose information in that sub-interval to the total number, and uses the ratio as the face pose matching degree of the sub-interval. The terminal may also perform the calculation using the following formula (1): face pose matching degree = number of pieces of historical face pose information in the sub-interval / total number of pieces of historical face pose information.
In a specific embodiment, as shown in fig. 13, 40 pieces of historical face pose information fall in the grid cell covering 30 to 40 cm and 36 to 42 degrees, and the total number of pieces of historical face pose information is 100. The face pose matching degree of that cell is therefore 40/100 = 0.4. The face pose matching degree of every cell in the distribution map is calculated in the same way, giving the matching degree corresponding to each face pose sub-interval.
In the above embodiment, the total number of pieces of historical face pose information and the number falling into each face pose sub-interval are counted, so the face pose matching degree corresponding to each sub-interval can be calculated efficiently.
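As an illustration, the matching degree of each cell can be computed as the ratio of the historical records falling in that cell to the total. The helper names below (cell_index, matching_degrees) are illustrative and reuse the grid edges from the earlier sketch.

```python
# Sketch: face pose matching degree per sub-interval = count in cell / total count.
from collections import Counter

def cell_index(dist, angle, dist_edges, angle_edges):
    """Locate the plane sub-area (cell) a pose falls into; None if out of range."""
    for i in range(len(dist_edges) - 1):
        for j in range(len(angle_edges) - 1):
            if dist_edges[i] <= dist < dist_edges[i + 1] and \
               angle_edges[j] <= angle < angle_edges[j + 1]:
                return (i, j)
    return None

def matching_degrees(history, dist_edges, angle_edges):
    counts = Counter(cell_index(d, a, dist_edges, angle_edges) for d, a in history)
    total = len(history)
    # Stored in association with each face pose sub-interval for later lookup.
    return {cell: n / total for cell, n in counts.items() if cell is not None}
```

With 40 of 100 historical records in the 30 to 40 cm, 36 to 42 degree cell, that cell's matching degree is 0.4, as in the example above.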
In one embodiment, the face pose information includes a face distance pose parameter and a face angle pose parameter. As shown in fig. 15, step 108, determining a target face pose subinterval matched with the face pose information includes:
step 1502, an established profile is acquired.
Specifically, the terminal may obtain the established distribution map of face pose sub-intervals from the server database, or read the stored distribution map directly from its memory.
Step 1504, determining a target plane subarea from the distribution diagram according to the face distance posture parameter and the face angle posture parameter in the face posture information.
The face distance posture parameter refers to a specific distance value between the face and the camera device. The face angle posture parameter refers to an angle value between the face and the camera device. The target plane sub-area refers to a plane sub-area corresponding to face posture information of a face image to be recognized.
Specifically, the face distance sub-interval and the face angle sub-interval in which the face lies are determined from the face distance pose parameter and the face angle pose parameter in the face pose information, and the target plane sub-area is then determined from the distribution map according to these two sub-intervals. For example, if the face distance pose parameter in the face pose information is 18 cm and the face angle pose parameter is 45 degrees, the face distance sub-interval is the 10 cm to 20 cm interval and the face angle sub-interval is the 42 to 48 degree interval, from which the target plane sub-area is determined in the distribution map.
Step 1506, acquiring a face pose subinterval corresponding to the target plane sub-area as a target face pose subinterval.
Specifically, the terminal acquires a face gesture subinterval corresponding to the target plane sub-area as a target face gesture subinterval.
In this embodiment, the plane sub-area corresponding to the face pose information is determined from the established distribution map, and the face pose sub-interval corresponding to that target plane sub-area is taken as the target face pose sub-interval, which improves the efficiency of obtaining the target face pose sub-interval.
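As an illustration, once the matching degrees are stored per cell, determining the target face pose sub-interval and its matching degree is a lookup. The sketch below reuses cell_index and matching_degrees from the previous sketch; the fallback value for cells never seen in the history is an assumption, since this application does not specify that case.

```python
# Sketch: look up the face pose matching degree for a new (distance, angle) pose.
def face_pose_matching_degree(dist, angle, degrees, dist_edges, angle_edges,
                              default=0.0):
    cell = cell_index(dist, angle, dist_edges, angle_edges)  # target plane sub-area
    # Cells with no historical records fall back to a default degree (assumption).
    return degrees.get(cell, default)
```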
In one embodiment, as shown in fig. 16, step 110, determining a face recognition result corresponding to a face image to be recognized based on the face identity matching degree and the face pose matching degree includes:
Step 1602, acquiring a face identity weight corresponding to the face identity matching degree and a face pose weight corresponding to the face pose matching degree.
The face identity weight is a weight corresponding to a preset face identity matching degree, and the face pose weight is a weight corresponding to a preset face pose matching degree.
Specifically, the terminal obtains a face identity weight corresponding to the face identity matching degree and a face posture weight corresponding to the face posture matching degree from the memory.
Step 1604, performing weighted calculation according to the face identity weight and the face identity matching degree to obtain the face identity weighted matching degree.
In step 1606, a weighted computation is performed according to the face pose weight and the face pose matching degree, so as to obtain the face pose weighted matching degree.
Step 1608, obtaining the target face matching degree according to the face identity weighted matching degree and the face pose weighted matching degree, and, when the target face matching degree exceeds a preset threshold, obtaining a face recognition result indicating that face recognition has passed for the face image to be recognized.
The face identity weighted matching degree refers to a matching degree obtained after weighting the face identity matching degree. The face pose weighted matching degree refers to a matching degree obtained after weighting the face pose matching degree. The target face matching degree is used for representing the matching degree of the face image to be recognized and the face identity. The preset threshold value is a preset target face matching degree threshold value.
Specifically, the terminal may perform a weighted calculation on the face identity weight and the face identity matching degree to obtain the face identity weighted matching degree, and a weighted calculation on the face pose weight and the face pose matching degree to obtain the face pose weighted matching degree. The two weighted matching degrees are then averaged to obtain the target face matching degree. The terminal judges whether the target face matching degree exceeds the preset threshold: when it does, the face recognition result for the face image to be recognized is that face recognition has passed; when it does not, the result is that face recognition has failed. In one embodiment, the target face matching degree may also be calculated directly using the following formula (2).
Target face matching degree = face pose matching degree × w1 + face identity matching degree × w2    Formula (2)
Wherein w1 is the face pose weight, and w2 is the face identity weight.
In the above embodiment, the face pose matching degree and the face identity matching degree are weighted separately and then combined into the final target face matching degree, from which the face recognition result is obtained, so the recognition result is more accurate.
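As an illustration of formula (2), the sketch below fuses the two matching degrees with weights and compares the result with a threshold. The equal default weights and the 0.9 threshold follow the averaging and threshold used in the application scenes below; they are configuration choices, not fixed by this application.

```python
# Sketch of formula (2): weighted fusion of pose and identity matching degrees.
def face_recognition_result(identity_degree, pose_degree,
                            w_identity=0.5, w_pose=0.5, threshold=0.9):
    target_degree = pose_degree * w_pose + identity_degree * w_identity
    return target_degree, target_degree > threshold

# With equal weights this reduces to the average used in the application scenes:
print(face_recognition_result(0.95, 0.95))  # about (0.95, True): recognition passes
print(face_recognition_result(0.95, 0.60))  # about (0.775, False): recognition fails
```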
In a specific embodiment, as shown in fig. 17, the face recognition method specifically includes the following steps:
1702, acquiring each piece of history face gesture information corresponding to the user identifier when the user identifier executes the target operation, and determining a face gesture total section corresponding to the user identifier according to each piece of history face gesture information, wherein the face gesture total section comprises a face distance total section and a face angle total section.
1704, dividing a face distance total section and a face angle total section to obtain face distance subsections and face angle subsections, establishing a plane area according to the face distance total section and the face angle total section, dividing areas in the plane area according to the face distance subsections and the face angle subsections to obtain plane subareas corresponding to the face gesture subsections, and forming a distribution diagram of the face gesture subsections according to the plane subareas.
1706, counting the total number of the historical face pose information, determining face pose sub-intervals matched with the historical face pose information, counting the number of the historical face pose information in the face pose sub-intervals, calculating the face pose matching degree corresponding to the face pose sub-intervals according to the number and the total number of the historical face pose information in the face pose sub-intervals, and storing the face pose sub-intervals and the corresponding face pose matching degree in an associated mode.
1708, acquiring a face image to be recognized, and inputting the face image to be recognized into a face recognition model to perform face recognition to obtain the face identity matching degree.
And 1710, inputting the face image to be recognized into a segmentation feature extraction network of the image segmentation model to obtain image segmentation features, inputting the image segmentation features into an image classification network of the image segmentation model to obtain face pixel points and non-face pixel points, and determining a face region according to the face pixel points and the non-face pixel points.
1712, inputting the face region image into a gesture feature extraction network in the face gesture recognition model to extract features, so as to obtain face gesture features. The face gesture features are input into a distance gesture recognition network in the face gesture recognition model to be recognized, distance gesture information is obtained, and meanwhile, the face gesture features are input into an angle gesture recognition network in the face gesture recognition model to be recognized, so that angle gesture information is obtained.
1714, acquiring the established distribution map, determining a target plane sub-area from the distribution map according to the face distance pose parameter and the face angle pose parameter in the face pose information, and acquiring the face pose sub-interval corresponding to the target plane sub-area as the target face pose sub-interval. The face pose matching degree corresponding to the target face pose sub-interval is then acquired according to the stored association between each face pose sub-interval and its face pose matching degree.
1716, acquiring a face identity weight corresponding to the face identity matching degree and a face pose weight corresponding to the face pose matching degree, performing weighted calculation according to the face identity weight and the face identity matching degree to obtain a face identity weighted matching degree, performing weighted calculation according to the face pose weight and the face pose matching degree to obtain a face pose weighted matching degree, and obtaining a target face matching degree according to the face identity weighted matching degree and the face pose weighted matching degree.
1718, when the matching degree of the target face exceeds a preset threshold, obtaining a face recognition result corresponding to the face image to be recognized as the passing of the face recognition, and executing corresponding target operation according to the face recognition passing result.
The application also provides an application scene, and the application scene applies the face recognition method. Specifically, the application of the face recognition method in the application scene is as follows:
the face recognition method is applied to an unlocking scene. When face recognition unlocking is required, the front camera of a smart phone collects the face image to be recognized; fig. 18 is a schematic diagram of a user unlocking the smart phone. A face identity matching degree of 0.95 is obtained from the face image to be recognized. Image segmentation is performed on the face image to obtain the face region, and the face pose corresponding to the face region is recognized to obtain face pose information, namely an angle of 30 degrees and a distance of 15 cm. The target face pose sub-interval matching the face pose information is determined, and the corresponding face pose matching degree of 0.95 is obtained from it. Based on the face identity matching degree and the face pose matching degree, the face recognition result for the face image to be recognized is determined: the target face matching degree is 0.95, which exceeds the preset threshold of 0.9, so face recognition passes and the person unlocking the smart phone is recognized as the user. The smart phone then performs the unlocking operation. Fig. 19 is an interface schematic diagram of the smart phone being successfully unlocked through face recognition, where 1902 is the front camera of the smart phone and 1904 is the unlock-success indicator. In another embodiment, as shown in fig. 20, a schematic diagram of a user unlocking a smart phone, the face identity matching degree obtained by recognizing the face image to be recognized is 0.95, the face pose information is an angle of 150 degrees and a distance of 8 cm, and the face pose matching degree obtained is 0.6. The target face matching degree is therefore (0.95+0.6)/2 = 0.775, which does not exceed the preset threshold, so the face recognition result is that face recognition fails, and the smart phone prompts that the unlocking attempt is not by the owner and that the face pose is wrong.
The application further provides an application scene, and the application scene applies the face recognition method. Specifically, the application of the face recognition method in the application scene is as follows:
the face recognition method is applied to an access control scene; fig. 21 is a schematic diagram of this application scene, in which access control monitoring equipment collects the face image to be recognized, and fig. 22 shows the scene of the access control monitoring equipment collecting the face image. The face image to be recognized is sent to the server, which obtains a corresponding face identity matching degree of 0.95 from the image. Image segmentation is performed on the face image to obtain the face region, and the face pose corresponding to the face region is recognized to obtain face pose information, namely an angle of 110 degrees and a distance of 50 cm. The target face pose sub-interval matching the face pose information is determined, and the corresponding face pose matching degree of 0.97 is obtained from it. The face recognition result for the face image to be recognized is then determined from the face identity matching degree and the face pose matching degree: the target face matching degree is (0.95+0.97)/2 = 0.96, which exceeds the preset threshold of 0.90, so face recognition passes. The server then sends a door-opening instruction to the access control equipment, which receives the instruction and performs the door-opening operation.
It should be understood that, although the steps in the flowcharts of fig. 1, 2, 5, 6, 9-12, and 14-17 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1, 2, 5, 6, 9-12, and 14-17 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 23, a face recognition apparatus 2300 is provided, which may employ a software module or a hardware module, or a combination of both, as part of a computer device, the apparatus specifically comprising: identity matching module 2302, image segmentation module 2304, gesture recognition module 2306, gesture matching module 2308 and result determination module 2310, wherein:
The identity matching module 2302 is configured to obtain a face image to be identified, and obtain a corresponding face identity matching degree according to the face image to be identified;
an image segmentation module 2304, configured to perform image segmentation on a face image to be identified, to obtain a face area;
the gesture recognition module 2306 is configured to recognize a face gesture corresponding to the face region, to obtain face gesture information;
the gesture matching module 2308 is configured to determine a target face gesture subinterval matched with the face gesture information, and obtain a corresponding face gesture matching degree according to the target face gesture subinterval;
the result determining module 2310 is configured to determine a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree.
In one embodiment, the face recognition device 2300 further includes:
and the operation execution module is used for executing corresponding target operation according to the face recognition passing result when the face recognition result is the face recognition passing result.
In one embodiment, the identity matching module 2302 is further configured to input a face image to be identified into a face recognition model for face recognition, so as to obtain a face identity matching degree; the face recognition model is obtained by taking a training face image as input, taking face identity labels corresponding to the training face image as labels, and training by using a convolutional neural network.
In one embodiment, image segmentation module 2304 includes:
the segmentation feature obtaining module is used for inputting the face image to be identified into a segmentation feature extraction network of the image segmentation model to obtain image segmentation features;
the region determining module is used for inputting the image segmentation features into an image classification network of the image segmentation model to obtain face pixel points and non-face pixel points, and determining the face region according to the face pixel points and the non-face pixel points.
In one embodiment, the face recognition device 2300 further includes:
the image acquisition module is used for acquiring a training face image with face boundary marks;
the training module is used for inputting the training face image into an initial segmentation feature extraction network of the initial image segmentation model to obtain initial image segmentation features; inputting the initial image segmentation features into an initial image classification network of an initial image segmentation model to obtain initial face pixel points and initial non-face pixel points, and determining an initial face region image according to the initial face pixel points and the initial non-face pixel points;
the segmentation model obtaining module is used for calculating the region error information of the initial face region and the face boundary label until the region error information obtained by training accords with the preset training completion condition, and obtaining an image segmentation model.
In one embodiment, gesture recognition module 2306 includes:
the model identification unit is used for inputting the face area image into a face gesture identification model to identify so as to obtain face gesture information, and the face gesture identification model is obtained by training the face area image and the corresponding face gesture label through a multi-task regression model.
In one embodiment, the face pose information includes distance pose information and angle pose information, and the face pose recognition model includes a pose feature extraction network, a distance pose recognition network, and an angle pose recognition network;
the model identification unit is also used for inputting the facial region image into a gesture feature extraction network to extract features so as to obtain facial gesture features; the face gesture features are input into a distance gesture recognition network for recognition to obtain distance gesture information, and meanwhile, the face gesture features are input into an angle gesture recognition network for recognition to obtain angle gesture information.
In one embodiment, the face recognition device 2300 further includes:
the data acquisition module is used for acquiring training data, wherein the training data comprises a face area image and a corresponding face gesture label;
The face gesture training module is used for inputting the face region image into an initial gesture feature extraction network of an initial face gesture recognition model to perform feature extraction so as to obtain initial face gesture features; inputting the initial face posture features into an initial distance posture recognition network of an initial face posture recognition model to recognize, obtaining initial distance posture information, inputting the face posture features into an initial angle posture recognition network of the initial face posture recognition model to recognize, obtaining initial angle posture information, and obtaining initial face posture information according to the initial distance posture information and the initial angle posture information;
the recognition model obtaining module is used for calculating pose error information between the initial face pose information and the face pose label until the pose error information obtained in training meets the preset pose error condition, so as to obtain the face pose recognition model.
In one embodiment, the face recognition device 2300 further includes:
the information acquisition module is used for acquiring corresponding historical face pose information when the user identifier executes target operation;
the interval determining module is used for determining the face pose total interval corresponding to the user identifier according to each piece of historical face pose information;
The interval dividing module is used for dividing the total interval of the human face gestures to obtain each human face gesture subinterval;
the matching degree determining module is used for determining the face posture matching degree corresponding to each face posture subinterval according to each historical face posture information, and storing each face posture subinterval and the corresponding face posture matching degree in an associated mode.
In one embodiment, the face pose total section includes a face distance total section and a face angle total section; the interval dividing module comprises:
the dividing unit is used for dividing the total human face distance interval to obtain each human face distance subinterval; dividing the total human face angle interval to obtain each human face angle subinterval;
and the distribution map obtaining unit is used for combining each face distance subinterval and each face angle subinterval to obtain the distribution map of the face gesture subinterval.
In one embodiment, the profile obtaining unit is further configured to establish a planar area according to the face distance total interval and the face angle total interval; dividing the plane area according to each face distance subinterval and each face angle subinterval to obtain a plane subinterval corresponding to each face posture subinterval; and forming a distribution diagram of the human face posture subinterval according to each plane subzone.
In one embodiment, the matching degree determining module is further configured to count the total number of each historical face pose information; determining face pose sub-intervals matched with the historical face pose information, and counting the number of the historical face pose information in each face pose sub-interval; and calculating according to the number and total number of the historical face gesture information in each face gesture subinterval to obtain the face gesture matching degree corresponding to each face gesture subinterval.
In one embodiment, the face pose information includes a face distance pose parameter and a face angle pose parameter; the gesture matching module 2308 is also configured to obtain an established profile; determining a target plane subarea from the distribution map according to the face distance posture parameter and the face angle posture parameter in the face posture information; and acquiring a face gesture subinterval corresponding to the target plane sub-area as a target face gesture subinterval.
In one embodiment, the result determining module 2310 is further configured to obtain a face identity weight corresponding to the face identity matching degree and a face pose weight corresponding to the face pose matching degree; carrying out weighted calculation according to the face identity weight and the face identity matching degree to obtain the face identity weighted matching degree; carrying out weighted calculation according to the face pose weight and the face pose matching degree to obtain the face pose weighted matching degree; and obtaining a target face matching degree according to the face identity weighted matching degree and the face posture weighted matching degree, and obtaining a face recognition result corresponding to the face image to be recognized as face recognition passing when the target face matching degree exceeds a preset threshold.
For specific limitations of the face recognition apparatus, reference may be made to the above limitations of the face recognition method, and no further description is given here. The respective modules in the above-described face recognition apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 24. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a face recognition method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 24 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, or the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
Claims (15)
1. A method of face recognition, the method comprising:
acquiring a face image to be identified, and identifying the face image to be identified to obtain a corresponding face identity matching degree;
image segmentation is carried out on the face image to be identified, so as to obtain a face area;
recognizing the face gesture corresponding to the face region to obtain face gesture information;
determining a target face pose sub-interval matched with the face pose information, and acquiring a corresponding face pose matching degree according to the target face pose sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face gesture matching degree.
2. The method according to claim 1, further comprising, after the determining the face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree:
And when the face recognition result is that the face recognition passes, executing corresponding target operation according to the face recognition passing result.
3. The method according to claim 1, wherein the image segmentation of the face image to be identified to obtain a face region includes:
inputting the face image to be recognized into a segmentation feature extraction network of an image segmentation model to obtain image segmentation features;
inputting the image segmentation features into an image classification network of the image segmentation model to obtain face pixel points and non-face pixel points, and determining the face region according to the face pixel points and the non-face pixel points.
4. A method according to claim 3, wherein the training of the image segmentation model comprises the steps of:
acquiring a training face image with face boundary marks;
inputting the training face image into an initial segmentation feature extraction network of an initial image segmentation model to obtain initial image segmentation features;
inputting the initial image segmentation features into an initial image classification network of the initial image segmentation model to obtain initial face pixel points and initial non-face pixel points, and determining an initial face region image according to the initial face pixel points and the initial non-face pixel points;
And calculating the region error information marked by the initial face region and the face boundary until the region error information obtained by training accords with a preset training completion condition, and obtaining the image segmentation model.
5. The method according to claim 1, wherein the identifying the face pose corresponding to the face area image to obtain face pose information includes:
and inputting the face region image into a face gesture recognition model to recognize, so as to obtain the face gesture information, wherein the face gesture recognition model is obtained by training the face region image and the corresponding face gesture label by using a multi-task regression model.
6. The method of claim 5, wherein the face pose information comprises distance pose information and angle pose information, and the face pose recognition model comprises a pose feature extraction network, a distance pose recognition network, and an angle pose recognition network;
the step of inputting the face area image into a face gesture recognition model for recognition to obtain the face gesture information comprises the following steps:
inputting the face region image into the gesture feature extraction network to extract features, so as to obtain face gesture features;
And inputting the face gesture features into the distance gesture recognition network for recognition to obtain distance gesture information, and simultaneously inputting the face gesture features into the angle gesture recognition network for recognition to obtain angle gesture information.
7. The method of claim 5, wherein the training of the face pose recognition model comprises the steps of:
acquiring training data, wherein the training data comprises a face area image and a corresponding face gesture label;
inputting the face region image into an initial pose feature extraction network of an initial face pose recognition model to extract features, so as to obtain initial face pose features;
inputting the initial face posture features into an initial distance posture recognition network of the initial face posture recognition model to recognize, obtaining initial distance posture information, and simultaneously inputting the face posture features into an initial angle posture recognition network of the initial face posture recognition model to recognize, obtaining initial angle posture information, and obtaining initial face posture information according to the initial distance posture information and the initial angle posture information;
And calculating the initial face posture information and the posture error information of the face posture mark until the training obtained posture error information accords with a preset posture error condition, and obtaining the face posture recognition model.
8. The method according to claim 1, further comprising, before the acquiring the face image to be recognized, recognizing the face image to be recognized to obtain the corresponding face identity matching degree, the steps of:
acquiring corresponding historical face pose information of a user identifier when a target operation is executed;
determining a face gesture total interval corresponding to the user identifier according to the historical face gesture information;
dividing the total human face gesture interval to obtain each human face gesture subinterval;
and determining the face pose matching degree corresponding to each face pose sub-interval according to the historical face pose information, and storing the face pose sub-intervals and the corresponding face pose matching degree in an associated mode.
9. The method of claim 8, wherein the total face pose interval comprises a total face distance interval and a total face angle interval;
dividing the total human face gesture interval to obtain each human face gesture subinterval, wherein the dividing comprises the following steps:
Dividing the total human face distance interval to obtain each human face distance subinterval;
dividing the total face angle interval to obtain each face angle subinterval;
and combining the face distance subintervals and the face angle subintervals to obtain a distribution diagram of the face gesture subintervals.
10. The method of claim 9, wherein the combining the face distance subintervals and the face angle subintervals to obtain the distribution map of the face pose subintervals comprises:
establishing a plane area according to the face distance total interval and the face angle total interval;
performing region division on the plane region according to the face distance subintervals and the face angle subintervals to obtain plane sub-regions corresponding to the face gesture subintervals;
and forming a distribution diagram of the face gesture subinterval according to the plane subareas.
11. The method according to claim 8, wherein determining the face pose matching degree corresponding to the face pose sub-intervals according to the historical face pose information includes:
Counting the total number of the historical face pose information;
determining face pose sub-intervals matched with the historical face pose information, and counting the number of the historical face pose information in the face pose sub-intervals;
and calculating the face pose matching degree corresponding to each face pose sub-interval according to the number of the historical face pose information in each face pose sub-interval and the total number.
12. The method of claim 1, wherein the face pose information includes a face distance pose parameter and a face angle pose parameter;
the determining a target face pose sub-interval matched with the face pose information comprises the following steps:
acquiring an established distribution diagram;
determining a target plane subarea from the distribution map according to the face distance posture parameter and the face angle posture parameter in the face posture information;
and acquiring a face gesture subinterval corresponding to the target plane sub-area as the target face gesture subinterval.
13. A face recognition device, the device comprising:
the identity matching module is used for acquiring a face image to be identified and obtaining a corresponding face identity matching degree according to the face image to be identified;
The image segmentation module is used for carrying out image segmentation on the face image to be identified to obtain a face area;
the gesture recognition module is used for recognizing the facial gesture corresponding to the facial region to obtain facial gesture information;
the gesture matching module is used for determining a target face gesture subinterval matched with the face gesture information and acquiring a corresponding face gesture matching degree according to the target face gesture subinterval;
and the result determining module is used for determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face gesture matching degree.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 12 when the computer program is executed.
15. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 12.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010902551.2A CN112001932B (en) | 2020-09-01 | 2020-09-01 | Face recognition method, device, computer equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010902551.2A CN112001932B (en) | 2020-09-01 | 2020-09-01 | Face recognition method, device, computer equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112001932A CN112001932A (en) | 2020-11-27 |
| CN112001932B true CN112001932B (en) | 2023-10-31 |
Family
ID=73465531
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010902551.2A Active CN112001932B (en) | 2020-09-01 | 2020-09-01 | Face recognition method, device, computer equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112001932B (en) |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112529073A (en) * | 2020-12-07 | 2021-03-19 | 北京百度网讯科技有限公司 | Model training method, attitude estimation method and apparatus, and electronic device |
| CN112528858A (en) * | 2020-12-10 | 2021-03-19 | 北京百度网讯科技有限公司 | Training method, device, equipment, medium and product of human body posture estimation model |
| CN113160475A (en) * | 2021-04-21 | 2021-07-23 | 深圳前海微众银行股份有限公司 | Access control method, device, equipment and computer readable storage medium |
| CN113822287B (en) * | 2021-11-19 | 2022-02-22 | 苏州浪潮智能科技有限公司 | Image processing method, system, device and medium |
| CN114360013B (en) * | 2021-12-30 | 2025-07-25 | 信丰世嘉科技有限公司 | High-precision face recognition camera |
| CN114330565A (en) | 2021-12-31 | 2022-04-12 | 深圳集智数字科技有限公司 | Face recognition method and device |
| CN114550088B (en) * | 2022-02-22 | 2022-12-13 | 北京城建设计发展集团股份有限公司 | Multi-camera fused passenger identification method and system and electronic equipment |
| CN114565814B (en) * | 2022-02-25 | 2024-07-09 | 深圳平安智慧医健科技有限公司 | Feature detection method and device and terminal equipment |
| CN115331280A (en) * | 2022-07-01 | 2022-11-11 | 网易(杭州)网络有限公司 | Face identification method, device and electronic device |
| CN115564387A (en) * | 2022-10-19 | 2023-01-03 | 常州瀚森科技股份有限公司 | Digital intelligent operation management method and system for industrial park under digital economic condition |
| CN117058738B (en) * | 2023-08-07 | 2024-05-03 | 深圳市华谕电子科技信息有限公司 | Remote face detection and recognition method and system for mobile law enforcement equipment |
| CN119810869B (en) * | 2024-12-19 | 2025-08-15 | 北京迈道科技有限公司 | Method, system, equipment and storage medium for detecting correct wearing of safety helmet for construction site |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2011065952A1 (en) * | 2009-11-30 | 2011-06-03 | Hewlett-Packard Development Company, L.P. | Face recognition apparatus and methods |
| CN102087702A (en) * | 2009-12-04 | 2011-06-08 | 索尼公司 | Image processing device, image processing method and program |
| CN105117463A (en) * | 2015-08-24 | 2015-12-02 | 北京旷视科技有限公司 | Information processing method and information processing device |
| KR20160042646A (en) * | 2014-10-10 | 2016-04-20 | 인하대학교 산학협력단 | Method of Recognizing Faces |
| CN106295480A (en) * | 2015-06-09 | 2017-01-04 | 上海戏剧学院 | Multi-orientation Face identification interactive system |
| US9971933B1 (en) * | 2017-01-09 | 2018-05-15 | Ulsee Inc. | Facial image screening method and face recognition system thereof |
| WO2019042195A1 (en) * | 2017-08-31 | 2019-03-07 | 杭州海康威视数字技术股份有限公司 | Method and device for recognizing identity of human target |
| CN110287880A (en) * | 2019-06-26 | 2019-09-27 | 西安电子科技大学 | A Pose Robust Face Recognition Method Based on Deep Learning |
| CN110427849A (en) * | 2019-07-23 | 2019-11-08 | 深圳前海达闼云端智能科技有限公司 | Face pose determination method and device, storage medium and electronic equipment |
| CN110647865A (en) * | 2019-09-30 | 2020-01-03 | 腾讯科技(深圳)有限公司 | Face gesture recognition method, device, equipment and storage medium |
| CN111160307A (en) * | 2019-12-31 | 2020-05-15 | 帷幄匠心科技(杭州)有限公司 | Face recognition method and face recognition card punching system |
| CN111199029A (en) * | 2018-11-16 | 2020-05-26 | 株式会社理光 | Face recognition device and face recognition method |
| CN111310512A (en) * | 2018-12-11 | 2020-06-19 | 杭州海康威视数字技术股份有限公司 | User identity authentication method and device |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7127087B2 (en) * | 2000-03-27 | 2006-10-24 | Microsoft Corporation | Pose-invariant face recognition system and process |
| JP5423379B2 (en) * | 2009-08-31 | 2014-02-19 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
| KR101130817B1 (en) * | 2011-09-27 | 2012-04-16 | (주)올라웍스 | Face recognition method, apparatus, and computer-readable recording medium for executing the method |
- 2020-09-01: Application CN202010902551.2A filed in China (CN), granted as patent CN112001932B (legal status: Active)
Patent Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2011065952A1 (en) * | 2009-11-30 | 2011-06-03 | Hewlett-Packard Development Company, L.P. | Face recognition apparatus and methods |
| CN102087702A (en) * | 2009-12-04 | 2011-06-08 | 索尼公司 | Image processing device, image processing method and program |
| KR20160042646A (en) * | 2014-10-10 | 2016-04-20 | 인하대학교 산학협력단 | Method of Recognizing Faces |
| CN106295480A (en) * | 2015-06-09 | 2017-01-04 | 上海戏剧学院 | Multi-orientation Face identification interactive system |
| CN105117463A (en) * | 2015-08-24 | 2015-12-02 | 北京旷视科技有限公司 | Information processing method and information processing device |
| US9971933B1 (en) * | 2017-01-09 | 2018-05-15 | Ulsee Inc. | Facial image screening method and face recognition system thereof |
| WO2019042195A1 (en) * | 2017-08-31 | 2019-03-07 | 杭州海康威视数字技术股份有限公司 | Method and device for recognizing identity of human target |
| CN111199029A (en) * | 2018-11-16 | 2020-05-26 | 株式会社理光 | Face recognition device and face recognition method |
| CN111310512A (en) * | 2018-12-11 | 2020-06-19 | 杭州海康威视数字技术股份有限公司 | User identity authentication method and device |
| CN110287880A (en) * | 2019-06-26 | 2019-09-27 | 西安电子科技大学 | A Pose Robust Face Recognition Method Based on Deep Learning |
| CN110427849A (en) * | 2019-07-23 | 2019-11-08 | 深圳前海达闼云端智能科技有限公司 | Face pose determination method and device, storage medium and electronic equipment |
| CN110647865A (en) * | 2019-09-30 | 2020-01-03 | 腾讯科技(深圳)有限公司 | Face gesture recognition method, device, equipment and storage medium |
| CN111160307A (en) * | 2019-12-31 | 2020-05-15 | 帷幄匠心科技(杭州)有限公司 | Face recognition method and face recognition card punching system |
Non-Patent Citations (5)
| Title |
|---|
| Design of face detection and recognition system for smart home security application; Dwi Ana Ratna Wati et al.; 2017 2nd International Conferences on Information Technology, Information Systems and Electrical Engineering; 342-347 * |
| Face Recognition with Integrating Multiple Cues; Zhaocui Han et al.; J. Sign. Process. Syst.; 391-404 * |
| Implicit elastic matching with random projections for pose-variant face recognition; John Wright et al.; 2009 IEEE Conference on Computer Vision and Pattern Recognition; 1502-1509 * |
| Research on pose and expression face recognition based on the Candide-3 model; Du Xingjing et al.; Computer Engineering and Design; Vol. 33, No. 3; 1017-1021 * |
| Face recognition based on deep learning; Cheng Fuyun; China Master's Theses Full-text Database, Information Science and Technology; full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112001932A (en) | 2020-11-27 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| CN112001932B (en) | Face recognition method, device, computer equipment and storage medium | |
| CN110555481B (en) | Portrait style recognition method, device and computer readable storage medium | |
| CN112348117B (en) | Scene recognition method, device, computer equipment and storage medium | |
| CN111191539B (en) | Certificate authenticity verification method and device, computer equipment and storage medium | |
| CN112364827B (en) | Face recognition method, device, computer equipment and storage medium | |
| CN107153817B (en) | Pedestrian re-identification data labeling method and device | |
| CN112132099A (en) | Identification method, palmprint key point detection model training method and device | |
| WO2021139324A1 (en) | Image recognition method and apparatus, computer-readable storage medium and electronic device | |
| CN113449704B (en) | Face recognition model training method and device, electronic equipment and storage medium | |
| CN110660078B (en) | Object tracking method, device, computer equipment and storage medium | |
| CN111414888A (en) | Low-resolution face recognition method, system, device and storage medium | |
| US11403875B2 (en) | Processing method of learning face recognition by artificial intelligence module | |
| CN111507285A (en) | Face attribute recognition method and device, computer equipment and storage medium | |
| CN113762249B (en) | Image attack detection and image attack detection model training method and device | |
| CN113706550A (en) | Image scene recognition and model training method and device and computer equipment | |
| CN111382638A (en) | Image detection method, device, equipment and storage medium | |
| CN111898561A (en) | Face authentication method, device, equipment and medium | |
| CN113298158A (en) | Data detection method, device, equipment and storage medium | |
| CN115223022A (en) | Image processing method, device, storage medium and equipment | |
| CN110175500B (en) | Finger vein comparison method, device, computer equipment and storage medium | |
| CN114861241A (en) | Anti-peeping screen method based on intelligent detection and related equipment thereof | |
| Wang et al. | A study of convolutional sparse feature learning for human age estimate | |
| Ying et al. | Dynamic random regression forests for real-time head pose estimation | |
| CN115797990A (en) | Image classification method, image processing method, image classification device and storage medium | |
| CN113269176B (en) | Image processing model training method, image processing device and computer equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |