CN113762013B - Method and device for face recognition - Google Patents
Method and device for face recognition
- Publication number
- CN113762013B (application CN202011389216.3A)
- Authority
- CN
- China
- Prior art keywords
- face
- processed
- image
- face object
- detection frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a method and a device for face recognition. One embodiment of the method comprises the following steps: for each frame of image to be processed, acquiring a detection frame and face features of each face object in the image to be processed; predicting a predicted detection frame of the face object in the image to be processed according to the historical track of the face object in the video to be processed; for each face object in the image to be processed, adding the face features of the face object to the face feature set of the face object corresponding to the matched predicted detection frame, based on the matching degree between the detection frame of the face object and the predicted detection frames in the image to be processed; and, in response to determining that a face object in the image to be processed has disappeared, matching the face feature set of the disappeared face object with the face feature sets in a preset feature library and associating the matched face feature sets, thereby improving recognition accuracy.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for face recognition.
Background
Currently, mainstream face recognition methods generally match the feature information of a currently acquired face object against the multiple pieces of face feature information stored under each face object in a feature library. When the feature library contains a face object whose feature information matches that of the current face object, the feature information of the current face image is associated with that matched face object.
Disclosure of Invention
The embodiment of the application provides a method and a device for face recognition.
In a first aspect, an embodiment of the present application provides a method for face recognition, including: for each frame of image to be processed in the video to be processed, acquiring a detection frame and face features of each face object in the image to be processed; predicting track information of the face objects that have appeared in the image to be processed according to the historical tracks of the face objects that appeared in the video to be processed before the image to be processed, obtaining predicted detection frames; for each face object in the image to be processed, adding the face features of the face object to the face feature set of the face object corresponding to the predicted detection frame matched with the detection frame of the face object, based on the matching degree between the detection frame of the face object and the predicted detection frames in the image to be processed; and, in response to determining that a face object in the image to be processed has disappeared, matching the face feature set of the disappeared face object with the face feature sets in a preset feature library and associating the matched face feature sets, wherein each face feature set in the preset feature library is correspondingly provided with a face identifier.
In some embodiments, the above method further comprises: in response to determining that no predicted detection frame among the predicted detection frames of the image to be processed matches the detection frame of a face object, creating a corresponding face feature set for that face object and adding the face features of the face object to the face feature set corresponding to it.
In some embodiments, the matching, in response to determining that a face object in the image to be processed has disappeared, of the face feature set of the disappeared face object with the face feature sets in the preset feature library, and the associating of the matched face feature sets, where each face feature set in the preset feature library is correspondingly provided with a face identifier, includes: in response to determining that a face object in the image to be processed has disappeared, selecting a preset number of face features from the face feature set of the disappeared face object; and matching the preset number of face features with the face feature sets in the preset feature library and associating the matched face feature sets.
In some embodiments, the above method further comprises: in response to determining that no face feature set in the preset feature library matches the face feature set of the disappeared face object, adding the face feature set of the disappeared face object to the preset feature library and assigning a corresponding face identifier to the disappeared face object.
In some embodiments, the above method further comprises: determining attribute information of the target person represented by each face object according to the face feature set corresponding to that face object.
In some embodiments, the above method further comprises: performing data statistics on the attribute information of the target persons represented by the face objects to obtain preset data.
In some embodiments, the above method further comprises: acquiring images from the video to be processed at a preset time interval through an edge device to obtain multiple frames of images to be processed; and performing face detection and face feature extraction on each frame of image to be processed through the edge device to obtain the detection frame and face features of the face objects in each frame of image to be processed.
In a second aspect, an embodiment of the present application provides an apparatus for face recognition, including: an acquisition unit configured to acquire, for each frame of image to be processed in the video to be processed, a detection frame and face features of each face object in the image to be processed; a prediction unit configured to predict track information of the face objects that have appeared in the image to be processed according to the historical tracks of the face objects that appeared in the video to be processed before the image to be processed, obtaining predicted detection frames; a first matching unit configured to add, for each face object in the image to be processed, the face features of the face object to the face feature set of the face object corresponding to the predicted detection frame matched with the detection frame of the face object, based on the matching degree between the detection frame of the face object and the predicted detection frames in the image to be processed; and a second matching unit configured to match, in response to determining that a face object in the image to be processed has disappeared, the face feature set of the disappeared face object with the face feature sets in a preset feature library, and to associate the matched face feature sets, wherein each face feature set in the preset feature library is correspondingly provided with a face identifier.
In some embodiments, the apparatus further comprises: a creating unit configured to create a corresponding face feature set for a face object and add the face features of the face object to that face feature set, in response to determining that no predicted detection frame among the predicted detection frames of the image to be processed matches the detection frame of the face object.
In some embodiments, the second matching unit is further configured to: in response to determining that a face object in the image to be processed has disappeared, select a preset number of face features from the face feature set of the disappeared face object; and match the preset number of face features with the face feature sets in the preset feature library and associate the matched face feature sets.
In some embodiments, the apparatus further comprises: an adding unit configured to, in response to determining that no face feature set in the preset feature library matches the face feature set of the disappeared face object, add the face feature set of the disappeared face object to the preset feature library and assign a corresponding face identifier to the disappeared face object.
In some embodiments, the apparatus further comprises: a determining unit configured to determine attribute information of the target person represented by each face object according to the face feature set corresponding to that face object.
In some embodiments, the apparatus further comprises: a statistics unit configured to perform data statistics on the attribute information of the target persons represented by the face objects to obtain preset data.
In some embodiments, the apparatus further comprises: an acquisition unit configured to acquire images from the video to be processed at a preset time interval through the edge device to obtain multiple frames of images to be processed; and an obtaining unit configured to perform face detection and face feature extraction on each frame of image to be processed through the edge device to obtain the detection frame and face features of the face objects in each frame of image to be processed.
In a third aspect, embodiments of the present application provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as described in any of the implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
According to the method and the device for face recognition provided by the embodiments of the application, a detection frame and face features are acquired for each face object in each frame of image to be processed in the video to be processed; track information of the face objects that have appeared in the image to be processed is predicted according to their historical tracks in the video to be processed before the image to be processed, obtaining predicted detection frames; for each face object in the image to be processed, the face features of the face object are added to the face feature set of the face object corresponding to the predicted detection frame matched with the detection frame of the face object, based on the matching degree between the detection frame of the face object and the predicted detection frames in the image to be processed; and, in response to determining that a face object in the image to be processed has disappeared, the face feature set of the disappeared face object is matched with the face feature sets in a preset feature library and the matched face feature sets are associated, wherein each face feature set in the preset feature library is correspondingly provided with a face identifier. The low accuracy of matching on a single face feature is thereby avoided, and recognition accuracy is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for face recognition according to the present application;
FIG. 3 is a schematic diagram of an application scenario of the method for face recognition according to the present embodiment;
FIG. 4 is a flow chart of yet another embodiment of a method for face recognition according to the present application;
FIG. 5 is a block diagram of one embodiment of an apparatus for face recognition according to the present application;
FIG. 6 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Fig. 1 illustrates an exemplary architecture 100 for a method and apparatus for face recognition to which the present application may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The communication connection between the terminal devices 101, 102, 103 constitutes a topology network, the network 104 being the medium for providing the communication link between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 101, 102, 103 may be hardware devices or software supporting network connections for data interaction and data processing. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices supporting network connection, information acquisition, interaction, display, processing, etc., including, but not limited to, cameras, smartphones, tablets, electronic book readers, laptop and desktop computers, etc. When the terminal devices 101, 102, 103 are software, they can be installed in the above-listed electronic devices. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
The server 105 may be a server that provides various services, for example, a background processing server that receives a detection frame and a face feature of a face object in an image to be processed transmitted by the terminal devices 101, 102, 103, and performs face recognition. The background processing server matches the face feature set of the face object with the face feature set in the preset feature library, and associates the matched face feature set. As an example, the server 105 may be a cloud server.
It should be noted that, the server may be hardware, or may be software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (e.g., software or software modules for providing distributed services), or as a single software or software module. The present invention is not particularly limited herein.
It should also be noted that, the method for face recognition provided by the embodiment of the present disclosure may be performed by a server, or may be performed by a terminal device, or may be performed by the server and the terminal device in cooperation with each other. Accordingly, each part (for example, each unit, sub-unit, module, sub-module) included in the apparatus for face recognition may be all disposed in the server, may be all disposed in the terminal device, or may be disposed in the server and the terminal device, respectively.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. When the electronic device on which the method for face recognition operates does not need to perform data transmission with other electronic devices, the system architecture may include only the electronic device (e.g., a server or a terminal device) on which the method for face recognition operates.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for face recognition is shown, comprising the steps of:
Step 201, for each frame of image to be processed in the video to be processed, acquiring a detection frame and a face feature of a face object in the image to be processed.
In this embodiment, for each frame of the image to be processed in the video to be processed, the execution subject (e.g., the server in fig. 1) of the method for face recognition may acquire the detection frame and the face feature of the face object in the image to be processed from a remote location or from a local location through a wired connection or a wireless connection.
The video to be processed may be a video containing any content. As an example, the video to be processed may be a surveillance video of a particular scene acquired by a monitoring device; specifically, it may be a video, acquired in real time by a camera, characterizing the flow of people at the entrances and exits of a supermarket. An image to be processed is a video frame contained in the video to be processed. The detection frame of a face object represents the face region of that face object in the image to be processed, and the face features characterize the feature information of the face object within the detection frame. It can be understood that each face object in the image to be processed corresponds one-to-one with a detection frame and a face feature.
In some optional implementations of this embodiment, the execution body acquires images from the video to be processed at a preset time interval through an edge device, obtaining multiple frames of images to be processed, and performs face detection and face feature extraction on each frame of image to be processed through the edge device, obtaining the detection frame and face features of the face objects in each frame of image to be processed.
As an example, the edge device may determine the detection frame of a face object in the image to be processed based on a face detection model, and determine the face features of the face object within the detection frame based on a feature extraction model. The face detection model and the feature extraction model may be, for example, convolutional neural network models, recurrent neural network models, residual network models, and the like.
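As an illustration of this two-stage pipeline, the following minimal Python sketch runs a detector and then embeds each detected face crop. The `Detection` container and the `detector`/`extractor` callable interfaces are assumptions made for illustration; the patent does not prescribe any particular model API.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels


@dataclass
class Detection:
    box: Box             # face region of the face object in the frame
    feature: np.ndarray  # feature vector characterizing the face in the box


def detect_and_extract(
    frame: np.ndarray,
    detector: Callable[[np.ndarray], List[Box]],
    extractor: Callable[[np.ndarray], np.ndarray],
) -> List[Detection]:
    """Run face detection, then extract a feature for each detected crop.

    The detector/extractor interfaces are illustrative assumptions, not
    the patent's API; any detection and embedding models can fill them.
    """
    detections = []
    for (x1, y1, x2, y2) in detector(frame):
        crop = frame[int(y1):int(y2), int(x1):int(x2)]  # cut out the face region
        detections.append(Detection((x1, y1, x2, y2), np.asarray(extractor(crop))))
    return detections
```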
The edge device may be a terminal device, as shown in fig. 1, with relatively small computing power and storage space; compared with the edge device, the execution body has larger computing power and storage space. The preset time interval can be set according to the actual situation; for example, it may be 500 milliseconds. In this implementation, the edge device and the execution body each play their part, improving the information processing efficiency of the face recognition method.
The edge device is responsible for processing the images to be processed to obtain the detection frame and face features of each face object they contain, so the data transmitted between the edge device and the execution body are detection frames and face features rather than the original images to be processed. It can be appreciated that, compared with transmitting the images themselves, transmitting detection frames and face features improves the security of the information interaction between the edge device and the execution body.
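A minimal sketch of the resulting edge-to-server payload, reusing the hypothetical `Detection` container from the sketch above. The JSON wire format and field names are assumptions; the point is only that boxes and feature vectors, not raw frames, leave the edge device.

```python
import json
from typing import List


def payload_for_server(frame_id: int, detections: List["Detection"]) -> str:
    # Illustrative wire format (an assumption, not specified by the patent).
    # Only detection boxes and feature vectors are transmitted; the raw
    # frame never leaves the edge device.
    return json.dumps({
        "frame_id": frame_id,
        "faces": [
            {"box": list(d.box), "feature": d.feature.tolist()}
            for d in detections
        ],
    })
```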
Step 202, predicting track information of the face object in the image to be processed according to the historical track of the face object in the video to be processed before the image to be processed, and obtaining a prediction detection frame.
In this embodiment, the execution body may predict the track information of the face object that has appeared in the to-be-processed image according to the historical track of the face object that has appeared in the to-be-processed video before the to-be-processed image, so as to obtain the prediction detection frame.
For every frame of image to be processed, from the initial frame up to the current image to be processed, the execution body detects the face objects it contains and obtains the corresponding detection frames. It will be appreciated that the same face object occupies different detection frames in different images to be processed. For each face object in the video to be processed, starting from the frame in which its target person first appears, the detection frame information corresponding to the face object of that target person can be combined into a historical track characterizing the motion of the target person's face.
As an example, the execution body may determine offset information of a detection frame of a face object between adjacent images to be processed, and determine track information of the face object in the images to be processed, that is, a predicted detection frame of the face object in the images to be processed, according to a time interval between the adjacent images to be processed and a history track.
As yet another example, the execution subject may determine a predicted detection frame of the face object in the image to be processed through a trajectory prediction model. The track prediction model is used for predicting a prediction detection frame of the face object in the image to be processed according to the historical track of the target object.
For each face object in the image to be processed, the execution body predicts the predicted detection frames of the face object in the image to be processed according to the historical track of the face object, so as to obtain the predicted detection frames of all the face objects included in the image to be processed.
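A minimal sketch of the offset-based prediction in the first example above, assuming constant velocity between the last two track entries and a `track` stored as `(timestamp, box)` pairs; a Kalman filter or a learned trajectory model (the second example) could equally fill this role.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]


def predict_box(track: List[Tuple[float, Box]], dt: float) -> Box:
    """Extrapolate the next detection box from the last two track entries.

    Constant-velocity extrapolation is one illustrative choice, not the
    patent's required method; assumes the track has at least two entries.
    """
    (t_prev, b_prev), (t_last, b_last) = track[-2], track[-1]
    span = max(t_last - t_prev, 1e-6)  # guard against a zero time interval
    # Per-coordinate velocity estimated from the offset between frames.
    velocity = [(c1 - c0) / span for c0, c1 in zip(b_prev, b_last)]
    return tuple(c + v * dt for c, v in zip(b_last, velocity))
```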
Step 203, for each face object in the image to be processed, adding the face feature of the face object to the face feature set of the face object corresponding to the prediction detection frame matched with the detection frame of the face object based on the matching degree of the detection frame of the face object and the prediction detection frame in the image to be processed.
In this embodiment, for each face object in the image to be processed, the executing body adds the face feature of the face object to the face feature set of the face object corresponding to the prediction detection frame matched with the detection frame of the face object based on the matching degree of the detection frame of the face object and the prediction detection frame in the image to be processed.
As an example, for each face object in the image to be processed, the execution body may take the IoU (Intersection over Union) between the detection frame of the face object and a predicted detection frame as the matching degree. When the IoU between the detection frame of the face object and a predicted detection frame is greater than a preset threshold, the detection frame and that predicted detection frame may be considered matched, and the face objects corresponding to the matched detection frame and predicted detection frame are considered to be face objects of the same target person. It is understood that when the IoU between the detection frame of the face object and a predicted detection frame is not greater than the preset threshold, it may be determined that the detection frame of the face object does not match that predicted detection frame.
The preset threshold value can be specifically set according to actual situations. For example, the preset threshold may be 0.8.
The detection frame of each face object in the image to be processed is matched against the multiple predicted detection frames in the image to be processed, the matched detection frame and predicted detection frame are determined, and the face features of the face object are added to the face feature set of the face object corresponding to the predicted detection frame matched with its detection frame. It can be understood that, for each face object, the face feature set of the face object includes multiple face features, namely the face features of that face object in the images to be processed up to and including the current image.
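The following sketch illustrates this association step: an IoU function and a one-to-one assignment against a preset threshold such as the 0.8 mentioned above. Greedy assignment is an assumption (the patent only specifies comparing IoU to a threshold; Hungarian matching could equally be used), and detections left unmatched correspond to the newly appearing face objects handled below.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def match_detections(
    detections: List[Box], predicted: List[Box], threshold: float = 0.8
) -> Tuple[Dict[int, int], List[int]]:
    """Pair each detection with its best predicted box above the threshold.

    Greedy one-to-one assignment is an illustrative assumption. Returns
    (matches, unmatched): matches maps detection index to predicted-track
    index; unmatched lists detections with no matching predicted box.
    """
    matches: Dict[int, int] = {}
    used = set()
    for i, det in enumerate(detections):
        best_j, best_score = None, threshold
        for j, pred in enumerate(predicted):
            if j in used:
                continue
            score = iou(det, pred)
            if score > best_score:  # strictly greater than the threshold
                best_j, best_score = j, score
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    unmatched = [i for i in range(len(detections)) if i not in matches]
    return matches, unmatched
```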
In this embodiment, for each face object, the execution body may further add a detection frame of the face object to a history track of the face object corresponding to a prediction detection frame that matches the detection frame of the face object.
In some optional implementations of this embodiment, in response to determining that there is no predicted detection frame matching the detection frame of the face object in the predicted detection frames of the image to be processed, a corresponding set of face features is created for the face object, and the face features of the face object are added to the set of face features corresponding to the face object.
When no predicted detection frame among the predicted detection frames of the image to be processed matches the detection frame of a face object, the image to be processed contains a face object that did not appear in the previous images to be processed. A corresponding face feature set then needs to be created for the newly appearing face object, and the face features of that face object added to the face feature set corresponding to it.
Step 204, in response to determining that the face object in the image to be processed disappears, matching the face feature set of the face object disappeared with the face feature set in the preset feature library, and associating the matched face feature set.
In this embodiment, the executing body matches a face feature set of a face object that disappears with a face feature set in a preset feature library in response to determining that the face object in the image to be processed disappears, and associates the matched face feature sets, where each face feature set in the preset feature library is correspondingly provided with a face identifier.
The disappearance of a face object means that a face object that appeared in the video to be processed before the image to be processed is no longer contained in the image to be processed. As an example, when the face object of target person A is contained in the previous frame of the image to be processed but not in the image to be processed itself, it is determined that a face object has disappeared from the image to be processed.
In this embodiment, to avoid the inaccuracy of judging a face object's presence from a single frame of image to be processed, the execution body determines that a face object has disappeared only in response to determining that the face object has not appeared in a specific number of consecutive frames up to and including the current image to be processed. The specific number can be set according to the actual situation; for example, it may be 5.
As an example, in response to determining that a face object in the image to be processed has disappeared, the execution body matches each face feature in the face feature set of the disappeared face object against the face features in the face feature sets of the preset feature library according to Euclidean distance, taking the two face features with the smallest Euclidean distance as matched; in this way, a matching face feature is determined in the preset feature library for each face feature in the disappeared face object's feature set. For each face feature of the disappeared face object, the execution body takes the face identifier corresponding to the matched feature in the preset feature library as the face identifier corresponding to that face feature. Furthermore, the execution body may weight the face distances and the number of occurrences of each face identifier based on preset weights to obtain a comprehensive score, and thereby determine the face object in the preset feature library corresponding to the disappeared face object. Here, the face distance is the Euclidean distance between a face feature in the face object's feature set and the matched face feature in the preset feature library. By matching whole face feature sets, the low accuracy of matching on a single face feature is avoided, and recognition accuracy is improved.
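A sketch of this set-to-set matching under explicit assumptions: each feature of the disappeared object votes for the face identifier of its nearest library feature (Euclidean distance), and identifiers are then ranked by a weighted combination of vote share and mean distance. The patent states only that face distance and identifier count are weighted into a comprehensive score; the exact formula and the weights below are illustrative guesses.

```python
from collections import Counter
from typing import Dict, List

import numpy as np


def match_feature_set(
    lost_features: List[np.ndarray],
    library: Dict[str, np.ndarray],  # face_id -> (n_i, d) array of features
    w_votes: float = 0.5,
    w_dist: float = 0.5,
) -> str:
    """Pick the library face_id best matching a disappeared object's feature set.

    The scoring formula and weights are illustrative assumptions; assumes
    a non-empty library and a non-empty lost_features list.
    """
    votes: Counter = Counter()
    dists: Dict[str, List[float]] = {}
    for f in lost_features:
        best_id, best_d = None, float("inf")
        for face_id, feats in library.items():
            d = float(np.min(np.linalg.norm(feats - f, axis=1)))
            if d < best_d:
                best_id, best_d = face_id, d
        votes[best_id] += 1  # nearest-neighbor vote for this feature
        dists.setdefault(best_id, []).append(best_d)

    def score(face_id: str) -> float:
        mean_d = sum(dists[face_id]) / len(dists[face_id])
        # Higher vote share is better, larger mean distance is worse.
        return w_votes * votes[face_id] / len(lost_features) - w_dist * mean_d

    return max(votes, key=score)
```

In practice a minimum acceptable score would decide whether the best candidate counts as a match at all; when nothing qualifies, the disappeared object's feature set is added to the library as a new entry, as described below.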
In some optional implementations of this embodiment, in response to determining that a face object in the image to be processed has disappeared, the execution body selects a preset number of face features from the face feature set of the disappeared face object, matches the preset number of face features with the face feature sets in the preset feature library, and associates the matched face feature sets. The execution body may associate the matched face feature sets in any manner. As an example, it may add the face feature set of the face object in the image to be processed to the matched face feature set of the corresponding face object in the preset feature library.
The preset number can be set according to the actual situation. As an example, the preset number is 10: for each face object, the execution body may select the first and last frames in which the face object appears in the video to be processed, and randomly select 8 further frames of images to be processed between that first frame and last frame.
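A sketch of that sampling rule, assuming the disappeared object's features are kept in temporal order; the fallback for tracks shorter than the preset number is an assumption.

```python
import random
from typing import List


def sample_features(track_features: List, preset_number: int = 10) -> List:
    """First frame, last frame, and random intermediate frames (10 = 1 + 8 + 1)."""
    if len(track_features) <= preset_number:
        # Short track: use everything (fallback behavior is an assumption).
        return list(track_features)
    middle = random.sample(track_features[1:-1], preset_number - 2)
    return [track_features[0], *middle, track_features[-1]]
```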
In some optional implementations of this embodiment, in response to determining that no face feature set in the preset feature library matches the face feature set of the disappeared face object, the execution body adds the face feature set of the disappeared face object to the preset feature library and assigns a corresponding face identifier to the disappeared face object. Through this implementation, the preset feature library can be enriched.
With continued reference to fig. 3, fig. 3 is a schematic diagram 300 of an application scenario of the method for face recognition according to the present embodiment. In the application scenario of fig. 3, the edge device 301 performs face detection and face feature extraction on each frame of image to be processed, obtaining the detection frame and face features of the face objects in each frame, and sends them to the server 302. For each frame of image to be processed in the video to be processed, the server 302 obtains the detection frames 30311, 30312 and the face features 30321, 30322 of the face objects in the image to be processed 303; the detection frames 30311, 30312 correspond one-to-one with the face features 30321, 30322. Then, according to the historical tracks of the face objects that appeared in the video to be processed before the image to be processed, the track information of those face objects in the image to be processed is predicted, obtaining the predicted detection frames 30331 and 30332. For each face object in the image to be processed, the face features of the face object are added to the face feature set of the face object corresponding to the predicted detection frame matched with its detection frame, based on the matching degree between the detection frame and the predicted detection frames. Specifically, the detection frame 30311 matches the predicted detection frame 30331, and the server 302 adds the face feature 30321 corresponding to the detection frame 30311 to the face feature set 304 of the face object corresponding to the predicted detection frame 30331; the detection frame 30312 matches the predicted detection frame 30332, and the server 302 adds the face feature 30322 corresponding to the detection frame 30312 to the face feature set 305 of the face object corresponding to the predicted detection frame 30332. The server 302 then determines that a face object has disappeared: the disappeared face object is the face object 3061 in the last frame's image to be processed 306. The server 302 matches the face feature set 307 of the disappeared face object with the face feature sets 308, 309, 310 in the preset feature library, and associates the matched face feature sets 307 and 308.
According to the method provided by this embodiment of the disclosure, a detection frame and face features are acquired for each face object in each frame of image to be processed in the video to be processed; track information of the face objects that have appeared in the image to be processed is predicted from their historical tracks in the video to be processed before the image to be processed, obtaining predicted detection frames; for each face object in the image to be processed, the face features of the face object are added to the face feature set of the face object corresponding to the predicted detection frame matched with its detection frame, based on the matching degree between the detection frame and the predicted detection frames; and, in response to determining that a face object in the image to be processed has disappeared, the face feature set of the disappeared face object is matched with the face feature sets in a preset feature library and the matched face feature sets are associated, wherein each face feature set in the preset feature library is correspondingly provided with a face identifier. The low accuracy of matching on a single face feature is thereby avoided, and recognition accuracy is improved.
In some optional implementations of this embodiment, the executing body may further determine attribute information of the target person represented by each face object according to a face feature set corresponding to each face object.
The attribute information may be, for example, age, gender, whether a mask is worn, the duration for which the target person corresponding to the face object appears in the video to be processed, and so on. For each face object, the execution body may perform attribute recognition on each face feature in the face feature set of the face object to obtain the attribute information corresponding to each face feature, and then determine the attribute information of the face object from the multiple pieces of attribute information by averaging, weighted averaging, majority counting, or other means. As an example, the execution body may determine the age of the face object as an average value and determine the gender of the face object as the gender indicated by the larger number of features.
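A sketch of this per-object attribute fusion, assuming one prediction dict per stored face feature (the dict layout is an assumption): age is averaged, gender is decided by majority vote.

```python
from collections import Counter
from typing import Dict, List


def aggregate_attributes(per_feature_attrs: List[Dict]) -> Dict:
    """Fuse per-feature attribute predictions into one result per face object.

    The {"age": float, "gender": str} layout is an illustrative assumption.
    """
    ages = [a["age"] for a in per_feature_attrs]
    gender_votes = Counter(a["gender"] for a in per_feature_attrs)
    return {
        "age": sum(ages) / len(ages),                 # average over the set
        "gender": gender_votes.most_common(1)[0][0],  # majority vote
    }
```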
In some optional implementations of this embodiment, the executing body may further perform data statistics on attribute information of a target person represented by each face object to obtain preset data.
The preset data may be any information obtained by performing statistics on the attribute information. As an example, when the video to be processed is a surveillance video of a supermarket acquired by a monitoring device, the preset data may be data characterizing the flow of people in the supermarket in each time period, age distribution data of supermarket customers, and the like.
With continued reference to fig. 4, there is shown a schematic flow 400 of yet another embodiment of the method for face recognition according to the present application, comprising the steps of:
Step 401, acquiring images from the video to be processed at a preset time interval through the edge device, so as to obtain multiple frames of images to be processed.
Step 402, performing face detection and face feature extraction on each frame of image to be processed through the edge device, so as to obtain the detection frame and face features of the face objects in each frame of image to be processed. The feature extraction model used here characterizes the correspondence between a face object in a detection frame and its face feature information.
Step 403, for each frame of image to be processed in the video to be processed, acquiring a detection frame and a face feature of a face object in the image to be processed.
Step 404, predicting the track information of the face objects that have appeared in the image to be processed according to their historical tracks in the video to be processed before the image to be processed, and obtaining predicted detection frames.
Step 405, for each face object in the image to be processed, adding the face feature of the face object to the face feature set of the face object corresponding to the prediction detection frame matched with the detection frame of the face object based on the matching degree of the detection frame of the face object and the prediction detection frame in the image to be processed.
Step 406, in response to determining that the face object in the image to be processed disappears, matching the face feature set of the face object disappeared with the face feature set in the preset feature library, and associating the matched face feature set.
As can be seen, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for face recognition in this embodiment specifically illustrates that face detection and face feature extraction are performed by the edge device: the data sent by the edge device to the execution body are the detection frames and face features; the edge device is responsible for face detection and feature extraction while the execution body is responsible for face recognition. With each playing its part, the information processing efficiency is improved, as is the security of data transmission.
With continued reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an apparatus for face recognition, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus for face recognition includes: an obtaining unit 501 configured to obtain, for each frame of image to be processed in the video to be processed, a detection frame and face features of each face object in the image to be processed; a prediction unit 502 configured to predict track information of the face objects that have appeared in the image to be processed according to the historical tracks of the face objects that appeared in the video to be processed before the image to be processed, obtaining predicted detection frames; a first matching unit 503 configured to add, for each face object in the image to be processed, the face features of the face object to the face feature set of the face object corresponding to the predicted detection frame matched with the detection frame of the face object, based on the matching degree between the detection frame of the face object and the predicted detection frames in the image to be processed; and a second matching unit 504 configured to match, in response to determining that a face object in the image to be processed has disappeared, the face feature set of the disappeared face object with the face feature sets in a preset feature library, and to associate the matched face feature sets, wherein each face feature set in the preset feature library is correspondingly provided with a face identifier.
In some optional implementations of this embodiment, the apparatus further includes: a creating unit (not shown in the figure) configured to create a corresponding face feature set for the face object in response to determining that, among the prediction detection frames of the image to be processed, there is no prediction detection frame matching the detection frame of the face object, and add the face feature of the face object to the face feature set corresponding to the face object.
In some optional implementations of this embodiment, the second matching unit 504 is further configured to: in response to determining that a face object in the image to be processed has disappeared, select a preset number of face features from the face feature set of the disappeared face object; and match the preset number of face features with the face feature sets in the preset feature library and associate the matched face feature sets.
In some optional implementations of this embodiment, the apparatus further includes: an adding unit (not shown in the figure) configured to, in response to determining that no face feature set in the preset feature library matches the face feature set of the disappeared face object, add the face feature set of the disappeared face object to the preset feature library and assign a corresponding face identifier to the disappeared face object.
In some optional implementations of this embodiment, the apparatus further includes: a determining unit (not shown in the figure) configured to determine attribute information of the target person represented by each face object according to the face feature set corresponding to each face object.
In some embodiments, the apparatus further comprises: a statistics unit (not shown in the figure) configured to perform data statistics on the attribute information of the target persons represented by the face objects to obtain preset data.
In some optional implementations of this embodiment, the apparatus further includes: an acquisition unit (not shown in the figure) configured to acquire images from the video to be processed at a preset time interval through the edge device, obtaining multiple frames of images to be processed; and an obtaining unit (not shown in the figure) configured to perform face detection and face feature extraction on each frame of image to be processed through the edge device, obtaining the detection frame and face features of the face objects in each frame of image to be processed.
In this embodiment, the obtaining unit of the apparatus for face recognition obtains, for each frame of image to be processed in the video to be processed, a detection frame and face features of each face object in the image to be processed; the prediction unit predicts track information of the face objects that have appeared in the image to be processed according to their historical tracks in the video to be processed before the image to be processed, obtaining predicted detection frames; the first matching unit adds, for each face object in the image to be processed, the face features of the face object to the face feature set of the face object corresponding to the predicted detection frame matched with its detection frame, based on the matching degree between the detection frame and the predicted detection frames; and the second matching unit matches, in response to determining that a face object in the image to be processed has disappeared, the face feature set of the disappeared face object with the face feature sets in a preset feature library and associates the matched face feature sets, wherein each face feature set in the preset feature library is correspondingly provided with a face identifier. The low accuracy of matching on a single face feature is thereby avoided, and recognition accuracy is improved.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use with devices (e.g., devices 101, 102, 103, 105 shown in FIG. 1) implementing embodiments of the present application. The apparatus shown in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a processor (e.g., a central processing unit, CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the method of the application are performed when the computer program is executed by the processor 601.
The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the client computer, partly on the client computer, as a stand-alone software package, partly on the client computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the client computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, for example described as: a processor including an acquisition unit, a prediction unit, a first matching unit, and a second matching unit. The names of these units do not in some cases limit the units themselves; for example, the second matching unit may also be described as "a unit that, in response to determining that a face object in the image to be processed has disappeared, matches the face feature set of the disappeared face object with the face feature sets in a preset feature library and associates the matched face feature sets".
As another aspect, the present application also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments, or may exist alone without being fitted into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the computer device to: for each frame of image to be processed in the video to be processed, acquire a detection frame and face features of each face object in the image to be processed; predict track information of the face objects that have appeared in the image to be processed according to the historical tracks of the face objects that appeared in the video to be processed before the image to be processed, obtaining predicted detection frames; for each face object in the image to be processed, add the face features of the face object to the face feature set of the face object corresponding to the predicted detection frame matched with the detection frame of the face object, based on the matching degree between the detection frame of the face object and the predicted detection frames in the image to be processed; and, in response to determining that a face object in the image to be processed has disappeared, match the face feature set of the disappeared face object with the face feature sets in a preset feature library and associate the matched face feature sets, wherein each face feature set in the preset feature library is correspondingly provided with a face identifier.
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the application referred to in the present application is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept described above, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (16)
1. A method for face recognition, comprising:
for each frame of image to be processed in the video to be processed, acquiring a detection frame and a face feature of a face object in the image to be processed;
Predicting track information of the face object that has appeared in the image to be processed according to the historical track of the face object that appeared in the video to be processed before the image to be processed, and obtaining a prediction detection frame;
For each face object in the image to be processed, adding the face features of the face object to a face feature set of the face object corresponding to a prediction detection frame matched with the detection frame of the face object based on the matching degree of the detection frame of the face object and the prediction detection frame in the image to be processed;
In response to determining that the face object in the image to be processed disappears, matching a face feature set of the face object disappearing with a face feature set in a preset feature library, and associating the matched face feature set, wherein each face feature set in the preset feature library is correspondingly provided with a face mark.
2. The method of claim 1, further comprising:
in response to determining that no prediction detection frame in the image to be processed matches the detection frame of the face object, creating a corresponding face feature set for the face object, and adding the face features of the face object to the face feature set corresponding to the face object.
3. The method according to claim 1, wherein the matching, in response to determining that a face object in the image to be processed has disappeared, of the face feature set of the disappeared face object with face feature sets in the preset feature library, and the associating of the matched face feature sets, wherein each face feature set in the preset feature library is provided with a corresponding face identifier, comprises:
in response to determining that the face object in the image to be processed has disappeared, selecting a preset number of face features from the face feature set of the disappeared face object;
matching the preset number of face features with face feature sets in the preset feature library, and associating the matched face feature sets.
4. The method of claim 1, further comprising:
in response to determining that no face feature set in the preset feature library matches the face feature set of the disappeared face object, adding the face feature set of the disappeared face object to the preset feature library, and adding a corresponding face identifier for the disappeared face object.
5. The method of claim 1, further comprising:
determining attribute information of the target person represented by each face object according to the face feature set corresponding to the face object.
6. The method of claim 5, further comprising:
performing data statistics on the attribute information of the target persons represented by the face objects to obtain preset data.
7. The method of any of claims 1-6, further comprising:
acquiring images from the to-be-processed video at a preset time interval through an edge device, to obtain a plurality of frames of to-be-processed images;
performing face detection and face feature extraction on each frame of to-be-processed image through the edge device, to obtain the detection frame and face features of the face object in each frame of to-be-processed image.
8. An apparatus for face recognition, comprising:
an acquisition unit configured to acquire, for each frame of to-be-processed image in a to-be-processed video, a detection frame and face features of a face object in the to-be-processed image;
a prediction unit configured to predict track information of the face object in the image to be processed according to the historical track of the face object in the to-be-processed video up to the image to be processed, to obtain a prediction detection frame;
a first matching unit configured to add, for each face object in the image to be processed, the face features of the face object to the face feature set of the face object corresponding to the prediction detection frame matched with the detection frame of the face object, based on the degree of matching between the detection frame of the face object and the prediction detection frames in the image to be processed;
a second matching unit configured to, in response to determining that a face object in the image to be processed has disappeared, match the face feature set of the disappeared face object with face feature sets in a preset feature library and associate the matched face feature sets, wherein each face feature set in the preset feature library is provided with a corresponding face identifier.
9. The apparatus of claim 8, further comprising:
a creating unit configured to, in response to determining that no prediction detection frame in the image to be processed matches the detection frame of the face object, create a corresponding face feature set for the face object, and add the face features of the face object to the face feature set corresponding to the face object.
10. The apparatus of claim 8, wherein the second matching unit is further configured to:
in response to determining that the face object in the image to be processed has disappeared, select a preset number of face features from the face feature set of the disappeared face object; and match the preset number of face features with face feature sets in the preset feature library, and associate the matched face feature sets.
11. The apparatus of claim 8, further comprising:
an adding unit configured to, in response to determining that no face feature set in the preset feature library matches the face feature set of the disappeared face object, add the face feature set of the disappeared face object to the preset feature library, and add a corresponding face identifier for the disappeared face object.
12. The apparatus of claim 8, further comprising:
a determining unit configured to determine attribute information of the target person represented by each face object according to the face feature set corresponding to the face object.
13. The apparatus of claim 12, further comprising:
a statistics unit configured to perform data statistics on the attribute information of the target persons represented by the face objects to obtain preset data.
14. The apparatus of any of claims 8-13, further comprising:
a capture unit configured to acquire images from the to-be-processed video at a preset time interval through an edge device, to obtain a plurality of frames of to-be-processed images;
an obtaining unit configured to perform face detection and face feature extraction on each frame of to-be-processed image through the edge device, to obtain the detection frame and face features of the face object in each frame of to-be-processed image.
15. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
16. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
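For illustration, a minimal sketch of the feature-set-to-library matching of claims 3 and 4 follows. The sampled "preset number", the cosine-similarity measure, the threshold, and the identifier scheme are all assumptions here; the claims do not prescribe them:

```python
import numpy as np

def match_to_library(face_set, library, preset_n=5, sim_thresh=0.6):
    """Match a disappeared face object's feature set to the preset feature library.

    face_set: list of L2-normalised embeddings collected for one face object.
    library:  dict mapping face identifier -> list of embeddings.
    Returns the associated face identifier (existing, or newly added per claim 4).
    """
    rng = np.random.default_rng()
    feats = np.asarray(face_set)
    n = min(preset_n, len(face_set))            # "preset number" of features (claim 3)
    probe = rng.choice(feats, size=n, replace=False).mean(axis=0)
    probe /= np.linalg.norm(probe) + 1e-9

    best_id, best_sim = None, sim_thresh
    for face_id, gallery_feats in library.items():
        gallery = np.asarray(gallery_feats).mean(axis=0)
        gallery /= np.linalg.norm(gallery) + 1e-9
        sim = float(probe @ gallery)            # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = face_id, sim

    if best_id is None:
        # No match: add the set under a new face identifier (claim 4).
        best_id = f"face_{len(library)}"
        library[best_id] = list(face_set)
    else:
        # Match: associate the disappeared object's features with the library set.
        library[best_id].extend(face_set)
    return best_id
```

Likewise, purely as an illustration of the edge-device capture of claim 7, a sketch of sampling to-be-processed images from a video at a preset time interval; the use of OpenCV and the interval value are assumptions:

```python
import cv2

def sample_frames(video_path, interval_s=0.5):
    """Yield frames from the video at a preset time interval (0.5 s assumed)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, int(round(fps * interval_s)))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield frame  # one to-be-processed image
        idx += 1
    cap.release()
```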
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011389216.3A CN113762013B (en) | 2020-12-02 | 2020-12-02 | Method and device for face recognition |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113762013A (en) | 2021-12-07 |
| CN113762013B (en) | 2024-09-24 |
Family
ID=78786137
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011389216.3A CN113762013B (en) | Method and device for face recognition | 2020-12-02 | 2020-12-02 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113762013B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120183078B (en) * | 2025-03-20 | 2025-09-23 | Fourth Medical Center of Chinese PLA General Hospital | Login verification system and method based on digital ambulance |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108171207A (en) * | 2018-01-17 | 2018-06-15 | Baidu Online Network Technology (Beijing) Co., Ltd. | Face identification method and device based on video sequence |
| CN110705478A (en) * | 2019-09-30 | 2020-01-17 | Tencent Technology (Shenzhen) Co., Ltd. | Face tracking method, device, equipment and storage medium |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9589179B2 (en) * | 2013-12-19 | 2017-03-07 | Microsoft Technology Licensing, LLC | Object detection techniques |
| CN109829436B (en) * | 2019-02-02 | 2022-05-13 | Fuzhou University | A Multi-Face Tracking Method Based on Deep Apparent Features and Adaptive Aggregation Networks |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113762013A (en) | 2021-12-07 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |