CN110502992B - A fast face recognition method for fixed scene video based on relational graph - Google Patents
A fast face recognition method for fixed scene video based on relational graph
- Publication number
- CN110502992B (application CN201910651569.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- fixed scene
- scene video
- recognized
- Prior art date: 2019-07-18
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention relates to a method for rapidly recognizing faces in fixed scene video based on a relation graph. The technical scheme is as follows: face images and non-face images are cut from the fixed scene video and, after gray-level transformation, the resulting gray images form a training data set D2. A strong classifier h(x) is trained on all images in D2. The faces in n fixed scene videos of the same length are processed with a graph algorithm to obtain a relation graph of the people in the videos. The strong classifier h(x) performs face detection on each frame of the fixed scene video to be recognized; the detected faces are matched against the face images in the face sample library D, and the matching degree priority queues of the unrecognized faces are updated with the relation graph. If the maximum matching degree in an updated priority queue is greater than the matching degree threshold T1, recognition succeeds; otherwise recognition is abandoned. The invention offers real-time monitoring, a small parameter count and high face recognition accuracy.
Description
Technical Field
The invention belongs to the technical field of fast face recognition, and in particular relates to a method for rapidly recognizing faces in fixed scene video based on a relation graph.
Background Art
Fixed scene video devices occupy little space, are easy to install, and offer high precision, good real-time performance and low energy consumption, which has made them the preferred basis for building security systems.
A fixed scene video is a video shot by a device whose field of view covers the people or objects appearing against a specific background environment, for example security footage or an attendance system. At present, monitoring and security systems still require staff to watch a video monitoring platform manually. Faced with large volumes of fixed scene video data, staff rarely have the energy and time to observe the specific content of the footage, and their observations are strongly affected by subjective factors, so the monitoring results are inaccurate and timely countermeasures cannot be made. Fast-moving targets, such as a person running quickly through the monitored area, are difficult to identify, and visual inspection consumes large amounts of manpower and material resources. Many face detection algorithms based on deep learning and machine learning also exist, but because they take many additional factors into account, such as the environmental background, they gain robustness at the cost of a large parameter count and are easily disturbed by irrelevant factors.
The patent "Face recognition optimization method based on the Adaboost algorithm" (CN201510203079) requires training on a large number of face sample images; although its accuracy is high, its parameter count is enormous and real-time operation is difficult to achieve. The patent "Face recognition method and apparatus" (CN201410602236) takes environmental background factors into account, such as the clothing of the person and the time the photo was taken, which increases the data volume, introduces irrelevant features and ultimately interferes with the face recognition result. The patent "Face recognition method" (CN201310748379) uses too many face parameters, which greatly reduces the efficiency of the algorithm, so real-time operation cannot be achieved.
Object of the Invention
The invention aims to overcome the defects of the prior art and provides a relation-graph-based fast face recognition method for fixed scene video that offers real-time monitoring, a small parameter count and high face recognition accuracy.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
Step 1, data preprocessing
M face images and L non-face images are cut from the fixed scene video to form a training data set D2, where each face image represents one face. First, the N images in D2 are converted one by one into C × C gray images, where N = M + L. K2 Gabor kernels are formed from K different scale coefficients and K1 different rotation angles; the gray images are decomposed with these K2 Gabor kernels, so that each image in D2 is decomposed into K2 matrices of size C × C, where K2 = K1 × K and C, K, K1 and K2 are natural numbers. All face images in D2 form the face sample library D, and all face images are calibrated.
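As an illustration of step 1, the Gabor decomposition can be sketched in Python with OpenCV as follows. The kernel sizes, σ, λ and the concrete choices C = 16, K = 5 scale coefficients and K1 = 5 rotation angles are assumptions made for this example; the patent fixes only the structure K2 = K × K1.

```python
import cv2
import numpy as np

C = 16                                       # side length of the gray images (assumed value)
scales = [3, 5, 7, 9, 11]                    # K = 5 kernel sizes acting as scale coefficients (assumed)
angles = [k * np.pi / 5 for k in range(5)]   # K1 = 5 rotation angles (assumed)

def gabor_decompose(img_path):
    """Convert one image to a C x C gray image and decompose it with K2 = K * K1 Gabor kernels."""
    gray = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, (C, C))
    matrices = []
    for ksize in scales:
        for theta in angles:
            kern = cv2.getGaborKernel((ksize, ksize), sigma=2.0, theta=theta,
                                      lambd=4.0, gamma=0.5)
            matrices.append(cv2.filter2D(gray, cv2.CV_32F, kern))  # one C x C matrix per kernel
    return np.stack(matrices)                # shape (K2, C, C), here (25, 16, 16)
```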
Step 2, training process of face detector
Step 2.1, establish training samples (x_p, y_p), p = 1, ..., N.
x_p represents the K2 matrices of size C × C obtained by Gabor decomposition of the p-th sample in the training data set D2;
y_p indicates whether the p-th sample in the training data set D2 is a face image:
if x_p is a face image, then y_p = 1; if x_p is a non-face image, then y_p = -1.
p = 1, ..., N;
q = 1, ..., K2;
N is the sequence number of the last image in the training data set D2;
K2 is the sequence number of the last feature of each training sample.
Step 2.3, let f_j^i denote the j-th feature of the i-th training sample. Using the j-th feature f_j of every training sample, find the threshold parameter θ_j and class parameter γ_j of the optimal weak classifier h_j(f_j, θ_j, γ_j) so that the classification error e_j is minimal:

h_j(f_j; θ_j, γ_j) = γ_j if f_j < θ_j, and -γ_j otherwise (1)

e_j = Σ_{i=1}^{N} w_j^i ε_j^i (2)

In formulas (1) and (2): θ_j represents the threshold parameter of the j-th weak classifier;
γ_j represents the class parameter of the j-th weak classifier;
ε_j^i indicates whether the classifier h_j(f_j; θ_j, γ_j) classifies the j-th feature of the i-th training sample incorrectly:

ε_j^i = 0 if y_i = h_j(f_j^i; θ_j, γ_j), and ε_j^i = 1 if y_i ≠ h_j(f_j^i; θ_j, γ_j) (3)

In formula (3): y_i = h_j(f_j^i; θ_j, γ_j) indicates correct classification;
y_i ≠ h_j(f_j^i; θ_j, γ_j) indicates a classification error.
Training on the j-th feature f_j of every training sample yields the optimal weak classifier h_j(f_j, θ_j, γ_j):

(θ_j, γ_j) = argmin over (θ, γ) of Σ_{i=1}^{N} w_j^i ε_j^i (4)

In formula (4): θ_j represents the threshold parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
γ_j represents the class parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization.
Step 2.4, update the weight of each feature for the next iteration.
If the j-th feature f_j^i of the i-th training sample lets the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) recognize the i-th training sample correctly, the weight of the (j+1)-th feature of the i-th training sample in the next iteration is w_{j+1}^i = w_j^i · e_j / (1 - e_j).
If the j-th feature f_j^i of the i-th training sample does not let the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) recognize the i-th training sample correctly, the weight of the (j+1)-th feature of the i-th training sample in the next iteration is w_{j+1}^i = w_j^i.
In step 2.4: e_j represents the classification error of h_j(f_j; θ_j, γ_j).
Step 2.5, repeat step 2.3 and step 2.4 until all K2 features have been traversed, giving the final strong classifier h(x):

h(x) = 1 if Σ_{j=1}^{K2} α_j h_j(f_j; θ_j, γ_j) ≥ 0, and -1 otherwise (5)

α_j = ln((1 - e_j) / e_j) (6)

In formulas (5) and (6): x represents one frame of image in the fixed scene video to be recognized;
f_j represents the j-th of the K2 features obtained by decomposing x with the K2 Gabor kernels;
θ_j represents the threshold parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
γ_j represents the class parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
α_j represents the weight of the j-th weak classifier h_j(f_j, θ_j, γ_j) in the final strong classifier h(x);
e_j represents the classification error of h_j(f_j; θ_j, γ_j).
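A minimal sketch of this training loop (steps 2.1 to 2.5) is given below. It assumes each Gabor response matrix has already been reduced to one scalar feature per sample (for example its mean response), and that the sample weights are initialized uniformly; the reduction, the initialization and the function names are assumptions of this example, while the update rule follows the weak-classifier, error and weight formulas described above.

```python
import numpy as np

def train_strong_classifier(F, y):
    """F: (N, K2) scalar features, y: (N,) labels in {+1, -1}.
    Returns one (theta, gamma, alpha) weak classifier per feature index."""
    N, K2 = F.shape
    w = np.full(N, 1.0 / N)                    # initial sample weights (assumption: uniform)
    weak = []
    for j in range(K2):
        best = None
        for theta in np.unique(F[:, j]):       # candidate thresholds from the data
            for gamma in (1, -1):              # class parameter
                pred = np.where(F[:, j] < theta, gamma, -gamma)
                e = np.sum(w * (pred != y))    # weighted classification error, formula (2)
                if best is None or e < best[0]:
                    best = (e, theta, gamma, pred)
        e, theta, gamma, pred = best
        e = min(max(e, 1e-10), 1 - 1e-10)      # keep the error away from 0 and 1
        alpha = np.log((1 - e) / e)            # weight of this weak classifier, formula (6)
        # step 2.4: correctly classified samples have their weight multiplied by e/(1-e)
        w = np.where(pred == y, w * e / (1 - e), w)
        w /= w.sum()                           # renormalize the weights
        weak.append((theta, gamma, alpha))
    return weak

def strong_classify(weak, f):
    """f: (K2,) scalar features of one window; returns +1 (face) or -1 (non-face), formula (5)."""
    s = sum(alpha * (gamma if f[j] < theta else -gamma)
            for j, (theta, gamma, alpha) in enumerate(weak))
    return 1 if s >= 0 else -1
```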
Step 3, fast face classification process
Step 3.1, perform face detection on one frame of image in the fixed scene video to be recognized with the strong classifier h(x) obtained in step 2; the detected faces form the set of face images to be recognized, D3.
Step 3.2, perform gray-level histogram transformation on the K2 features of every face image in D3 to obtain the gray-level histogram set X of all face images in D3.
Step 3.3, using the graph-based relation-graph fast classification algorithm, perform nearest-neighbor matching between the histogram set X and the gray-level histograms of the K2 features of each face image in the face sample library D, to obtain the types of all face images to be recognized in D3.
Step 3.4, repeat steps 3.1 to 3.3 until every frame of the fixed scene video to be recognized has been processed.
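The histogram matching of steps 3.2 and 3.3 might look as follows; the 32-bin histograms and the correlation metric used as the matching degree are assumptions of this sketch, since the patent does not fix a particular score definition.

```python
import cv2
import numpy as np

def feature_histograms(feats):
    """feats: (K2, C, C) Gabor responses of one face; returns K2 gray-level histograms."""
    hists = []
    for m in feats:
        g = cv2.normalize(m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        h = cv2.calcHist([g], [0], None, [32], [0, 256])   # 32 bins (assumed)
        hists.append(cv2.normalize(h, None).flatten())
    return hists

def matching_degree(hists_a, hists_b):
    """Average correlation over the K2 per-feature histograms (assumed score definition)."""
    return float(np.mean([cv2.compareHist(a.astype(np.float32), b.astype(np.float32),
                                          cv2.HISTCMP_CORREL)
                          for a, b in zip(hists_a, hists_b)]))
```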
The graph-based relation-graph fast classification algorithm comprises the following specific steps:
Step 1, build the relation graph of the people in the fixed scene video with a graph algorithm
Substep 1.1, build a face recognition training data set D1 from n fixed scene videos of the same length; D1 contains M face types in total. Build a relation matrix U of size M × M and initialize it to all zeros; both the rows and the columns of U represent the face types in the fixed scene video.
Substep 1.2, take the J-th fixed scene video in D1 and sample one frame every t frames; the J-th video then yields T fixed scene video images in total, which form the fixed scene video image set Q_J, 1 ≤ J ≤ n.
Substep 1.3, take one frame O_m from Q_J, 1 ≤ m ≤ T, and calibrate a coordinate system on O_m: the lower-left corner is the origin, the left edge is the positive Y axis, and the lower edge is the positive X axis of a rectangular coordinate system.
Substep 1.4, detect the face regions in O_m with the strong classifier h(x) and calibrate the detected face regions manually; each face is labeled with a number k, where k = 1, 2, ..., M.
Substep 1.5, record the center point of each calibrated and numbered face region in O_m as (a_k, b_k), where k is the number of that face.
Substep 1.6, first set a distance threshold d, then compute the distance l between the center points of any two face regions in O_m; if l is greater than the threshold d, set l to infinity; if l is smaller than d, leave l unchanged.
Substep 1.7, if the distance l is infinite, set 1/l = 0; add the reciprocal 1/l of the distance l to the corresponding position of the relation matrix U.
Substep 1.8, repeat substeps 1.6 and 1.7 until the distance l between the center points of every pair of face regions in O_m has been computed.
Substep 1.9, repeat substeps 1.3 to 1.8 until the distance l between the center points of every pair of face regions in every frame of Q_J has been computed.
Substep 1.10, process all n fixed scene videos in D1 in the same way as substeps 1.2 to 1.9 to obtain the final relation matrix U of size M × M, and normalize the values in U. The larger a value in U, the closer the relationship between the people represented by its row and column; the relation matrix U is the resulting relation graph.
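A sketch of the relation-graph construction of substeps 1.1 to 1.10: the face numbers and center points of each sampled frame are assumed to be already available from the detector plus manual calibration, and max-scaling is assumed for the final normalization, which the patent leaves unspecified.

```python
import numpy as np

def build_relation_graph(frames, M, d):
    """frames: iterable of sampled frames; each frame is a list of (k, a, b) tuples
    with face number k (0-based here) and center point (a, b).
    Returns the M x M relation graph U."""
    U = np.zeros((M, M))
    for frame in frames:
        for i in range(len(frame)):
            for j in range(i + 1, len(frame)):
                k1, a1, b1 = frame[i]
                k2, a2, b2 = frame[j]
                l = np.hypot(a1 - a2, b1 - b2)   # distance between the two face centers
                if l < d:                        # l > d is treated as infinite, so 1/l = 0
                    U[k1, k2] += 1.0 / l
                    U[k2, k1] += 1.0 / l
    return U / U.max() if U.max() > 0 else U     # normalization (assumed: max-scaling)
```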
Step 2, classify the faces quickly with the relation graph
Substep 2.1, perform nearest-neighbor matching between the gray-level histogram set X of the face images to be recognized and the gray-level histograms of the K2 features of each face image in the face sample library D, obtaining a set of matching degree priority queues P = (P_1, P_2, ..., P_r) ordered by matching degree from large to small, where r is the number of face images in the set D3 to be recognized; P_b = (g_{b,1}, g_{b,2}, ..., g_{b,M}), where g is the matching degree value, M is the number of face types in the face sample library D, and b indexes the b-th face in D3, 1 ≤ b ≤ r.
If the maximum matching degree g_{b max} in the priority queue P_b of the b-th face to be recognized is greater than the set matching degree threshold T1, that face is recognized as the face type corresponding to g_{b max}. Likewise, every face whose maximum matching degree among g_{1 max}, g_{2 max}, ..., g_{r max} is greater than or equal to the threshold T1 is recognized as the corresponding face type; these recognized faces form the recognized face data set R1, and the remaining face images form the candidate face set R2.
Substep 2.2, for the face images in the candidate face set R2, use the relation graph U obtained in step 1 and the faces in the recognized face data set R1 to update all matching degree priority queues P_1, P_2, ..., P_c in R2 on the basis of the priority queue set P, where c is the number of faces in R2. The matching degree g_{c,I} in a priority queue P_c is increased as follows:

g̃_{c,I} = g_{c,I} + Σ_z U(u_I, v_z) (7)

In formula (7): U represents the relation graph obtained in step 1;
g_{c,I} represents the matching degree, before updating, between the c-th candidate face image in R2 and the I-th face image in the recognized face data set R1;
g̃_{c,I} represents the matching degree, after updating, between the c-th candidate face image in R2 and the I-th face image in the recognized face data set R1;
u_I represents the face type to which the matching degree g_{c,I} belongs, I = 1, 2, ..., M;
v_z represents a face type in the recognized face data set R1, 1 ≤ z < M.
Substep 2.3, for the updated matching degree priority queues P_1, P_2, ..., P_c of the face images in the candidate face set R2: every face whose maximum matching degree among g_{1 max}, g_{2 max}, ..., g_{c max} is greater than or equal to the threshold T1 is recognized as the corresponding face type; the successfully recognized face images are removed from R2 and added to the recognized face data set R1, leaving c1 face images to be recognized in R2, with 0 < c1 < c.
Substep 2.4, repeat substeps 2.2 and 2.3 until the matching degree priority queues P_1, P_2, ..., P_{c1} in the candidate face set R2 are no longer updated. If the maximum matching degrees g_{1 max}, g_{2 max}, ..., g_{c1 max} of the final priority queues are still smaller than the threshold T1, recognition of the remaining face images in R2 is abandoned. This completes the recognition of the face images to be recognized in the frame.
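A sketch of the iterative classification of substeps 2.1 to 2.4, using the update rule of formula (7); the data layout and the interpretation of "queues no longer updated" as a fixed point of the recognized set are assumptions made for this example.

```python
import numpy as np

def classify_with_relation_graph(scores, U, T1):
    """scores: (r, M) matching degrees of the r faces in one frame against M face types;
    U: (M, M) relation graph; T1: matching degree threshold.
    Returns, per face, the recognized type index or None (recognition abandoned)."""
    r, M = scores.shape
    result = [None] * r
    recognized = set()
    # substep 2.1: accept every face whose best matching degree clears the threshold T1
    for b in range(r):
        if scores[b].max() >= T1:
            result[b] = int(scores[b].argmax())
            recognized.add(result[b])
    # substeps 2.2-2.4: boost candidate queues with the relation graph and retry
    while True:
        changed = False
        for b in range(r):
            if result[b] is not None:
                continue
            # formula (7): add to each type I its total relation to the recognized types
            boosted = scores[b] + np.array([sum(U[I, v] for v in recognized)
                                            for I in range(M)])
            if boosted.max() >= T1:
                result[b] = int(boosted.argmax())
                recognized.add(result[b])
                changed = True
        if not changed:       # queues no longer update: abandon the remaining faces
            return result
```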
Due to the adoption of the technical scheme, the invention has the following positive effects:
the invention utilizes the relational atlas technology of the graph to model the types of the human faces in the fixed scene video, excavate the internal connection of different human faces, and does not need to carry out special training and learning on different human faces, thereby greatly reducing the calculated amount, complexity and parameter quantity of the recognition algorithm, shortening the processing time, having small parameter quantity and realizing real-time monitoring.
The relation atlas-based fixed scene video face rapid identification method is simple to use, and can rapidly and accurately identify the face in the fixed scene video to be identified, such as a monitoring video, only by requiring a worker to mark different faces in advance.
The method can quickly identify the designated face in the massive offline fixed scene videos, and has the function of re-identifying the historical fixed scene videos.
The method can also quickly judge the faces which never appear in the fixed scene video, gives an alarm in a short time, and can be used for community security and personnel investigation.
The relation map technology established by the invention can dig out the internal connection of different faces through video data, can also establish the social network among people, and can be used for anti-fraud technology. In anti-fraud, the characteristics of group crime often appear, so that the clustering of people on the network can be analyzed through the established social network, and the method has strong guidance for researching commonalities among people and researching the characteristics of people in the social network.
Therefore, the invention has the characteristics of real-time monitoring, small parameter quantity and high accuracy of face recognition.
Detailed Description
The invention is further described with reference to specific embodiments, without limiting its scope.
Example 1
A fast face recognition method for fixed scene video based on a relation graph. The method comprises the following steps:
Step 1, data preprocessing
20 face images and 40 non-face images are cut from the fixed scene video to form a training data set D2, wherein one face image represents one face. Firstly, converting 60 images in a training data set D2 into 16 x 16 gray images one by one, wherein 60 is 40+ 20; forming 25 Gabor kernels by using 5 different scale coefficients and 5 different rotation angles, decomposing the gray level images by using the 25 Gabor kernels, and decomposing each image in the training data set D2 into 25 matrixes with the size of 16 multiplied by 16; wherein, 25 is 5 × 5; and (4) forming a face sample library D by all the face images in the training data set D2, and calibrating all the face images.
Step 2, training process of face detector
Step 2.1, establish training samples (x_p, y_p), p = 1, ..., 60.
x_p represents the 25 matrices of size 16 × 16 obtained by Gabor decomposition of the p-th sample in the training data set D2;
y_p indicates whether the p-th sample in the training data set D2 is a face image:
if x_p is a face image, then y_p = 1; if x_p is a non-face image, then y_p = -1.
p = 1, ..., 60;
q = 1, ..., 25;
60 is the sequence number of the last image in the training data set D2;
25 is the sequence number of the last feature of each training sample.
Step 2.3, let f_10^30 denote the 10th feature of the 30th training sample. Using the 10th feature f_10 of every training sample, find the threshold parameter θ_10 and class parameter γ_10 of the optimal weak classifier h_10(f_10, θ_10, γ_10) so that the classification error e_10 is minimal.
In formulas (1) and (2): θ_10 represents the threshold parameter of the 10th weak classifier;
γ_10 represents the class parameter of the 10th weak classifier;
ε_10^i indicates whether the classifier h_10(f_10; θ_10, γ_10) classifies the 10th feature of the i-th training sample incorrectly.
In formula (3): y_i = h_10(f_10; θ_10, γ_10) indicates correct classification;
y_i ≠ h_10(f_10; θ_10, γ_10) indicates a classification error.
Training on the 10th feature f_10 of every training sample yields the optimal weak classifier h_10(f_10, θ_10, γ_10).
In formula (4): θ_10 represents the threshold parameter of the 10th optimal weak classifier h_10(f_10; θ_10, γ_10) obtained by optimization;
γ_10 represents the class parameter of the 10th optimal weak classifier h_10(f_10; θ_10, γ_10) obtained by optimization.
Step 2.4, update the weight of each feature for the next iteration.
If the 10th feature f_10^30 of the 30th training sample lets the 10th optimal weak classifier h_10(f_10; θ_10, γ_10) recognize the 30th training sample correctly, the weight of the 11th feature of the 30th training sample in the next iteration is w_11^30 = w_10^30 · e_10 / (1 - e_10).
If the 10th feature f_10^30 of the 30th training sample does not let the 10th optimal weak classifier h_10(f_10; θ_10, γ_10) recognize the 30th training sample correctly, the weight of the 11th feature of the 30th training sample in the next iteration is w_11^30 = w_10^30.
In step 2.4: e_10 represents the classification error of h_10(f_10; θ_10, γ_10).
Step 2.5, repeat steps 2.3 and 2.4 until all 25 features have been traversed, giving the final strong classifier h(x).
In formulas (5) and (6): x represents one frame of image in the fixed scene video to be recognized;
f_j represents the j-th of the 25 features obtained by decomposing x with the 25 Gabor kernels;
θ_j represents the threshold parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
γ_j represents the class parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
α_j represents the weight of the j-th weak classifier h_j(f_j, θ_j, γ_j) in the final strong classifier h(x);
e_j represents the classification error of h_j(f_j; θ_j, γ_j).
Step 3, fast face classification process
Step 3.1, perform face detection on one frame of image in the fixed scene video to be recognized with the strong classifier h(x) obtained in step 2; the detected faces form the set of face images to be recognized, D3.
Step 3.2, perform gray-level histogram transformation on the 25 features of every face image in D3 to obtain the gray-level histogram set X of all face images in D3.
Step 3.3, using the graph-based relation-graph fast classification algorithm, perform nearest-neighbor matching between the histogram set X and the gray-level histograms of the 25 features of each face image in the face sample library D, to obtain the types of all face images to be recognized in D3.
Step 3.4, repeat steps 3.1 to 3.3 until every frame of the fixed scene video to be recognized has been processed.
The graph-based relation-graph fast classification algorithm comprises the following specific steps:
Step 1, build the relation graph of the people in the fixed scene video with a graph algorithm
Substep 1.1, build a face recognition training data set D1 from 15 fixed scene videos of the same length; D1 contains 20 face types in total. Build a relation matrix U of size 20 × 20 and initialize it to all zeros; both the rows and the columns of U represent the face types in the fixed scene video.
Substep 1.2, take the 10th fixed scene video in D1 and sample one frame every 5 frames; the 10th video then yields 80 fixed scene video images in total, which form the fixed scene video image set Q_10.
Substep 1.3, take one frame O_40 from Q_10 and calibrate a coordinate system on O_40: the lower-left corner is the origin, the left edge is the positive Y axis, and the lower edge is the positive X axis of a rectangular coordinate system.
Substep 1.4, detect the face regions in O_40 with the strong classifier h(x) and calibrate the detected face regions manually; each face is assigned a number k, where k = 1, 2, ..., 20.
Substep 1.5, record the center point of each calibrated and numbered face region in O_40 as (a_k, b_k), where k is the number of that face.
Substep 1.6, first set the distance threshold d = 80, then compute the distance l between the center points of two face regions in O_40, here l = 59; since l = 59 is smaller than the threshold d = 80, the distance l is left unchanged.
Substep 1.7, add the reciprocal 1/59 of the distance l = 59 to the corresponding position of the relation matrix U.
Substep 1.8, repeat substeps 1.6 and 1.7 until the distance l between the center points of every pair of face regions in O_40 has been computed.
Substep 1.9, repeat substeps 1.3 to 1.8 until the distance l between the center points of every pair of face regions in every frame of Q_10 has been computed.
Substep 1.10, process all 15 fixed scene videos in D1 in the same way as substeps 1.2 to 1.9 to obtain the final relation matrix U of size 20 × 20, and normalize the values in U. The larger a value in U, the closer the relationship between the people represented by its row and column; the relation matrix U is the resulting relation graph.
Step 2, classify the faces quickly with the relation graph
Substep 2.1, perform nearest-neighbor matching between the gray-level histogram set X of the face images to be recognized and the gray-level histograms of the 25 features of each face image in the face sample library D, obtaining a set of matching degree priority queues P = (P_1, P_2, ..., P_30) ordered by matching degree from large to small, where 30 is the number of face images in the set D3 to be recognized; P_16 = (g_{16,1}, g_{16,2}, ..., g_{16,20}), where g is the matching degree value, 20 is the number of face types in the face sample library D, and 16 indexes the 16th face in D3.
If the maximum matching degree g_{16 max} in the priority queue P_16 of the 16th face to be recognized is greater than the set matching degree threshold 0.85, that face is recognized as the face type corresponding to g_{16 max}. Likewise, every face whose maximum matching degree among g_{1 max}, g_{2 max}, ..., g_{30 max} is greater than or equal to the threshold 0.85 is recognized as the corresponding face type; these recognized faces form the recognized face data set R1, and the remaining face images form the candidate face set R2.
Substep 2.2, for the face images in the candidate face set R2, use the relation graph U obtained in step 1 and the faces in the recognized face data set R1 to update all matching degree priority queues P_1, P_2, ..., P_10 in R2 on the basis of the priority queue set P, where 10 is the number of faces in R2. The matching degree g_{10,5} in the priority queue P_10 is increased according to formula (7), where:
U represents the relation graph obtained in step 1;
g_{10,5} represents the matching degree, before updating, between the 10th candidate face image in R2 and the 5th face image in the recognized face data set R1;
g̃_{10,5} represents the matching degree, after updating, between the 10th candidate face image in R2 and the 5th face image in the recognized face data set R1;
u_5 represents the face type to which the matching degree g_{10,5} belongs;
v_z represents a face type in the recognized face data set R1, 1 ≤ z < 20.
Substep 2.3, for the updated matching degree priority queues P_1, P_2, ..., P_10 of the face images in the candidate face set R2: every face whose maximum matching degree among g_{1 max}, g_{2 max}, ..., g_{10 max} is greater than or equal to the threshold 0.85 is recognized as the corresponding face type; the successfully recognized face images are removed from R2 and added to the recognized face data set R1, leaving 3 face images to be recognized in R2.
Substep 2.4, repeat substeps 2.2 and 2.3 until the matching degree priority queues P_1, P_2, P_3 in the candidate face set R2 are no longer updated. Since the maximum matching degrees g_{1 max}, g_{2 max}, g_{3 max} of the final priority queues are still smaller than the threshold 0.85, recognition of the remaining face images in R2 is abandoned. This completes the recognition of the face images to be recognized in the frame.
Example 2
A fast face recognition method for fixed scene video based on a relation graph. The method comprises the following steps:
Step 1, data preprocessing
30 face images and 43 non-face images are cut from the fixed scene video to form a training data set D2, where each face image represents one face. First, the 73 images in D2 are converted one by one into 32 × 32 gray images, where 73 = 30 + 43. 28 Gabor kernels are formed from 4 different scale coefficients and 7 different rotation angles; the gray images are decomposed with these 28 Gabor kernels, so that each image in D2 is decomposed into 28 matrices of size 32 × 32, where 28 = 4 × 7. All face images in D2 form the face sample library D, and all face images are calibrated.
Step 2, training process of face detector
Step 2.1, establish training samples (x_p, y_p), p = 1, ..., 73.
x_p represents the 28 matrices of size 32 × 32 obtained by Gabor decomposition of the p-th sample in the training data set D2;
y_p indicates whether the p-th sample in the training data set D2 is a face image:
if x_p is a face image, then y_p = 1; if x_p is a non-face image, then y_p = -1.
p = 1, ..., 73;
q = 1, ..., 28;
73 is the sequence number of the last image in the training data set D2;
28 is the sequence number of the last feature of each training sample.
Step 2.3, let f_14^35 denote the 14th feature of the 35th training sample. Using the 14th feature f_14 of every training sample, find the threshold parameter θ_14 and class parameter γ_14 of the optimal weak classifier h_14(f_14, θ_14, γ_14) so that the classification error e_14 is minimal.
In formulas (1) and (2): θ_14 represents the threshold parameter of the 14th weak classifier;
γ_14 represents the class parameter of the 14th weak classifier;
ε_14^i indicates whether the classifier h_14(f_14; θ_14, γ_14) classifies the 14th feature of the i-th training sample incorrectly.
In formula (3): y_i = h_14(f_14; θ_14, γ_14) indicates correct classification;
y_i ≠ h_14(f_14; θ_14, γ_14) indicates a classification error.
Training on the 14th feature f_14 of every training sample yields the optimal weak classifier h_14(f_14, θ_14, γ_14).
In formula (4): θ_14 represents the threshold parameter of the 14th optimal weak classifier h_14(f_14; θ_14, γ_14) obtained by optimization;
γ_14 represents the class parameter of the 14th optimal weak classifier h_14(f_14; θ_14, γ_14) obtained by optimization.
Step 2.4, update the weight of each feature for the next iteration.
If the 14th feature f_14^35 of the 35th training sample lets the 14th optimal weak classifier h_14(f_14; θ_14, γ_14) recognize the 35th training sample correctly, the weight of the 15th feature of the 35th training sample in the next iteration is w_15^35 = w_14^35 · e_14 / (1 - e_14).
If the 14th feature f_14^35 of the 35th training sample does not let the 14th optimal weak classifier h_14(f_14; θ_14, γ_14) recognize the 35th training sample correctly, the weight of the 15th feature of the 35th training sample in the next iteration is w_15^35 = w_14^35.
In step 2.4: e_14 represents the classification error of h_14(f_14; θ_14, γ_14).
Step 2.5, repeat steps 2.3 and 2.4 until all 28 features have been traversed, giving the final strong classifier h(x).
In formulas (5) and (6): x represents one frame of image in the fixed scene video to be recognized;
f_j represents the j-th of the 28 features obtained by decomposing x with the 28 Gabor kernels;
θ_j represents the threshold parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
γ_j represents the class parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
α_j represents the weight of the j-th weak classifier h_j(f_j, θ_j, γ_j) in the final strong classifier h(x);
e_j represents the classification error of h_j(f_j; θ_j, γ_j).
Step 3, fast face classification process
Step 3.1, perform face detection on one frame of image in the fixed scene video to be recognized with the strong classifier h(x) obtained in step 2; the detected faces form the set of face images to be recognized, D3.
Step 3.2, perform gray-level histogram transformation on the 28 features of every face image in D3 to obtain the gray-level histogram set X of all face images in D3.
Step 3.3, using the graph-based relation-graph fast classification algorithm, perform nearest-neighbor matching between the histogram set X and the gray-level histograms of the 28 features of each face image in the face sample library D, to obtain the types of all face images to be recognized in D3.
Step 3.4, repeat steps 3.1 to 3.3 until every frame of the fixed scene video to be recognized has been processed.
The graph-based relation-graph fast classification algorithm comprises the following specific steps:
Step 1, build the relation graph of the people in the fixed scene video with a graph algorithm
Substep 1.1, build a face recognition training data set D1 from 40 fixed scene videos of the same length; D1 contains 30 face types in total. Build a relation matrix U of size 30 × 30 and initialize it to all zeros; both the rows and the columns of U represent the face types in the fixed scene video.
Substep 1.2, take the 20th fixed scene video in D1 and sample one frame every 15 frames; the 20th video then yields 18 fixed scene video images in total, which form the fixed scene video image set Q_20.
Substep 1.3, take one frame O_10 from Q_20 and calibrate a coordinate system on O_10: the lower-left corner is the origin, the left edge is the positive Y axis, and the lower edge is the positive X axis of a rectangular coordinate system.
Substep 1.4, detect the face regions in O_10 with the strong classifier h(x) and calibrate the detected face regions manually; each face is assigned a number k, where k = 1, 2, ..., 30.
Substep 1.5, record the center point of each calibrated and numbered face region in O_10 as (a_k, b_k), where k is the number of that face.
Substep 1.6, first set the distance threshold d = 70, then compute the distance l between the center points of any two face regions in O_10, here l = 83; since l = 83 is larger than the threshold d = 70, the distance l is set to infinity.
Substep 1.7, since the distance l is infinite, set 1/l = 0 and add 1/l = 0 to the corresponding position of the relation matrix U.
Substep 1.8, repeat substeps 1.6 and 1.7 until the distance l between the center points of every pair of face regions in O_10 has been computed.
Substep 1.9, repeat substeps 1.3 to 1.8 until the distance l between the center points of every pair of face regions in every frame of Q_20 has been computed.
Substep 1.10, process all 40 fixed scene videos in D1 in the same way as substeps 1.2 to 1.9 to obtain the final relation matrix U of size 30 × 30, and normalize the values in U. The larger a value in U, the closer the relationship between the people represented by its row and column; the relation matrix U is the resulting relation graph.
Step 2, classify the faces quickly with the relation graph
Substep 2.1, perform nearest-neighbor matching between the gray-level histogram set X of the face images to be recognized and the gray-level histograms of the 28 features of each face image in the face sample library D, obtaining a set of matching degree priority queues P = (P_1, P_2, ..., P_70) ordered by matching degree from large to small, where 70 is the number of face images in the set D3 to be recognized; P_55 = (g_{55,1}, g_{55,2}, ..., g_{55,30}), where g is the matching degree value, 30 is the number of face types in the face sample library D, and 55 indexes the 55th face in D3.
If the maximum matching degree g_{55 max} in the priority queue P_55 of the 55th face to be recognized is greater than the set matching degree threshold 0.77, that face is recognized as the face type corresponding to g_{55 max}. Likewise, every face whose maximum matching degree among g_{1 max}, g_{2 max}, ..., g_{70 max} is greater than or equal to the threshold 0.77 is recognized as the corresponding face type; these recognized faces form the recognized face data set R1, and the remaining face images form the candidate face set R2.
Substep 2.2, for the face images in the candidate face set R2, use the relation graph U obtained in step 1 and the faces in the recognized face data set R1 to update all matching degree priority queues P_1, P_2, ..., P_38 in R2 on the basis of the priority queue set P, where 38 is the number of faces in R2. The matching degree g_{38,3} in the priority queue P_38 is increased according to formula (7), where:
U represents the relation graph obtained in step 1;
g_{38,3} represents the matching degree, before updating, between the 38th candidate face image in R2 and the 3rd face image in the recognized face data set R1;
g̃_{38,3} represents the matching degree, after updating, between the 38th candidate face image in R2 and the 3rd face image in the recognized face data set R1;
u_3 represents the face type to which the matching degree g_{38,3} belongs;
v_z represents a face type in the recognized face data set R1, 1 ≤ z < 30.
Substep 2.3, for the updated matching degree priority queues P_1, P_2, ..., P_38 of the face images in the candidate face set R2: every face whose maximum matching degree among g_{1 max}, g_{2 max}, ..., g_{38 max} is greater than or equal to the threshold 0.77 is recognized as the corresponding face type; the successfully recognized face images are removed from R2 and added to the recognized face data set R1, leaving 20 face images to be recognized in R2.
Substep 2.4, repeat substeps 2.2 and 2.3 until the matching degree priority queues P_1, P_2, ..., P_20 in the candidate face set R2 are no longer updated. Since the maximum matching degrees g_{1 max}, g_{2 max}, ..., g_{20 max} of the final priority queues are greater than the threshold 0.77, these faces are recognized as the face types corresponding to g_{1 max}, g_{2 max}, ..., g_{20 max}. This completes the recognition of the face images to be recognized in the frame.
This embodiment has the following positive effects:
The embodiment uses graph-based relation-graph technology to model the face types in the fixed scene video and to mine the intrinsic connections between different faces, without requiring special training or learning for each face. This greatly reduces the computation, complexity and parameter count of the recognition algorithm and shortens the processing time; the parameter count is small and real-time monitoring is possible.
The relation-graph-based fast face recognition method for fixed scene video is simple to use: staff only need to label the different faces in advance, after which faces in the fixed scene video to be recognized, such as surveillance video, can be recognized quickly and accurately.
The embodiment can quickly recognize a designated face in massive offline fixed scene videos and can re-examine historical fixed scene videos.
The embodiment can also quickly identify faces that have never appeared in the fixed scene video and raise an alarm within a short time, which is useful for community security and personnel screening.
The relation-graph technology established by this embodiment can mine the intrinsic connections between different faces from video data and can also build a social network between people, which can be used in anti-fraud work. Fraud often shows the characteristics of group crime, so the clustering of people in the network can be analyzed through the established social network; this provides strong guidance for studying the commonalities between people and the characteristics of people within a social network.
This embodiment therefore offers real-time monitoring, a small parameter count and high face recognition accuracy.
Claims (1)
1. A fast face recognition method for fixed scene video based on a relation graph, characterized in that the fast face recognition method for fixed scene video comprises the following specific steps:
Step 1, data preprocessing
M face images and L non-face images are cut from the fixed scene video to form a training data set D2, where each face image represents one face; first, the N images in D2 are converted one by one into C × C gray images, where N = M + L; K2 Gabor kernels are formed from K different scale coefficients and K1 different rotation angles, the gray images are decomposed with these K2 Gabor kernels, and each image in D2 is decomposed into K2 matrices of size C × C, where K2 = K1 × K and C, K, K1 and K2 are natural numbers; all face images in D2 form the face sample library D, and all face images are calibrated;
Step 2, training process of the face detector
Step 2.1, establish training samples (x_p, y_p), p = 1, ..., N;
x_p represents the K2 matrices of size C × C obtained by Gabor decomposition of the p-th sample in the training data set D2,
y_p indicates whether the p-th sample in the training data set D2 is a face image,
if x_p is a face image, then y_p = 1; if x_p is a non-face image, then y_p = -1;
p = 1, ..., N,
q = 1, ..., K2,
N is the sequence number of the last image in the training data set D2,
K2 is the sequence number of the last feature of each training sample;
Step 2.3, let f_j^i denote the j-th feature of the i-th training sample; using the j-th feature f_j of every training sample, find the threshold parameter θ_j and class parameter γ_j of the optimal weak classifier h_j(f_j, θ_j, γ_j) so that the classification error e_j is minimal:

h_j(f_j; θ_j, γ_j) = γ_j if f_j < θ_j, and -γ_j otherwise (1)

e_j = Σ_{i=1}^{N} w_j^i ε_j^i (2)

in formulas (1) and (2): θ_j represents the threshold parameter of the j-th weak classifier,
γ_j represents the class parameter of the j-th weak classifier,
ε_j^i indicates whether the classifier h_j(f_j; θ_j, γ_j) classifies the j-th feature of the i-th training sample incorrectly:

ε_j^i = 0 if y_i = h_j(f_j^i; θ_j, γ_j), and ε_j^i = 1 if y_i ≠ h_j(f_j^i; θ_j, γ_j) (3)

in formula (3): y_i = h_j(f_j^i; θ_j, γ_j) indicates correct classification,
y_i ≠ h_j(f_j^i; θ_j, γ_j) indicates a classification error;
training on the j-th feature f_j of every training sample yields the optimal weak classifier h_j(f_j, θ_j, γ_j):

(θ_j, γ_j) = argmin over (θ, γ) of Σ_{i=1}^{N} w_j^i ε_j^i (4)

in formula (4): θ_j represents the threshold parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization,
γ_j represents the class parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization;
Step 2.4, update the weight of each feature for the next iteration:
if the j-th feature f_j^i of the i-th training sample lets the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) recognize the i-th training sample correctly, the weight of the (j+1)-th feature of the i-th training sample in the next iteration is w_{j+1}^i = w_j^i · e_j / (1 - e_j);
if the j-th feature f_j^i of the i-th training sample does not let the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) recognize the i-th training sample correctly, the weight of the (j+1)-th feature of the i-th training sample in the next iteration is w_{j+1}^i = w_j^i;
in step 2.4: e_j represents the classification error of h_j(f_j; θ_j, γ_j);
Step 2.5, repeat step 2.3 and step 2.4 until all K2 features have been traversed, giving the final strong classifier h(x):

h(x) = 1 if Σ_{j=1}^{K2} α_j h_j(f_j; θ_j, γ_j) ≥ 0, and -1 otherwise (5)

α_j = ln((1 - e_j) / e_j) (6)

in formulas (5) and (6): x represents one frame of image in the fixed scene video to be recognized,
f_j represents the j-th of the K2 features obtained by decomposing x with the K2 Gabor kernels,
θ_j represents the threshold parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization,
γ_j represents the class parameter of the j-th optimal weak classifier h_j(f_j; θ_j, γ_j) obtained by optimization,
α_j represents the weight of the j-th weak classifier h_j(f_j, θ_j, γ_j) in the final strong classifier h(x),
e_j represents the classification error of h_j(f_j; θ_j, γ_j);
Step 3, fast face classification process
Step 3.1, perform face detection on one frame of image in the fixed scene video to be recognized with the strong classifier h(x) obtained in step 2; the detected faces form the set of face images to be recognized, D3;
Step 3.2, perform gray-level histogram transformation on the K2 features of every face image in D3 to obtain the gray-level histogram set X of all face images in D3;
Step 3.3, using the graph-based relation-graph fast classification algorithm, perform nearest-neighbor matching between the histogram set X and the gray-level histograms of the K2 features of each face image in the face sample library D, to obtain the types of all face images to be recognized in D3;
Step 3.4, repeat steps 3.1 to 3.3 until every frame of the fixed scene video to be recognized has been processed;
the graph-based relation-graph fast classification algorithm comprises the following specific steps:
Step 1, build the relation graph of the people in the fixed scene video with a graph algorithm
Substep 1.1, build a face recognition training data set D1 from n fixed scene videos of the same length, D1 containing M face types in total; build a relation matrix U of size M × M and initialize it to all zeros, where both the rows and the columns of U represent the face types in the fixed scene video;
Substep 1.2, take the J-th fixed scene video in D1 and sample one frame every t frames, so that the J-th video yields T fixed scene video images in total, which form the fixed scene video image set Q_J, 1 ≤ J ≤ n;
Substep 1.3, take one frame O_m from Q_J, 1 ≤ m ≤ T, and calibrate a coordinate system on O_m: the lower-left corner is the origin, the left edge is the positive Y axis, and the lower edge is the positive X axis of a rectangular coordinate system;
Substep 1.4, detect the face regions in O_m with the strong classifier h(x) and calibrate the detected face regions manually, each face being labeled with a number k, where k = 1, 2, ..., M;
Substep 1.5, record the center point of each calibrated and numbered face region in O_m as (a_k, b_k), where k is the number of each face;
Substep 1.6, first set a distance threshold d, then compute the distance l between the center points of any two face regions in O_m; if l is greater than the threshold d, set l to infinity; if l is smaller than the threshold d, leave l unchanged;
Substep 1.7, if the distance l is infinite, set 1/l = 0; add the reciprocal 1/l of the distance l to the corresponding position of the relation matrix U;
Substep 1.8, repeat substeps 1.6 and 1.7 until the distance l between the center points of every pair of face regions in O_m has been computed;
Substep 1.9, repeat substeps 1.3 to 1.8 until the distance l between the center points of every pair of face regions in every frame of Q_J has been computed;
Substep 1.10, process all n fixed scene videos in D1 in the same way as substeps 1.2 to 1.9 to obtain the final relation matrix U of size M × M; normalize the values in U, where the larger a value in U, the closer the relationship between the people represented by its row and column; the relation matrix U is the resulting relation graph;
2, rapidly classifying the human face by using the relation atlas step by step
Step 2.1, carrying out nearest neighbor matching on a gray level histogram set X of the face image to be recognized and gray level histograms of K2 features of each face image in a face sample library D to obtain a matching degree priority queue set P (P) with the matching degree from large to small1,P2,……,Pr) And r represents the number of face images in the face image set D3 to be recognized, Pb=(gb,1,gb,2,……,gb,M) G represents the value of the degree of matching, M represents the person in the face sample library DB represents the b-th face in the face image set D3 to be recognized, wherein b is more than or equal to 1 and less than or equal to r;
if the maximum matching degree g_{b,max} in the priority queue P_b of the b-th face to be recognized is greater than or equal to the set matching-degree threshold T1, recognizing that face as the face class corresponding to g_{b,max}; likewise, every face in the face images to be recognized whose maximum matching degree among g_{1,max}, g_{2,max}, ..., g_{r,max} is greater than or equal to the threshold T1 is recognized as the corresponding face class; the faces so recognized form the recognized face data set R1, and the remaining face images form the candidate face set R2;
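A sketch of step 2.1 under stated assumptions: histogram intersection is used as the matching-degree measure (the patent fixes no particular metric), and the per-face matching degrees are collected in an r × M array (all names are ours):

```python
import numpy as np

def gray_histogram(img, bins=256):
    """Normalized gray-level histogram of one face-image feature."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def matching_degree(hists_x, hists_d):
    """Aggregate histogram intersection over the K2 feature histograms."""
    return float(sum(np.minimum(hx, hd).sum()
                     for hx, hd in zip(hists_x, hists_d)))

def initial_classification(G, T1):
    """Split faces into the recognized set R1 and candidate set R2.

    G[b, I] holds g_{b,I+1}, the matching degree of face b against class I.
    """
    R1, R2 = {}, []
    for b in range(G.shape[0]):
        I = int(np.argmax(G[b]))   # class with the maximum matching degree
        if G[b, I] >= T1:
            R1[b] = I              # recognized: face b -> class I
        else:
            R2.append(b)           # below threshold T1: stays a candidate
    return R1, R2
```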
Step 2.2: for the face images in the candidate face set R2, using the relation map U obtained in step 1 and the faces in the recognized face data set R1 to update, on the basis of the matching-degree priority queue set P, the matching-degree priority queues P_1, P_2, ..., P_c of all face images in R2, where c is the number of faces in R2; the matching degree g_{c,I} in a priority queue is updated according to formula (7):

g'_{c,I} = g_{c,I} + Σ_z U(u_I, v_z)    (7)

in formula (7): U is the relation map obtained in step 1; g_{c,I} is the matching degree, before updating, of the c-th candidate face image in the candidate face set R2 for the I-th face class; g'_{c,I} is the matching degree of the c-th candidate face image after updating; u_I is the face class to which the matching degree g_{c,I} belongs, I = 1, 2, ..., M; and v_z are the face classes in the recognized face data set R1, 1 ≤ z < M;
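Under this reading of formula (7), the relation-map boost can be vectorized; a minimal sketch using the same array layout as above (names and layout are our assumptions):

```python
import numpy as np

def boost_matching_degrees(G2, U, recognized_classes):
    """Apply formula (7): add the relation-map affinity between each face
    class u_I and the already recognized classes v_z to every candidate row.

    G2: c x M array of matching degrees g_{c,I} for the candidate set R2
    U:  M x M normalized relation map
    recognized_classes: indices of the face classes v_z present in R1
    """
    boost = U[:, recognized_classes].sum(axis=1)  # sum_z U(u_I, v_z) per class
    return G2 + boost[np.newaxis, :]              # g'_{c,I} = g_{c,I} + boost
```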
Step 2.3: for the updated matching-degree priority queues P_1, P_2, ..., P_c of the face images in the candidate face set R2, every face whose maximum matching degree among g_{1,max}, g_{2,max}, ..., g_{c,max} is greater than or equal to the set matching-degree threshold T1 is recognized as the corresponding face class; removing the successfully recognized face images from the candidate face set R2 and adding them to the recognized face data set R1, the number of face images remaining to be recognized in R2 being c1, 0 < c1 < c;
Step 2.4: repeating steps 2.2 and 2.3 until the matching-degree priority queues P_1, P_2, ..., P_{c1} of the face images in the candidate face set R2 are no longer updated; if the maximum matching degrees g_{1,max}, g_{2,max}, ..., g_{c1,max} of the final queues are still smaller than the threshold T1, abandoning recognition of the remaining face images in R2; recognition of the face images to be recognized in this frame is then complete.
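Putting steps 2.2 to 2.4 together, one possible iteration loop (a sketch under the same assumptions as above, not the patent's reference implementation) looks like this:

```python
import numpy as np

def recognize_candidates(G2, U, R1_classes, T1):
    """Iterate steps 2.2-2.3: boost candidates via the relation map U,
    promote those reaching T1 into R1, stop when the queues stop changing."""
    candidates = list(range(G2.shape[0]))  # indices into the candidate set R2
    recognized = {}                        # candidate index -> face class
    while candidates:
        # formula (7): per-class boost from the classes already in R1
        boost = U[:, sorted(R1_classes)].sum(axis=1)
        promoted = []
        for c in candidates:
            scores = G2[c] + boost         # updated priority queue for face c
            I = int(np.argmax(scores))
            if scores[I] >= T1:
                recognized[c] = I          # recognized face joins R1
                R1_classes.add(I)
                promoted.append(c)
        if not promoted:                   # queues no longer updated (step 2.4):
            break                          # abandon the remaining candidates
        candidates = [c for c in candidates if c not in promoted]
    return recognized, candidates
```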
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910651569.7A CN110502992B (en) | 2019-07-18 | 2019-07-18 | A fast face recognition method for fixed scene video based on relational graph |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910651569.7A CN110502992B (en) | 2019-07-18 | 2019-07-18 | A fast face recognition method for fixed scene video based on relational graph |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110502992A CN110502992A (en) | 2019-11-26 |
| CN110502992B (en) | 2021-06-15 |
Family
ID=68586015
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910651569.7A Active CN110502992B (en) | A fast face recognition method for fixed scene video based on relational graph | 2019-07-18 | 2019-07-18 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110502992B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115050085B (en) * | 2022-08-15 | 2022-11-01 | 珠海翔翼航空技术有限公司 | Method, system and equipment for recognizing objects of analog machine management system based on map |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN100336070C (en) * | 2005-08-19 | 2007-09-05 | 清华大学 | Method of robust human face detection in complicated background image |
| US8666198B2 (en) * | 2008-03-20 | 2014-03-04 | Facebook, Inc. | Relationship mapping employing multi-dimensional context including facial recognition |
| CN104463091B (en) * | 2014-09-11 | 2018-04-06 | 上海大学 | A kind of facial image recognition method based on image LGBP feature subvectors |
| CN104657718B (en) * | 2015-02-13 | 2018-12-14 | 武汉工程大学 | A kind of face identification method based on facial image feature extreme learning machine |
| CN106326843B (en) * | 2016-08-15 | 2019-08-16 | 武汉工程大学 | A kind of face identification method |
| CN107180252A (en) * | 2017-05-10 | 2017-09-19 | 杨明艳 | A kind of police field identity characteristic gathers the manufacture method and equipment of product |
| CN109766786B (en) * | 2018-12-21 | 2020-10-23 | 深圳云天励飞技术有限公司 | Character relation analysis method and related product |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110502992A (en) | 2019-11-26 |
Similar Documents
| Publication | Title |
|---|---|
| US11194997B1 (en) | Method and system for thermal infrared facial recognition |
| CN110018524B (en) | A vision-attribute-based X-ray security inspection contraband identification method |
| CN105608446B (en) | A method and device for detecting abnormal events in a video stream |
| CN110717481B (en) | Method for realizing face detection by using cascaded convolutional neural network |
| CN110728223A (en) | Helmet wearing identification method based on deep learning |
| CN111461101B (en) | Method, device, equipment and storage medium for identifying work clothes mark |
| CN113076969B (en) | Image target detection method based on Gaussian mixture loss function |
| CN106570467A (en) | Convolutional neural network-based worker absence-from-post detection method |
| CN110119734A (en) | Cutter detecting method and device |
| CN110728252B (en) | Face detection method applied to regional personnel motion trail monitoring |
| CN106327502A (en) | Multi-scene multi-target recognition and tracking method in security video |
| CN113112151B (en) | Intelligent wind control evaluation method and system based on multidimensional sensing and enterprise data quantification |
| CN108052929A (en) | Parking space state detection method, system, readable storage medium and computer equipment |
| CN112258490A (en) | Low-emissivity coating intelligent damage detection method based on optical and infrared image fusion |
| CN118898841B (en) | Law enforcement quality supervision system based on computer vision and semantic analysis |
| CN117994700A (en) | Intelligent construction site personnel behavior recognition system and method based on AI intelligent recognition |
| CN114973019B (en) | A method and system for detecting and classifying geospatial information changes based on deep learning |
| Ye et al. | An image-based approach for automatic detecting tasseling stage of maize using spatio-temporal saliency |
| CN106295716A (en) | A kind of movement of traffic objective classification method based on video information and device |
| CN116758421A (en) | Remote sensing image directed target detection method based on weak supervised learning |
| CN117690086A (en) | Flood prevention gate identification and control method and system based on 5G and AI technology |
| CN117727084A (en) | Face recognition system and method based on big data |
| CN111241165B (en) | Artificial intelligence education system based on big data and data processing method |
| CN110502992B (en) | A fast face recognition method for fixed scene video based on relational graph |
| CN110458064B (en) | Combining data-driven and knowledge-driven low-altitude target detection and recognition methods |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |