CN115995110A - Face recognition method based on improved PCA+SVC - Google Patents
Face recognition method based on improved PCA+SVC
- Publication number: CN115995110A
- Application number: CN202310070590.4A
- Authority: CN (China)
- Prior art keywords: kernel function, dimensional data, SVC, data sequence, classification
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present disclosure relates to a face recognition method based on improved PCA+SVC, and to a corresponding device, electronic apparatus and storage medium. The method includes: feeding face image information into an input face image center, which reads the image as a two-dimensional grayscale matrix and converts it into a one-dimensional data sequence; randomly applying random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to the one-dimensional data sequence to generate a data-augmented one-dimensional data sequence; extracting data features from the one-dimensional data sequence or the data-augmented one-dimensional data sequence based on the PCA method to generate facial features; and classifying the facial features in parallel with SVC methods based on a Gaussian kernel function, a polynomial kernel function and a Sigmoid kernel function, respectively, then performing centralized arbitration based on preset weights to produce the final decision result, thereby completing face recognition. By performing face recognition with the improved PCA+SVC, the present disclosure improves the accuracy of face recognition.
Description
Technical Field
The present disclosure relates to the field of face recognition, and in particular to a face recognition method, device, electronic device and computer-readable storage medium based on improved PCA+SVC.
Background Art
Face recognition is a biometric technology that extracts features from the facial information stream captured by a camera in order to identify a person. As a key research topic in biometrics, face recognition has penetrated nearly every aspect of daily life, from face-based login and convenient payment on mobile devices to smart-city construction, and its applications keep expanding. Technically, face recognition reaches an accuracy above 99% under specific experimental conditions; in real application scenarios, interference from factors such as occlusion and illumination intensity lowers the accuracy, but it can still exceed 95%.
Mainstream face recognition techniques currently adopt appearance-based algorithms, which treat the facial region as a whole and extract global information to identify a face. They can be further divided into linear and nonlinear methods according to how features are extracted. Among linear methods, one of the best known is the "Eigenface" approach proposed by Turk et al., which uses principal component analysis (PCA) to linearly project images into a low-dimensional space spanned by the training images and extracts facial eigenvectors to form a feature space (the eigenfaces); a new image is then compared against this feature space to recognize the face and determine the identity. Yang et al. proposed two-dimensional PCA (2DPCA), which constructs an image covariance matrix directly from two-dimensional images to extract eigenvectors. Among nonlinear methods, Kim et al. proposed kernel principal component analysis, which uses a polynomial kernel to map face images into a feature space through a nonlinear mapping, then employs multiple linear support vector machines (SVMs) as face recognizers and arbitrates over their combined outputs to produce the final decision. With increasing computing power, deep-learning-based face recognition can learn facial features automatically through a network and recognize faces. However, neural networks consist of a large number of neurons and therefore require large amounts of training data, take a long time to train, and can be difficult to converge.
The existing face recognition pipeline mainly consists of the following steps (a simple sketch of this conventional pipeline follows the list):
(1) Input the face image: (a) read the face image as a two-dimensional grayscale matrix; (b) flatten the grayscale matrix into one dimension to obtain the input data.
(2) Extract facial features with PCA: (a) compute the mean vector of the input data; (b) subtract the mean from the input data to center the data on the mean; (c) compute and decompose the covariance matrix to obtain eigenvectors sorted in descending order of their eigenvalues; (d) take the first k eigenvectors as the eigenfaces, project the normalized data onto the eigenface space, and extract the facial features.
(3) SVC recognition and classification: (a) choose a kernel function to map the extracted facial features into a high-dimensional space; (b) use grid search to obtain the optimal penalty coefficient and other parameters; (c) determine the optimal decision boundary to obtain the face recognition result.
(4) Output the recognition result.
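For illustration only, this conventional PCA+SVC pipeline can be sketched with standard scikit-learn components; the image size, the random stand-in data and the parameter grid below are hypothetical placeholders rather than values from this disclosure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Hypothetical stand-in data: grayscale face images of size M x N,
# flattened to 1-D vectors, 10 images per person.
M, N, n_people, per_person = 64, 64, 40, 10
rng = np.random.default_rng(0)
X = rng.random((n_people * per_person, M * N))
y = np.repeat(np.arange(n_people), per_person)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Step (2): PCA keeps the first k eigenfaces; step (3): a kernel SVC classifies the
# projected features, with grid search over the penalty coefficient C and gamma.
pipeline = Pipeline([
    ("pca", PCA(n_components=50, whiten=True)),
    ("svc", SVC(kernel="rbf")),
])
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.001, 0.01]}
search = GridSearchCV(pipeline, param_grid, cv=3)
search.fit(X_train, y_train)

# Step (4): output the recognition result.
print("best parameters:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```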
The existing technology mainly has the following disadvantages:
(1) Data centering is affected by extreme values. In real application scenarios, face image acquisition is easily disturbed by occlusion, illumination intensity and similar factors, so the collected data may be inconsistent, noisy or incomplete (for example, important attributes may be missing). When PCA is used to regularize the data, the mean vector of the data set must be computed and the data centered on it. Because the mean vector is sensitive to extreme values, simply subtracting the mean vector to center the data reduces the interpretability of the extracted feature space and lowers face recognition accuracy.
(2) SVC depends on the choice of kernel function. When training an SVC to recognize face images, the choice of kernel function affects the performance of the SVC algorithm, so multiple kernels have to be tried according to the characteristics of the collected face images in order to achieve good recognition results.
(3) Grid search is computationally inefficient. Grid search divides the parameter ranges to be searched into a grid on a chosen coordinate system, forming parameter combinations, and finds the global optimum by traversing all of them. This comes at the cost of a large number of unnecessary and wasted evaluations.
Therefore, one or more methods are needed to solve the above problems.
It should be noted that the information disclosed in the Background Art section above is only intended to deepen the understanding of the background of the present disclosure, and may therefore include information that does not constitute prior art known to those of ordinary skill in the art.
Summary of the Invention
The purpose of the present disclosure is to provide a face recognition method, device, electronic device and computer-readable storage medium based on improved PCA+SVC, thereby overcoming, at least to a certain extent, one or more problems caused by the limitations and defects of the related art.
According to one aspect of the present disclosure, a face recognition method based on improved PCA+SVC is provided, comprising:
receiving face image information and feeding it into an input face image center, which reads the image as a two-dimensional grayscale matrix, converts the face image information into a one-dimensional data sequence, and sends the one-dimensional data sequence to a data augmentation module or a facial feature extraction module;
after receiving the one-dimensional data sequence, the data augmentation module randomly applies random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to the sequence, generates a data-augmented one-dimensional data sequence, and sends it to the facial feature extraction module;
after receiving the one-dimensional data sequence or the data-augmented one-dimensional data sequence, the facial feature extraction module extracts data features from it based on the PCA method, generates facial features, and sends the facial features to a recognition and classification module;
after receiving the facial features, the recognition and classification module classifies the facial features in parallel with an SVC based on a Gaussian kernel function, an SVC based on a polynomial kernel function and an SVC based on a Sigmoid kernel function, generates the corresponding classification results, and performs centralized arbitration based on preset weights to produce the final decision result, thereby completing face recognition.
In an exemplary embodiment of the present disclosure, the method further includes:
receiving face image information and feeding it into the input face image center, which reads the image as a two-dimensional grayscale matrix, converts the face image information into a one-dimensional data sequence, sends the one-dimensional data sequence to the data augmentation module with a preset probability ρ, and sends it to the facial feature extraction module with probability 1-ρ;
after receiving the one-dimensional data sequence, the data augmentation module randomly applies random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to the sequence, generates a data-augmented one-dimensional data sequence, and sends it to the facial feature extraction module;
after receiving the one-dimensional data sequence or the data-augmented one-dimensional data sequence, the facial feature extraction module extracts data features from it based on the PCA method, generates facial features, and sends the facial features to the recognition and classification module.
In an exemplary embodiment of the present disclosure, the method further includes:
after receiving a one-dimensional data sequence of size M×N, the data augmentation module randomly crops the M×N sequence and resamples it back to size M×N;
or randomly applies a color change to the data by altering its grayscale values;
or randomly applies horizontal or vertical flipping to the data to augment the face images;
or randomly adds noise to the data in the form of a matrix of random values sampled from a Gaussian distribution, to strengthen the classification learning;
or randomly applies to the data a linear transformation from two-dimensional coordinate points to two-dimensional coordinate points to complete an affine transformation;
and generates a data-augmented one-dimensional data sequence, which is sent to the facial feature extraction module.
In an exemplary embodiment of the present disclosure, extracting data features from the one-dimensional data sequence or the data-augmented one-dimensional data sequence based on the PCA method to generate facial features includes:
determining the number of histogram bins I from the size of the n one-dimensional data sequences {t_1, t_2, ..., t_n} (original or data-augmented) and obtaining a statistical histogram, where each t_i is a one-dimensional sequence of length M×N;
for pixel position (1, j), counting the number of pixels of the n samples that fall into each bin, and assigning to bin i a weight w_ij determined by n_ij, the number of pixels in that bin, and n, the number of input samples;
computing the weighted mean vector from the bin weights, where X_ij is the sum of the pixel values of the pixels in bin i;
centering the input data with the weighted mean vector to obtain {t_1, t_2, ..., t_n}_new, where t_j is the j-th training sample;
computing the covariance matrix of the centered data {t_1, t_2, ..., t_n}_new;
using singular value decomposition (SVD) to solve for all eigenvalues {λ_1, λ_2, ..., λ_K} of A^T A and the corresponding eigenvectors {V_1, V_2, ..., V_K};
sorting the eigenvectors {V_1, V_2, ..., V_K} in descending order of their eigenvalues, selecting the first k eigenvectors {V_1, V_2, ..., V_k} according to the required face recognition accuracy, and constructing the eigenface space {e_1, e_2, ..., e_k} from them;
multiplying the centered data {t_1, t_2, ..., t_n}_new by the eigenface space {e_1, e_2, ..., e_k} to obtain the dimensionality-reduced feature representation {t_1, t_2, ..., t_n}_final, which is sent to the recognition and classification module.
In an exemplary embodiment of the present disclosure, the SVC method based on the Gaussian kernel function further includes:
S5.1. determining the Gaussian kernel function of the SVC and the ranges of parameters such as the penalty coefficient and the gamma value, the penalty coefficient range being [0, 1];
S5.2. based on the Gaussian kernel function with the determined parameters, mapping the input feature representation {t_1, t_2, ..., t_n}_final and the corresponding class labels {t′_1, t′_2, ..., t′_n} into a higher-dimensional space to obtain high-dimensional data {k_1, k_2, ..., k_n} and corresponding class labels {k′_1, k′_2, ..., k′_n}, obtaining the decision boundary and classification results by optimizing the objective function, and computing the classifier accuracy from the classification results by ten-fold cross-validation;
S5.3. using the improved grid search method: enumerating combinations of parameter values over the specified ranges to obtain M parameter combinations; dividing the M combinations into m groups of M/m combinations each; sampling each group k times and repeating step S5.2, computing the average accuracy, ranking the groups in descending order and keeping the top t groups; sampling those t groups 2k times, repeating step S5.2, computing the average accuracy and again keeping the top-ranked groups; continuing in this way until the optimal parameters are obtained, and outputting the final recognition and classification result R_i.
In an exemplary embodiment of the present disclosure, the SVC method based on the polynomial kernel function further includes:
S6.1. determining the kernel function of the SVC, k(x, x′) = (x·x′ + c)^d, and the ranges of parameters such as the penalty coefficient and the gamma value, the penalty coefficient range being [0, 1];
S6.2. based on the polynomial kernel function with the determined parameters, mapping the input feature representation {t_1, t_2, ..., t_n}_final and the corresponding class labels {t′_1, t′_2, ..., t′_n} into a higher-dimensional space to obtain high-dimensional data {k_1, k_2, ..., k_n} and corresponding class labels {k′_1, k′_2, ..., k′_n}, obtaining the decision boundary and classification results by optimizing the objective function, and computing the classifier accuracy from the classification results by ten-fold cross-validation;
S6.3. using the improved grid search method: enumerating combinations of parameter values over the specified ranges to obtain M parameter combinations; dividing the M combinations into m groups of M/m combinations each; sampling each group k times and repeating step S6.2, computing the average accuracy, ranking the groups in descending order and keeping the top t groups; sampling those t groups 2k times, repeating step S6.2, computing the average accuracy and again keeping the top-ranked groups; continuing in this way until the optimal parameters are obtained, and outputting the final recognition and classification result R_j.
In an exemplary embodiment of the present disclosure, the SVC method based on the Sigmoid kernel function further includes:
S7.1. determining the kernel function of the SVC, k(x, x′) = tanh(k·x·x′ + c), and the ranges of parameters such as the penalty coefficient and the gamma value, the penalty coefficient range being [0, 1];
S7.2. based on the Sigmoid kernel function with the determined parameters, mapping the input feature representation {t_1, t_2, ..., t_n}_final and the corresponding class labels {t′_1, t′_2, ..., t′_n} into a higher-dimensional space to obtain high-dimensional data {k_1, k_2, ..., k_n} and corresponding class labels {k′_1, k′_2, ..., k′_n}, obtaining the decision boundary and classification results by optimizing the objective function, and computing the classifier accuracy from the classification results by ten-fold cross-validation;
S7.3. using the improved grid search method: enumerating combinations of parameter values over the specified ranges to obtain M parameter combinations; dividing the M combinations into m groups of M/m combinations each; sampling each group k times and repeating step S7.2, computing the average accuracy, ranking the groups in descending order and keeping the top t groups; sampling those t groups 2k times, repeating step S7.2, computing the average accuracy and again keeping the top-ranked groups; continuing in this way until the optimal parameters are obtained, and outputting the final recognition and classification result R_k.
In an exemplary embodiment of the present disclosure, the method further includes:
after receiving the facial features, the recognition and classification module classifies them in parallel with the Gaussian-kernel SVC, the polynomial-kernel SVC and the Sigmoid-kernel SVC, obtaining the classification results R_i, R_j and R_k respectively (a sketch of the arbitration rule follows these cases);
if the Gaussian-kernel result R_i, the polynomial-kernel result R_j and the Sigmoid-kernel result R_k are all the same, any one of them is taken as the decision result, completing face recognition;
if exactly two of the classification results R_i, R_j and R_k are the same, the two identical results are aggregated by an aggregation function and taken as the decision result, completing face recognition;
if R_i, R_j and R_k are all different, centralized arbitration is performed based on the preset weights ω_i, ω_j, ω_k, with ω_i + ω_j + ω_k = 1, to produce the final decision result, completing face recognition.
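As a minimal sketch of the three-way arbitration just described, assuming each SVC returns an integer identity label, that the preset weights are already available, and that the all-different case is resolved in favor of the label whose classifier carries the largest weight (one possible reading of the weighted arbitration):

```python
from collections import Counter

def arbitrate(r_i: int, r_j: int, r_k: int,
              w_i: float, w_j: float, w_k: float) -> int:
    """Combine the three SVC outputs into one decision.

    r_i, r_j, r_k: labels from the Gaussian-, polynomial- and Sigmoid-kernel SVCs.
    w_i, w_j, w_k: preset weights with w_i + w_j + w_k == 1.
    """
    counts = Counter([r_i, r_j, r_k])
    label, votes = counts.most_common(1)[0]
    if votes >= 2:
        # All three agree, or exactly two agree: the (majority) label wins.
        return label
    # All three disagree: weighted arbitration; here the label of the classifier
    # with the largest preset weight is chosen (an interpretive assumption).
    weighted = {r_i: w_i, r_j: w_j, r_k: w_k}
    return max(weighted, key=weighted.get)

# Example: the classifiers disagree and the Gaussian-kernel SVC has the largest weight.
print(arbitrate(3, 7, 12, w_i=0.5, w_j=0.3, w_k=0.2))  # -> 3
```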
According to one aspect of the present disclosure, a face recognition device based on improved PCA+SVC is provided, comprising:
an input face image center module, configured to receive face image information, read the image as a two-dimensional grayscale matrix, convert the face image information into a one-dimensional data sequence, and send the one-dimensional data sequence to a data augmentation module;
a data augmentation module, configured to, after receiving the one-dimensional data sequence, randomly apply random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to it, generate a data-augmented one-dimensional data sequence, and send it to a facial feature extraction module;
a facial feature extraction module, configured to, after receiving the one-dimensional data sequence or the data-augmented one-dimensional data sequence, extract data features from it based on the PCA method, generate facial features, and send the facial features to a recognition and classification module;
a recognition and classification module, configured to, after receiving the facial features, classify them in parallel with the Gaussian-kernel SVC, the polynomial-kernel SVC and the Sigmoid-kernel SVC, generate the corresponding classification results, and perform centralized arbitration based on preset weights to produce the final decision result, thereby completing face recognition; and
an output recognition result center module, configured to receive the face recognition result from the recognition and classification module and print it out.
According to one aspect of the present disclosure, an electronic device is provided, comprising:
a processor; and
a memory storing computer-readable instructions which, when executed by the processor, implement the method according to any one of the above items.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the method according to any one of the above items.
An exemplary embodiment of the present disclosure provides a face recognition method based on improved PCA+SVC, in which: face image information is fed into the input face image center, which reads the image as a two-dimensional grayscale matrix and converts it into a one-dimensional data sequence; the one-dimensional data sequence is randomly subjected to random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to generate a data-augmented one-dimensional data sequence; data features are extracted from the one-dimensional data sequence or the data-augmented one-dimensional data sequence based on the PCA method to generate facial features; and the facial features are classified in parallel by SVC methods based on a Gaussian kernel, a polynomial kernel and a Sigmoid kernel, respectively, with centralized arbitration based on preset weights producing the final decision result and completing face recognition. By performing face recognition with the improved PCA+SVC, the present disclosure improves the accuracy of face recognition.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The above and other features and advantages of the present disclosure will become more apparent from the detailed description of its example embodiments with reference to the accompanying drawings.
FIGS. 1A-1B show a flowchart of a face recognition method based on improved PCA+SVC according to an exemplary embodiment of the present disclosure;
FIGS. 2A-2B show a schematic diagram of the facial feature extraction module of a face recognition method based on improved PCA+SVC according to an exemplary embodiment of the present disclosure;
FIGS. 3A-3E show a schematic diagram of the recognition and classification module of a face recognition method based on improved PCA+SVC according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a schematic block diagram of a face recognition device based on improved PCA+SVC according to an exemplary embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure; and
FIG. 6 schematically shows a computer-readable storage medium according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The same reference numerals in the figures denote the same or similar parts, and their repeated description will be omitted.
Furthermore, the described features, structures or characteristics may be combined in one or more embodiments in any suitable manner. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced without one or more of these specific details, or other methods, components, materials, devices, steps, etc. may be adopted. In other cases, well-known structures, methods, devices, implementations, materials or operations are not shown or described in detail to avoid obscuring aspects of the present disclosure.
The block diagrams shown in the accompanying drawings are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, or these functional entities or parts of them may be implemented in one or more software-based modules, or they may be implemented in different networks and/or processor devices and/or microcontroller devices.
In this example embodiment, a face recognition method based on improved PCA+SVC is first provided; referring to FIGS. 1A-1B, the method may include the following steps:
Step S110: receiving face image information and feeding it into the input face image center, which reads the image as a two-dimensional grayscale matrix, converts the face image information into a one-dimensional data sequence, and sends the one-dimensional data sequence to the data augmentation module or the facial feature extraction module;
Step S120: after receiving the one-dimensional data sequence, the data augmentation module randomly applies random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to it, generates a data-augmented one-dimensional data sequence, and sends it to the facial feature extraction module;
Step S130: after receiving the one-dimensional data sequence or the data-augmented one-dimensional data sequence, the facial feature extraction module extracts data features from it based on the PCA method, generates facial features, and sends the facial features to the recognition and classification module;
Step S140: after receiving the facial features, the recognition and classification module classifies them in parallel with the Gaussian-kernel SVC, the polynomial-kernel SVC and the Sigmoid-kernel SVC, generates the corresponding classification results, and performs centralized arbitration based on preset weights to produce the final decision result, thereby completing face recognition.
In the face recognition method based on improved PCA+SVC of this exemplary embodiment, face image information is fed into the input face image center, which reads the image as a two-dimensional grayscale matrix and converts it into a one-dimensional data sequence; the sequence is randomly subjected to random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to generate a data-augmented one-dimensional data sequence; data features are extracted from the one-dimensional data sequence or the data-augmented one-dimensional data sequence based on the PCA method to generate facial features; and the facial features are classified in parallel by SVC methods based on a Gaussian kernel, a polynomial kernel and a Sigmoid kernel, with centralized arbitration based on preset weights producing the final decision result and completing face recognition. By performing face recognition with the improved PCA+SVC, the present disclosure improves the accuracy of face recognition.
The face recognition method based on improved PCA+SVC of this example embodiment is further described below.
In step S110, face image information may be received and fed into the input face image center, which reads the image as a two-dimensional grayscale matrix, converts the face image information into a one-dimensional data sequence, and sends the one-dimensional data sequence to the data augmentation module or the facial feature extraction module.
In the embodiment of this example, the method further includes:
receiving face image information and feeding it into the input face image center, which reads the image as a two-dimensional grayscale matrix, converts the face image information into a one-dimensional data sequence, sends the one-dimensional data sequence to the data augmentation module with a preset probability ρ, and sends it to the facial feature extraction module with probability 1-ρ;
after receiving the one-dimensional data sequence, the data augmentation module randomly applies random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to it, generates a data-augmented one-dimensional data sequence, and sends it to the facial feature extraction module;
after receiving the data-augmented one-dimensional data sequence, or directly receiving the one-dimensional data sequence, the facial feature extraction module extracts data features from it based on the PCA method, generates facial features, and sends the facial features to the recognition and classification module.
In the embodiment of this example, the input face image center processes the face image into a one-dimensional grayscale sequence and sends it to the data augmentation module with probability ρ, or directly to the facial feature extraction module with probability 1-ρ, where ρ ∈ [0, 1]. The input of the facial feature extraction module is connected both to the output of the data augmentation module and to the output of the input face image center, and it extracts the facial features. The facial features extracted by the facial feature extraction module are sent to the recognition and classification module for processing; the recognition and classification result is obtained through the decision center and sent to the output recognition result center for output. For a face image of size M×N, the input face image center reads the image as a two-dimensional grayscale matrix, flattens it into a one-dimensional sequence of length M×N, and sends it to the data augmentation module with probability ρ.
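A minimal sketch of this input stage, assuming the image is already available as a 2-D NumPy array; the callables `augment` and `extract_features` stand in for the later modules and are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng()

def input_face_image_center(gray_image: np.ndarray, rho: float,
                            augment, extract_features):
    """gray_image: 2-D grayscale matrix of shape (M, N).

    Flatten it to a 1-D sequence of length M*N and route it with probability
    rho to the data augmentation module, otherwise straight to the facial
    feature extraction module."""
    sequence = gray_image.astype(np.float64).ravel()   # 1-D sequence of length M*N
    if rng.random() < rho:                             # augmentation branch, probability rho
        sequence = augment(sequence, gray_image.shape)
    return extract_features(sequence)                  # feature extraction branch
```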
In step S120, after receiving the one-dimensional data sequence, the data augmentation module may randomly apply random cropping, color change, horizontal flipping, vertical flipping, noise injection or affine transformation to the sequence, generate a data-augmented one-dimensional data sequence, and send it to the facial feature extraction module.
In the embodiment of this example, the method further includes:
after receiving a one-dimensional data sequence of size M×N, the data augmentation module randomly crops the M×N sequence and resamples it back to size M×N;
or randomly applies a color change to the data by altering its grayscale values;
or randomly applies horizontal or vertical flipping to the data to augment the face images;
or randomly adds noise to the data in the form of a matrix of random values sampled from a Gaussian distribution, to strengthen the classification learning;
or randomly applies to the data a linear transformation from two-dimensional coordinate points to two-dimensional coordinate points to complete an affine transformation;
and generates a data-augmented one-dimensional data sequence, which is sent to the facial feature extraction module.
In the embodiment of this example, the data augmentation module contains six data augmentation methods: (1) random cropping, (2) color change, (3) horizontal flipping, (4) vertical flipping, (5) noise injection and (6) affine transformation. An image entering the data augmentation module is transformed by one method chosen at random (a sketch of this random selection is given after the list below).
(1) Random cropping. A region of the image is randomly cropped and resampled back to M×N. This preserves the main features of the image while cropping away background noise, reducing the overfitting caused by noise.
(2) Color change. Data augmentation is achieved by altering the grayscale values of the image.
(3) Horizontal flipping. The image is flipped in the horizontal direction to enlarge the data set.
(4) Vertical flipping. The image is flipped in the vertical direction to enlarge the data set.
(5) Noise injection. A matrix of random values sampled from a Gaussian distribution is added to the face image; adding noise in this way helps the classifier learn more robust features.
(6) Affine transformation. A linear transformation from two-dimensional coordinate points to two-dimensional coordinate points is applied, which preserves the "parallelism" and "straightness" of the image.
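For illustration only, the random selection among the six augmentations might look like the following NumPy sketch; the crop size, brightness range, noise scale and shear matrix are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

rng = np.random.default_rng()

def augment(sequence: np.ndarray, shape: tuple) -> np.ndarray:
    """Apply one randomly chosen augmentation to a flattened M*N face image."""
    img = sequence.reshape(shape)                       # back to 2-D for spatial operations
    M, N = shape
    choice = rng.integers(6)
    if choice == 0:                                     # (1) random crop + resample to M x N
        top, left = rng.integers(M // 4), rng.integers(N // 4)
        crop = img[top:top + 3 * M // 4, left:left + 3 * N // 4]
        rows = np.arange(M) * crop.shape[0] // M        # nearest-neighbour resampling indices
        cols = np.arange(N) * crop.shape[1] // N
        img = crop[np.ix_(rows, cols)]
    elif choice == 1:                                   # (2) color (grayscale value) change
        img = np.clip(img * rng.uniform(0.7, 1.3), 0, 255)
    elif choice == 2:                                   # (3) horizontal flip
        img = np.fliplr(img)
    elif choice == 3:                                   # (4) vertical flip
        img = np.flipud(img)
    elif choice == 4:                                   # (5) Gaussian noise
        img = img + rng.normal(0.0, 5.0, size=img.shape)
    else:                                               # (6) affine transform (small shear)
        shear = np.array([[1.0, 0.1], [0.0, 1.0]])
        coords = np.indices(shape).reshape(2, -1)       # target pixel coordinates
        src = np.linalg.inv(shear) @ coords             # map back to source coordinates
        src = np.clip(np.rint(src).astype(int), [[0], [0]], [[M - 1], [N - 1]])
        img = img[src[0], src[1]].reshape(shape)
    return img.ravel()                                  # back to a 1-D data sequence
```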
In step S130, after receiving the one-dimensional data sequence or the data-augmented one-dimensional data sequence, the facial feature extraction module may extract data features from it based on the PCA method, generate facial features, and send the facial features to the recognition and classification module.
In the embodiment of this example, extracting data features from the one-dimensional data sequence or the data-augmented one-dimensional data sequence based on the PCA method to generate facial features includes:
determining the number of histogram bins I from the size of the n one-dimensional data sequences {t_1, t_2, ..., t_n} (original or data-augmented) and obtaining a statistical histogram, where each t_i is a one-dimensional sequence of length M×N;
for pixel position (1, j), counting the number of pixels of the n samples that fall into each bin, and assigning to bin i a weight w_ij determined by n_ij, the number of pixels in that bin, and n, the number of input samples;
computing the weighted mean vector from the bin weights, where X_ij is the sum of the pixel values of the pixels in bin i;
centering the input data with the weighted mean vector to obtain {t_1, t_2, ..., t_n}_new, where t_j is the j-th training sample;
computing the covariance matrix of the centered data {t_1, t_2, ..., t_n}_new;
using singular value decomposition (SVD) to solve for all eigenvalues {λ_1, λ_2, ..., λ_K} of A^T A and the corresponding eigenvectors {V_1, V_2, ..., V_K};
sorting the eigenvectors {V_1, V_2, ..., V_K} in descending order of their eigenvalues, selecting the first k eigenvectors {V_1, V_2, ..., V_k} according to the required face recognition accuracy, and constructing the eigenface space {e_1, e_2, ..., e_k} from them;
multiplying the centered data {t_1, t_2, ..., t_n}_new by the eigenface space {e_1, e_2, ..., e_k} to obtain the dimensionality-reduced feature representation {t_1, t_2, ..., t_n}_final, which is sent to the recognition and classification module.
In the embodiment of this example, as shown in FIG. 2A, the facial feature extraction module mainly uses an improved PCA method to extract features from the data coming from the data augmentation module or the input face image center and to reduce their dimensionality. The main steps are as follows (a sketch follows the list):
(1) Build a statistical histogram. The number of bins I is determined from the size of the input data {t_1, t_2, ..., t_n}, and a statistical histogram of the input data is obtained; each t_i is a one-dimensional sequence of length M×N.
(2) Compute the weights. For pixel position (1, j), count how many pixels of the n samples fall into each bin. For bin i, a weight w_ij is assigned based on n_ij, the number of pixels in that bin, and n, the number of input samples.
(3) Obtain the mean vector. The weighted mean vector is computed from the bin weights, where X_ij is the sum of the pixel values of the pixels in bin i.
(4) Center the data. The input data are centered with the weighted mean vector to obtain {t_1, t_2, ..., t_n}_new, where t_j is the j-th training sample.
(5) Compute the covariance matrix. The covariance matrix describes the relationship between any two pixels in the image, so it is computed from the centered data {t_1, t_2, ..., t_n}_new.
(6) Decompose the covariance matrix. Because face images have a high dimensionality, singular value decomposition (SVD) is used to solve for all eigenvalues {λ_1, λ_2, ..., λ_K} of A^T A and the corresponding eigenvectors {V_1, V_2, ..., V_K}.
(7) Sort the eigenvectors {V_1, V_2, ..., V_K} in descending order of their eigenvalues, select the first k eigenvectors {V_1, V_2, ..., V_k} according to the required face recognition accuracy, and construct the eigenface space {e_1, e_2, ..., e_k}. FIG. 2B shows a visualization of several eigenvectors in this space.
(8) Multiply the centered data {t_1, t_2, ..., t_n}_new by the eigenface space {e_1, e_2, ..., e_k} to obtain the dimensionality-reduced feature representation {t_1, t_2, ..., t_n}_final, and send it to the recognition and classification module.
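A minimal NumPy sketch of steps (1)-(8). Because the exact weight and mean formulas are not reproduced in this text, the sketch assumes a frequency-based bin weight (w_ij proportional to the bin count n_ij) and builds the eigenface basis from the SVD of the centered data matrix, which is the usual eigenface construction; both choices are assumptions.

```python
import numpy as np

def extract_eigenface_features(T: np.ndarray, n_bins: int = 16, k: int = 50):
    """Improved-PCA feature extraction sketch.

    T: array of shape (n, M*N), one flattened face image per row.
    Returns (features of shape (n, k), weighted mean vector, eigenface basis).
    """
    n, d = T.shape

    # (1)-(3) Histogram each pixel position over the n samples and form a weighted
    # mean that gives sparsely populated (extreme-value) bins less influence.
    mean = np.empty(d)
    for j in range(d):
        counts, edges = np.histogram(T[:, j], bins=n_bins)       # n_ij per bin
        centers = (edges[:-1] + edges[1:]) / 2.0
        weights = counts / n                                      # assumed w_ij = n_ij / n
        mean[j] = (weights * counts * centers).sum() / max((weights * counts).sum(), 1e-12)

    # (4) Center the data with the weighted mean vector.
    A = T - mean                                                  # rows are centered samples

    # (5)-(6) The covariance is proportional to A^T A; use the SVD of A instead of
    # forming the d x d matrix explicitly (right singular vectors = its eigenvectors).
    _, S, Vt = np.linalg.svd(A, full_matrices=False)

    # (7) Keep the k leading eigenvectors as the eigenface basis.
    eigenfaces = Vt[:k]                                           # shape (k, d)

    # (8) Project the centered data onto the eigenface space.
    features = A @ eigenfaces.T                                   # shape (n, k)
    return features, mean, eigenfaces
```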
In step S140, after receiving the facial features, the recognition and classification module may classify them in parallel with the Gaussian-kernel SVC, the polynomial-kernel SVC and the Sigmoid-kernel SVC, generate the corresponding classification results, and perform centralized arbitration based on preset weights to produce the final decision result, thereby completing face recognition.
In the embodiment of this example, as shown in FIG. 3A, the recognition and classification module mainly uses an improved ensemble SVC method to process the feature representation {t_1, t_2, ..., t_n}_final output by the facial feature extraction module, and obtains the classification result through the decision center. The module integrates three SVCs: (1) an SVC based on the Gaussian kernel function, (2) an SVC based on the polynomial kernel function and (3) an SVC based on the Sigmoid kernel function. For the same feature representation they produce the classification results R_i, R_j and R_k respectively; the three results are multiplied by the weights ω_i, ω_j and ω_k, and centralized arbitration produces the final decision result. Each weight ω_i, ω_j, ω_k is obtained by dividing the number of correct classifications C_s of the corresponding SVC by the total number of correct classifications C, with ω_i + ω_j + ω_k = 1.
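A sketch of this three-kernel ensemble, assuming scikit-learn's SVC for the three kernels and using held-out validation accuracy to stand in for the correct-classification counts C_s; the split size and kernel parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def build_svc_ensemble(features: np.ndarray, labels: np.ndarray):
    """Train Gaussian-, polynomial- and Sigmoid-kernel SVCs on the PCA features
    and derive arbitration weights from how often each classifier is correct."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)

    classifiers = {
        "gaussian": SVC(kernel="rbf"),
        "polynomial": SVC(kernel="poly", degree=3),
        "sigmoid": SVC(kernel="sigmoid"),
    }
    correct = {}
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        # C_s: number of validation samples this classifier labels correctly.
        correct[name] = int((clf.predict(X_val) == y_val).sum())

    total = max(sum(correct.values()), 1)
    # omega_i + omega_j + omega_k = 1 by construction.
    weights = {name: c / total for name, c in correct.items()}
    return classifiers, weights
```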
In the embodiment of this example, the SVC method based on the Gaussian kernel function further includes:
S5.1. determining the Gaussian kernel function of the SVC and the ranges of parameters such as the penalty coefficient and the gamma value, the penalty coefficient range being [0, 1];
S5.2. based on the Gaussian kernel function with the determined parameters, mapping the input feature representation {t_1, t_2, ..., t_n}_final and the corresponding class labels {t′_1, t′_2, ..., t′_n} into a higher-dimensional space to obtain high-dimensional data {k_1, k_2, ..., k_n} and corresponding class labels {k′_1, k′_2, ..., k′_n}, obtaining the decision boundary and classification results by optimizing the objective function, and computing the classifier accuracy from the classification results by ten-fold cross-validation;
S5.3. using the improved grid search method: enumerating combinations of parameter values over the specified ranges to obtain M parameter combinations; dividing the M combinations into m groups of M/m combinations each; sampling each group k times and repeating step S5.2, computing the average accuracy, ranking the groups in descending order and keeping the top t groups; sampling those t groups 2k times, repeating step S5.2, computing the average accuracy and again keeping the top-ranked groups; continuing in this way until the optimal parameters are obtained, and outputting the final recognition and classification result R_i.
In the embodiment of this example, the Gaussian-kernel SVC uses k(x, x′) = exp(−γ‖x − x′‖²) as its kernel function, which can map arbitrary data into a linearly separable form; for feature representations without prior knowledge, the Gaussian-kernel SVC performs well, and the kernel value range is [0, 1]. As shown in FIG. 3B, its calculation process is as follows (a runnable sketch of step (b) is given after step (c) below):
(a) Determine the ranges of SVC parameters such as the penalty coefficient and the gamma value. The penalty coefficient balances misclassified samples against the classification margin and controls the strength of the penalty; its range is [0, 1], and a larger penalty coefficient means the classifier is less tolerant of noise points inside the margin.
(b) Based on the Gaussian kernel function with the determined parameters, spatially map the input feature representation {t1, t2, ..., tn}final and the corresponding class labels {t′1, t′2, ..., t′n} to obtain high-dimensional data {k1, k2, ..., kn} and the corresponding class labels {k′1, k′2, ..., k′n}; obtain the decision boundary and the classification result by optimizing the objective function, and compute the classifier accuracy from the classification result by ten-fold cross-validation.
(c) Use the improved grid search method: enumerate each possible combination of parameter values over the specified ranges to obtain M parameter combinations. Divide the M combinations into m groups of equal size. Sample each group k times and repeat step (b), compute the average accuracy, sort in descending order and keep the top t groups; sample those t groups 2k times and repeat step (b), compute the average accuracy, sort in descending order and keep the top-ranked groups; continue until the optimal parameters are obtained, and output the final recognition and classification result Ri.
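As an illustration only, the following sketch shows how step (b) could be realised for the Gaussian kernel with a ten-fold cross-validated accuracy score. scikit-learn is an assumed implementation choice (the disclosure does not name a library), and the feature matrix and labels are synthetic stand-ins for {t1, ..., tn}final and {t′1, ..., t′n}.

```python
# Minimal sketch of step (b): fit an RBF (Gaussian) kernel SVC for one (C, gamma)
# candidate and score it with ten-fold cross-validation. scikit-learn is an assumed
# implementation choice; the data below is a synthetic placeholder.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # stand-in for the PCA feature representations
y = np.arange(200) % 4           # stand-in class labels: 4 classes of 50 samples

def evaluate_rbf(C, gamma, X, y):
    clf = SVC(kernel="rbf", C=C, gamma=gamma)    # k(x, x') = exp(-gamma * ||x - x'||^2)
    scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross-validation accuracy
    return scores.mean()

print(evaluate_rbf(C=0.5, gamma=0.01, X=X, y=y))
```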
In the embodiment of this example, the SVC method based on the polynomial kernel function further includes:
S6.1. Determine the SVC kernel function k(x, x′) = (x·x′ + c)^d and the ranges of parameters such as the penalty coefficient and the gamma value, the penalty coefficient range being [0, 1];
S6.2. Based on the polynomial kernel function with the determined parameters, spatially map the input feature representation {t1, t2, ..., tn}final and the corresponding class labels {t′1, t′2, ..., t′n} to obtain high-dimensional data {k1, k2, ..., kn} and the corresponding class labels {k′1, k′2, ..., k′n}; obtain the decision boundary and the classification result by optimizing the objective function, and compute the classifier accuracy from the classification result by ten-fold cross-validation;
S6.3. Use the improved grid search method: enumerate combinations of parameter values over the specified parameter ranges to obtain M parameter combinations. Divide the M combinations into m groups of equal size; sample each group k times and repeat step S6.2, compute the average accuracy, sort in descending order and keep the top t groups; sample those t groups 2k times and repeat step S6.2, compute the average accuracy, sort in descending order and keep the top-ranked groups; continue until the optimal parameters are obtained, and output the final recognition and classification result Rj.
In the embodiment of this example, the polynomial-kernel SVC uses k(x, x′) = (x·x′ + c)^d as its kernel function; by raising the dimension it can handle non-linear problems, and it allows the exponent d to be set subjectively (a small numeric check of this dimension-lifting property is given after step (c) below). As shown in FIG. 3C, its calculation process is:
(a) Determine the ranges of SVC parameters such as the penalty coefficient and the gamma value. The penalty coefficient balances misclassified samples against the classification margin and controls the strength of the penalty; its range is [0, 1], and a larger penalty coefficient means the classifier is less tolerant of noise points inside the margin.
(b) Based on the polynomial kernel function with the determined parameters, spatially map the input feature representation {t1, t2, ..., tn}final and the corresponding class labels {t′1, t′2, ..., t′n} to obtain high-dimensional data {k1, k2, ..., kn} and the corresponding class labels {k′1, k′2, ..., k′n}; obtain the decision boundary and the classification result by optimizing the objective function, and compute the classifier accuracy from the classification result by ten-fold cross-validation.
(c) Use the improved grid search method: enumerate each possible combination of parameter values over the specified ranges to obtain M parameter combinations. Divide the M combinations into m groups of equal size. Sample each group k times and repeat step (b), compute the average accuracy, sort in descending order and keep the top t groups; sample those t groups 2k times and repeat step (b), compute the average accuracy, sort in descending order and keep the top-ranked groups; continue until the optimal parameters are obtained, and output the final recognition and classification result Rj.
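To make the dimension-lifting remark above concrete, the small check below verifies, for d = 2 and c = 1 in two input dimensions, that the polynomial kernel value equals an ordinary inner product after an explicit feature map into a higher-dimensional space. The numbers are arbitrary and the feature map shown is one standard choice, not taken from the disclosure.

```python
# Numeric check of the dimension-lifting idea behind k(x, x') = (x.x' + c)^d, d = 2, c = 1.
import numpy as np

def poly_kernel(x, xp, c=1.0, d=2):
    return (np.dot(x, xp) + c) ** d

def lift(x, c=1.0):
    # Explicit feature map for d = 2 in two input dimensions:
    # phi(x) = [x1^2, x2^2, sqrt(2)*x1*x2, sqrt(2c)*x1, sqrt(2c)*x2, c]
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2 * c) * x1, np.sqrt(2 * c) * x2, c])

x, xp = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(poly_kernel(x, xp))               # 4.0: (1*3 + 2*(-1) + 1)**2
print(np.dot(lift(x), lift(xp)))        # 4.0: the same value via the lifted inner product
```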
In the embodiment of this example, the SVC method based on the Sigmoid kernel function further includes:
S7.1. Determine the SVC kernel function k(x, x′) = tanh(k·x·x′ + c) and the ranges of parameters such as the penalty coefficient and the gamma value, the penalty coefficient range being [0, 1];
S7.2. Based on the Sigmoid kernel function with the determined parameters, spatially map the input feature representation {t1, t2, ..., tn}final and the corresponding class labels {t′1, t′2, ..., t′n} to obtain high-dimensional data {k1, k2, ..., kn} and the corresponding class labels {k′1, k′2, ..., k′n}; obtain the decision boundary and the classification result by optimizing the objective function, and compute the classifier accuracy from the classification result by ten-fold cross-validation;
S7.3. Use the improved grid search method: enumerate combinations of parameter values over the specified parameter ranges to obtain M parameter combinations. Divide the M combinations into m groups of equal size; sample each group k times and repeat step S7.2, compute the average accuracy, sort in descending order and keep the top t groups; sample those t groups 2k times and repeat step S7.2, compute the average accuracy, sort in descending order and keep the top-ranked groups; continue until the optimal parameters are obtained, and output the final recognition and classification result Rk.
In the embodiment of this example, the Sigmoid-kernel SVC uses k(x, x′) = tanh(k·x·x′ + c) as its kernel function; it generalizes well to unknown samples and can reduce over-fitting. As shown in FIG. 3D, its calculation process is as follows (a sketch of the pruned, breadth-first grid search used in step (c) is given after the list below):
(a) Determine the ranges of SVC parameters such as the penalty coefficient and the gamma value. The penalty coefficient balances misclassified samples against the classification margin and controls the strength of the penalty; its range is [0, 1], and a larger penalty coefficient means the classifier is less tolerant of noise points inside the margin.
(b) Based on the Sigmoid kernel function with the determined parameters, spatially map the input feature representation {t1, t2, ..., tn}final and the corresponding class labels {t′1, t′2, ..., t′n} to obtain high-dimensional data {k1, k2, ..., kn} and the corresponding class labels {k′1, k′2, ..., k′n}; obtain the decision boundary and the classification result by optimizing the objective function, and compute the classifier accuracy from the classification result by ten-fold cross-validation.
(c) Use the improved grid search method: enumerate each possible combination of parameter values over the specified ranges to obtain M parameter combinations. Divide the M combinations into m groups of equal size. Sample each group k times and repeat step (b), compute the average accuracy, sort in descending order and keep the top t groups; sample those t groups 2k times and repeat step (b), compute the average accuracy, sort in descending order and keep the top-ranked groups; continue until the optimal parameters are obtained, and output the final recognition and classification result Rk.
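The same pruned grid search is used in steps S5.3, S6.3 and S7.3. The sketch below shows one way the breadth-first, branch-pruning loop could look; the exact pruning ratio and the way the M parameter combinations are split into groups are not spelled out in the text, so halving the surviving groups after each round and an even split into m groups are assumptions, and evaluate() stands in for step (b) (fit the SVC and return the cross-validated accuracy).

```python
# Hedged sketch of the improved (breadth-first, pruning) grid search of steps (c).
# The halving of surviving groups each round is an assumption; evaluate() stands in
# for step (b), i.e. fit the SVC and return its ten-fold cross-validated accuracy.
import itertools, random

def improved_grid_search(param_grid, evaluate, m=8, k=2, top=4, seed=0):
    rng = random.Random(seed)
    # Enumerate the M parameter combinations from the specified ranges.
    combos = [dict(zip(param_grid, vals)) for vals in itertools.product(*param_grid.values())]
    rng.shuffle(combos)
    groups = [combos[i::m] for i in range(m)]            # split the M combinations into m groups
    samples = k
    while len(groups) > 1:
        scored = []
        for group in groups:
            picks = [rng.choice(group) for _ in range(samples)]    # sample the group
            avg = sum(evaluate(**p) for p in picks) / samples      # average accuracy
            scored.append((avg, group))
        scored.sort(key=lambda t: t[0], reverse=True)              # descending accuracy
        groups = [g for _, g in scored[:max(1, top)]]              # keep the top groups
        samples *= 2                                               # k -> 2k -> 4k ...
        top = max(1, top // 2)                                     # assumed halving of survivors
    return max(groups[0], key=lambda p: evaluate(**p))             # best parameters in the survivor

# Toy example: a synthetic objective in place of the real SVC evaluation.
grid = {"C": [0.1, 0.3, 0.5, 0.7, 0.9], "gamma": [0.001, 0.01, 0.1, 1.0]}
best = improved_grid_search(grid, evaluate=lambda C, gamma: -(C - 0.5) ** 2 - (gamma - 0.01) ** 2)
print(best)
```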
In the embodiment of this example, the method further includes:
After receiving the facial features, the recognition and classification module performs parallel recognition and classification on the facial features with the Gaussian-kernel SVC method, the polynomial-kernel SVC method and the Sigmoid-kernel SVC method, generating the Gaussian-kernel SVC classification result Ri, the polynomial-kernel SVC classification result Rj and the Sigmoid-kernel SVC classification result Rk, respectively;
If the classification results Ri, Rj and Rk are identical, any one of them is taken as the decision result, completing the face recognition;
If two of the classification results Ri, Rj and Rk are identical, the two identical results are aggregated by the aggregation function and taken as the decision result, completing the face recognition;
If the classification results Ri, Rj and Rk are all different, centralized arbitration is performed based on the preset weights ωi, ωj and ωk, where ωi + ωj + ωk = 1, to produce the final decision result and complete the face recognition.
In the embodiment of this example, as shown in FIG. 3D, the inputs of the decision center are the output Ri of the Gaussian-kernel SVC together with its weight ωi, the output Rj of the polynomial-kernel SVC together with its weight ωj, and the output Rk of the Sigmoid-kernel SVC together with its weight ωk, and the decision is made according to the following rules.
If Ri = Rj = Rk, i.e. the outputs of the three SVCs agree, Ri is output directly as the decision result; if two of the outputs agree, the aggregation function G first aggregates the identical results to complete the vote, and the decision result with the most votes is output; if the outputs of the three SVCs are all different, the output corresponding to the largest weight is taken as the decision result.
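These three rules translate directly into a short decision routine; the sketch below is illustrative, and the function name decide is an assumption.

```python
# Decision-centre rules: unanimous -> output directly; two agree -> aggregate the
# matching pair and output the majority; all different -> output the result whose
# classifier carries the largest preset weight.
def decide(r_i, r_j, r_k, w_i, w_j, w_k):
    results = [r_i, r_j, r_k]
    weights = [w_i, w_j, w_k]
    if r_i == r_j == r_k:                        # all three SVCs agree
        return r_i
    counts = {r: results.count(r) for r in results}
    majority = max(counts, key=counts.get)
    if counts[majority] == 2:                    # exactly two agree -> majority vote
        return majority
    return results[weights.index(max(weights))]  # all differ -> largest weight wins

print(decide("A", "A", "B", 0.3, 0.3, 0.4))      # -> "A" (two agree)
print(decide("A", "B", "C", 0.3, 0.3, 0.4))      # -> "C" (largest weight)
```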
In the embodiment of this example, the output result center prints the final decision result Ri, Rj or Rk of the ensemble SVC.
In the embodiment of this example, the present disclosure solves the problem that data centering is affected by extreme values. Simply summing the data to obtain the mean is easily influenced by extreme values, so the resulting mean does not describe the central tendency of the data well, and the principal-component features extracted after centering become less interpretable. We therefore improve the way the mean is obtained: a histogram of the input data is computed and a weighted mean is derived from the corresponding weights, which greatly reduces the influence of extreme values on the mean, makes the subsequent centering fit the actual data better, and yields more accurate eigenfaces. The present disclosure also addresses the dependence of SVC on kernel selection. When training an SVC to recognize face images, the choice of kernel function affects the performance of the algorithm, so several kernels would normally have to be tried according to the characteristics of the collected face images, which costs time and effort. We therefore propose an ensemble SVC that combines the recognition and classification results of the Gaussian, polynomial and Sigmoid kernels and produces the final decision through centralized arbitration, so that the classifier effectively selects a suitable kernel automatically, reducing the time spent on kernel selection and improving the accuracy of face recognition. The present disclosure further improves the grid search algorithm: during the search for the optimal parameter combination, a breadth-first grid search strategy is adopted that keeps pruning branches to narrow the search range and reduce the complexity of the parameter search. Finally, a data augmentation module is added, which uses the limited training samples to generate more diverse training data close to the real distribution, encouraging the model to learn more robust features, improving its generalization ability and further improving the accuracy of face recognition.
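For the improved centering step, the text states only that a histogram of the input data is computed and a weighted mean is derived from it; how the per-bin weights are formed is not specified. The sketch below therefore makes an explicit assumption, suppressing sparsely populated (extreme-value) bins before averaging the bin centers, and should be read as one plausible reading rather than the disclosed formula.

```python
# Hedged sketch of histogram-based weighted-mean centring before PCA. The weighting
# rule (zeroing sparsely populated bins) is an assumption; the disclosure only says
# a histogram is computed and a weighted mean is taken from it.
import numpy as np

def histogram_weighted_mean(values, bins=32, min_fraction=0.01):
    counts, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    weights = counts.astype(float)
    weights[counts < min_fraction * len(values)] = 0.0   # assumed: suppress sparse extreme bins
    return float(np.average(centers, weights=weights))

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(120, 10, 995), np.full(5, 255.0)])  # a few extreme values
mu = histogram_weighted_mean(pixels)
print(np.mean(pixels), mu)   # the weighted mean is pulled far less toward 255
centred = pixels - mu        # centring before extracting principal components
```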
It should be noted that although the steps of the method of the present disclosure are described in a particular order in the drawings, this does not require or imply that the steps must be performed in that order, or that all of the illustrated steps must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, several steps may be combined into one, and/or one step may be split into several steps.
In addition, this exemplary embodiment further provides a face recognition apparatus based on the improved PCA+SVC. Referring to FIG. 4, the face recognition apparatus 400 based on the improved PCA+SVC may include: an input face image center module 410, a data augmentation module 420, a facial feature extraction module 430, a recognition and classification module 440 and an output recognition result center module 450.
Wherein:
The input face image center module 410 is configured to receive face image information, input the face image information into the face image center, read the image in the form of a two-dimensional grayscale matrix so as to convert the face image information into a one-dimensional data sequence, and send the one-dimensional data sequence to the data augmentation module;
The data augmentation module 420 is configured to, after receiving the one-dimensional data sequence, randomly apply random cropping, color changes, horizontal flipping, vertical flipping, noise addition and affine transformation to the one-dimensional data sequence to generate a data-augmented one-dimensional data sequence, and send the data-augmented one-dimensional data sequence to the facial feature extraction module;
The facial feature extraction module 430 is configured to, after receiving the one-dimensional data sequence or the data-augmented one-dimensional data sequence, perform data feature extraction on it based on the PCA method to generate facial features, and send the facial features to the recognition and classification module;
The recognition and classification module 440 is configured to, after receiving the facial features, perform parallel recognition and classification on them with the Gaussian-kernel SVC method, the polynomial-kernel SVC method and the Sigmoid-kernel SVC method, generate the corresponding Gaussian-kernel, polynomial-kernel and Sigmoid-kernel SVC classification results, and perform centralized arbitration based on the preset weights to produce the final decision result and complete the face recognition;
The output recognition result center module 450 is configured to receive the face recognition result sent by the recognition and classification module and print out the face recognition result.
The specific details of each module of the above face recognition apparatus based on the improved PCA+SVC have already been described in detail in the corresponding face recognition method based on the improved PCA+SVC, and are therefore not repeated here.
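Purely as an illustration of how the five modules could be wired together, the sketch below strings minimal stand-ins for modules 410–440 into a pipeline; scikit-learn supplies the PCA and SVC building blocks (an implementation assumption), the augmentation is reduced to a horizontal flip for brevity, and all names and data are hypothetical.

```python
# Hedged end-to-end sketch of the module pipeline of FIG. 4, with scikit-learn
# standing in for the PCA and SVC building blocks (an implementation assumption).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def input_center(images):                      # module 410: 2-D grey matrices -> 1-D sequences
    return np.stack([img.reshape(-1) for img in images])

def augment(flat, height, width):              # module 420: reduced here to a horizontal flip
    imgs = flat.reshape(-1, height, width)
    flipped = imgs[:, :, ::-1].reshape(len(flat), -1)
    return np.concatenate([flat, flipped])

def extract_features(flat, n_components=10):   # module 430: PCA feature extraction
    pca = PCA(n_components=n_components).fit(flat)
    return pca, pca.transform(flat)

def ensemble_classifiers(features, labels):    # module 440: the three-kernel ensemble
    return [SVC(kernel=k).fit(features, labels) for k in ("rbf", "poly", "sigmoid")]

h, w = 8, 8
rng = np.random.default_rng(0)
images = rng.random((30, h, w))                # synthetic stand-ins for grayscale face images
labels = np.repeat(np.arange(3), 10)           # three hypothetical identities

flat = input_center(images)
flat_aug = augment(flat, h, w)
labels_aug = np.concatenate([labels, labels])  # flipped copies keep their labels
pca, feats = extract_features(flat_aug)
models = ensemble_classifiers(feats, labels_aug)
print([m.score(feats, labels_aug) for m in models])
# Module 450 would print the arbitrated result; see the decision-centre sketch above.
```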
It should be noted that although several modules or units of the face recognition apparatus 400 based on the improved PCA+SVC are mentioned in the detailed description above, this division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided among and embodied by multiple modules or units.
In addition, in the exemplary embodiments of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will understand that various aspects of the present invention may be implemented as a system, a method or a program product. Therefore, various aspects of the present invention may take the form of a purely hardware embodiment, a purely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software, which may collectively be referred to herein as a "circuit", a "module" or a "system".
An electronic device 500 according to such an embodiment of the present invention is described below with reference to FIG. 5. The electronic device 500 shown in FIG. 5 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in FIG. 5, the electronic device 500 takes the form of a general-purpose computing device. Components of the electronic device 500 may include, but are not limited to: the at least one processing unit 510, the at least one storage unit 520, a bus 530 connecting the different system components (including the storage unit 520 and the processing unit 510), and a display unit 540.
The storage unit stores program code which can be executed by the processing unit 510, so that the processing unit 510 performs the steps of the various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification. For example, the processing unit 510 may perform steps S110 to S140 shown in FIG. 1.
The storage unit 520 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) 5201 and/or a cache memory 5202, and may further include a read-only memory (ROM) 5203.
The storage unit 520 may also include a program/utility 5204 having a set of (at least one) program modules 5205, including but not limited to: an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
The bus 530 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, the processing unit, or a local bus using any of a variety of bus structures.
The electronic device 500 may also communicate with one or more external devices 570 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication may take place via an input/output (I/O) interface 550. Moreover, the electronic device 500 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 560. As shown in the figure, the network adapter 560 communicates with the other modules of the electronic device 500 through the bus 530. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
Through the description of the above embodiments, those skilled in the art will readily understand that the exemplary embodiments described here may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In the exemplary embodiments of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, various aspects of the present invention may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps of the various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification.
Referring to FIG. 6, a program product 600 for implementing the above method according to an embodiment of the present invention is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited to this; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device.
The program product may use any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device.
The program code contained on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In addition, the above drawings are merely schematic illustrations of the processing included in the methods according to the exemplary embodiments of the present invention and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the chronological order of these processes, and that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (11)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310070590.4A CN115995110A (en) | 2023-01-17 | 2023-01-17 | Face recognition method based on improved PCA+SVC |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN115995110A true CN115995110A (en) | 2023-04-21 |
Family
ID=85990066
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310070590.4A Pending CN115995110A (en) | 2023-01-17 | 2023-01-17 | Face recognition method based on improved PCA+SVC |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115995110A (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040017932A1 (en) * | 2001-12-03 | 2004-01-29 | Ming-Hsuan Yang | Face recognition using kernel fisherfaces |
| CN104408440A (en) * | 2014-12-10 | 2015-03-11 | 重庆邮电大学 | Identification method for human facial expression based on two-step dimensionality reduction and parallel feature fusion |
| CN108932501A (en) * | 2018-07-13 | 2018-12-04 | 江苏大学 | A kind of face identification method being associated with integrated dimensionality reduction based on multicore |
| CN109387484A (en) * | 2018-10-24 | 2019-02-26 | 湖南农业大学 | A kind of ramee variety recognition methods of combination EO-1 hyperion and support vector cassification |
| CN113792678A (en) * | 2021-09-17 | 2021-12-14 | 华院分析技术(上海)有限公司 | Face recognition method, system, storage medium and device based on PCA and Relieff SVM |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |