
CN110097022A - 2DPCA face image recognition method based on bidirectional interpolation enhancement - Google Patents

2DPCA face image recognition method based on bidirectional interpolation enhancement

Info

Publication number
CN110097022A
CN110097022A (application CN201910389944.5A)
Authority
CN
China
Prior art keywords
vector
2dpca
interpolation
projection
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910389944.5A
Other languages
Chinese (zh)
Inventor
文成林
牛冰川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910389944.5A priority Critical patent/CN110097022A/en
Publication of CN110097022A publication Critical patent/CN110097022A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a 2DPCA face image recognition method based on bidirectional interpolation enhancement. The ORL face database is first divided into training samples and test samples; eigenvalues and eigenvectors are then extracted from the training samples with the PCA, 2DPCA, and (2D)2PCA methods. The extracted eigenvectors are enhanced by interpolation, and recognition is finally performed with the norm distance method and the support vector machine method. By inserting new vectors between high-value eigenvectors, the invention increases the prominence of feature information and improves image recognition accuracy without adding significant computational complexity.

Description

2DPCA Face Image Recognition Method Based on Bidirectional Interpolation Enhancement

Technical Field

The invention belongs to the field of image processing and relates to a 2DPCA face image recognition method based on bidirectional interpolation enhancement.

Background Art

Face recognition is an active research problem in pattern recognition, and its role keeps growing in the rapidly developing era of intelligent information. Among the many face recognition methods, principal component analysis (PCA) is one of the main ways to extract eigenfaces.

The principal components extracted by PCA are mutually orthogonal, which removes interactions among the original data components; the idea is simple and easy to implement on a computer. However, PCA must convert each image matrix into a one-dimensional vector, which makes the covariance matrix excessively large and the computation heavy, and it does not exploit the symmetry of face images.

2DPCA was then proposed. Unlike PCA, which operates on one-dimensional vectors, 2DPCA constructs the covariance matrix directly from the original image matrix and extracts the principal eigenvectors, greatly improving recognition efficiency. Although 2DPCA achieves higher recognition accuracy than PCA, an important drawback is that it needs more coefficients than PCA to represent an image: 2DPCA essentially works along the column direction of the image, the dimensionality in the row direction is not reduced, and the complexity remains high. 2DPCA was therefore extended to consider the row and column directions simultaneously, giving (2D)2PCA. Compressing both directions at once improves recognition speed, but accuracy drops to varying degrees. Analysis shows that this is caused on the one hand by excessive information compression; on the other hand, the projection axes are pairwise orthogonal, with no redundancy among rows or among columns, which makes it hard to represent the direction of maximum projection for the projected feature vectors.
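The dimensionality argument above can be illustrated with a small sketch (illustrative only; the image size and sample count are invented): for m×n images, PCA builds an mn×mn covariance matrix from flattened vectors, while 2DPCA builds only an n×n matrix directly from the image matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, num = 8, 6, 10          # image height, width, number of samples (made up)
images = rng.normal(size=(num, m, n))

# PCA: flatten each image to a 1 x (m*n) vector -> covariance is (m*n) x (m*n)
flat = images.reshape(num, m * n)
cov_pca = np.cov(flat, rowvar=False)

# 2DPCA: work on image matrices directly -> covariance is only n x n
mean_img = images.mean(axis=0)
cov_2dpca = sum((A - mean_img).T @ (A - mean_img) for A in images) / num

print(cov_pca.shape)    # (48, 48)
print(cov_2dpca.shape)  # (6, 6)
```

For realistic image sizes the gap is dramatic (a 112×92 ORL image gives a 10304×10304 PCA covariance versus a 92×92 one for 2DPCA), which is the efficiency argument the paragraph makes.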

Summary of the Invention

Aiming at the deficiencies of the prior art, the present invention proposes a 2DPCA face image recognition method based on bidirectional interpolation enhancement. The key point is that, starting from a ranking of eigenvectors by value, the invention inserts new vectors between high-value eigenvectors so as to increase the prominence of feature information and improve image recognition accuracy without adding significant computational complexity.

The method of the present invention comprises the following steps:

Step 1. Divide the ORL face database into training samples and test samples.

Step 2. Extract eigenvalues and eigenvectors from the training samples of step 1 with the PCA method.

Step 3. The extraction of step 2 must convert each image matrix into a one-dimensional vector, which makes the covariance matrix excessively large and the computation heavy. To address this, extract eigenvalues and eigenvectors from the training samples of step 1 with the 2DPCA method.

Step 4. The extraction of step 3 works along the column direction of the image; the dimensionality in the row direction is not reduced and the complexity remains high. To address this, extract eigenvalues and eigenvectors from the training samples of step 1 with the (2D)2PCA method.

Step 5. Interpolate the eigenvectors extracted in steps 2, 3, and 4 using the interpolation method.

The interpolation method is specifically as follows:

Let u1 and u2 be two projection axes, i.e. two eigenvectors. Let V be any vector in the coordinate system, with angle α between V and the projection axis u1. Let G be any point on V, and project G onto the two projection axes u1 and u2, giving projection lengths a and b respectively, with a > b. Insert an arbitrary vector W between V and the projection axis u1, and let the angle between V and W be β; then β < α. Projecting G onto W clearly gives a projection length greater than a, and as the angle β between the inserted vector W and V decreases, the projection length increases and the implied features gradually stand out.

Step 6. Perform recognition using the norm distance method or the support vector machine method.

Beneficial effects of the present invention: by inserting new vectors between high-value eigenvectors, the invention increases the prominence of feature information and improves image recognition accuracy without adding significant computational complexity.

Brief Description of the Drawings

Figure 1: schematic diagram of the interpolation method;

Figure 2: schematic diagram of support vector machine classification;

Figure 3: flowchart of the present invention.

Detailed Description of Embodiments

The present invention is further described below in conjunction with the accompanying drawings.

As shown in Figure 3, the present invention first extracts eigenvalues and eigenvectors by principal component analysis (PCA), two-dimensional principal component analysis (2DPCA), and bidirectional two-dimensional principal component analysis ((2D)2PCA). The eigenvectors are then enhanced with the interpolation method shown in Figure 1, and recognition is finally performed by the norm distance and by the support vector machine method shown in Figure 2. The procedure comprises the following steps:

Step 1. Divide the ORL face database into training samples and test samples.

Step 2. Extract eigenvalues and eigenvectors.

Step 2.1. Extract eigenvalues and eigenvectors from the training samples of step 1 with the PCA method, as follows:

Suppose a face database contains the face images of N people, each person having ni images from different viewpoints, denoted

Aij∈Rm×n,i=1,2,…,N;j=1,2,…,ni (1)A ij ∈ R m×n , i=1,2,…,N; j=1,2,…,n i (1)

where R denotes the set of real numbers and each face image matrix has size m×n; m and n are the numbers of rows and columns of the face image matrix, respectively.

From each person's images, select the first ri images for training and keep the rest for testing. Convert each face image Aij, in the same scan order, into a one-dimensional vector aij ∈ R1×mn (one row, mn columns):

aij=[aij(1),aij(2),…,aij(mn)] (2)a ij =[a ij (1),a ij (2),...,a ij (mn)] (2)

In this way, each person's training sample set is recorded as:

Xi=[ai1; ai2; …; airi]∈Rri×mn (3)

The full training sample set is recorded as:

X=[X1; X2; …; XN]∈RM×mn, M=r1+r2+…+rN (4)

The mean of the one-dimensional vectors of Eq. (2) over the training samples is recorded as:

ā=(1/M)ΣiΣjaij (5)

The mean matrix of the training samples is recorded as:

Ā=[ā; ā; …; ā]∈RM×mn (6)

where

Ā=1Mā (7)

i.e. the mean vector ā replicated into M rows (1M denotes the M×1 all-ones vector).

The deviation matrix is recorded as:

Φ=X−Ā (8)

The covariance matrix of the training sample set is recorded as:

G=(1/M)ΦTΦ (9)

Compute the first d eigenvalues (λ1≥λ2≥…≥λd) of the covariance matrix and the corresponding eigenvectors (u1,u2,…,ud).

The matrix converted to the low-dimensional space is recorded as:

Yi=[yi1,yi2,…,yik,…,yij],j=1,2,…,ni (10)Y i =[y i1 ,y i2 ,...,y ik ,...,y ij ], j=1,2,...,n i (10)

where

yij=UTaij (11)

U=[u1,u2,…,ud]T (12)
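The pipeline of step 2.1 (flatten, center, eigendecompose, project) can be sketched as follows; the tiny random "face" matrices and the choice d = 2 are illustrative only, not the ORL data.

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(size=(12, 5, 4))     # 12 made-up 5x4 "face images"

X = train.reshape(len(train), -1)       # flatten each image to a 1 x mn row, Eq. (2)
mean = X.mean(axis=0)                   # mean vector, Eq. (5)
dev = X - mean                          # deviation matrix, Eq. (8)
cov = dev.T @ dev / len(X)              # covariance matrix, mn x mn, Eq. (9)

vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1]          # sort descending: lambda1 >= lambda2 >= ...
d = 2
U = vecs[:, order[:d]]                  # top-d eigenvectors u1..ud

Y = dev @ U                             # low-dimensional features, Eq. (11)
print(Y.shape)                          # (12, 2)
```

`np.linalg.eigh` is used because the covariance matrix is symmetric; the descending reorder reproduces the λ1≥λ2≥… convention of the text.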

Step 2.2. Extract eigenvalues and eigenvectors from the training samples of step 1 with 2DPCA.

Let x be the optimal projection direction in the column direction. Projecting any sample image A onto x yields the projected feature vector, recorded as:

yij=Aijx,i=1,2,…,N;j=1,2,…,ri (13)y ij =A ij x, i=1,2,...,N; j=1,2,...,r i (13)

To determine the optimal projection axis, introduce the covariance matrix Gt of the sample image A, the covariance matrix Sx of the projected feature vector Y, and the trace tr(Sx) of Sx. The optimal projection direction is found by maximizing the trace, with the criterion:

J(X)=tr(Sx) (14)J(X)=tr(S x ) (14)

Let:

Sx=E[(Y−EY)(Y−EY)T] (15)

Substituting Y=Ax from Eq. 13 into Eq. 15 and then into Eq. 14 gives:

J(X)=xTE[(A−EA)T(A−EA)]x (16)

Let:

Gt=E[(A−EA)T(A−EA)] (17)

which over the M training images is estimated as:

Gt=(1/M)ΣiΣj(Aij−Ā)T(Aij−Ā) (18)

where Ā is the mean image:

Ā=(1/M)ΣiΣjAij (19)

Substituting Eq. 17 into Eq. 16 gives:

J(X)=xTGtx (20)

This criterion is the total scatter criterion.

Perform an eigendecomposition (singular value decomposition) of Gt to obtain the eigenvalues λi (i=1,2,…,n), with λ1≥λ2≥…≥λn, and the corresponding singular vectors ui (i=1,2,…,n), U=[u1,u2,…,un]. Gt can therefore be written as:

Gt=UΛUT, Λ=diag(λ1,λ2,…,λn) (21)

In general, a single optimal projection axis is not enough, so the first d principal features are selected: singular vectors ui (i=1,2,…,d) and eigenvalues λi (i=1,2,…,d), giving Ud=[u1,u2,…,ud], d≤n.

In Eq. (20), J(X) attains its maximum only when x lies along the eigenvector ui associated with the largest eigenvalue λi; the projection onto x is then largest, and tr(Sx) is maximized.

After the optimal projection axes are found, project any sample image A onto them:

yk=Axk,k=1,2,…,d (24)y k =Ax k , k=1,2,...,d (24)

where x1,x2,…,xd are the optimal projection axes, and the projected feature vectors y1,y2,…,yd are called the principal component vectors of the sample image A.
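The 2DPCA construction of Gt and the projection of Eq. (24) might look like this in code (a sketch only; sizes and data are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
train = rng.normal(size=(12, 5, 4))     # 12 toy 5x4 images
mean_img = train.mean(axis=0)           # mean image, Eq. (19)

# Image covariance matrix Gt (n x n), built from image matrices directly, Eq. (18)
Gt = sum((A - mean_img).T @ (A - mean_img) for A in train) / len(train)

vals, vecs = np.linalg.eigh(Gt)         # Gt is symmetric
order = np.argsort(vals)[::-1]
d = 2
Xd = vecs[:, order[:d]]                 # optimal projection axes x1..xd

Y = train[0] @ Xd                       # principal component vectors of one sample, Eq. (24)
print(Y.shape)                          # (5, 2)
```

Note that the projected feature Y keeps the full row dimension m (here 5), which is exactly the row-direction redundancy that motivates (2D)2PCA below.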

Step 2.3. Extract eigenvalues and eigenvectors from the training samples of step 1 with the (2D)2PCA method.

Column mapping formula:

yij=Aijxk,k=1,2,…,d (25)y ij =A ij x k , k=1,2,...,d (25)

where i=1,2,…,N; j=1,2,…,ri, and yij∈Rm×d.

Row mapping formula:

cij=vTAij,i=1,2,…,N;j=1,2,…,ri (26)c ij =v T A ij ,i=1,2,…,N; j=1,2,…,r i (26)

Joint mapping formula:

zij=vTAijx,i=1,2,…,N;j=1,2,…,ri (27)z ij = v T A ij x, i = 1, 2, ..., N; j = 1, 2, ..., r i (27)

Step 3. Interpolation enhancement

Step 3.1. The interpolation enhancement method is as follows:

In Figure 1, V is a vector at an angle of 30° to the projection axis u1, and u1 and u2 are two orthogonal projection axes. Points z and o are the projections of a point G on V onto the two axes. Taking |OG|=10, the projection lengths from the projection points to the origin are:

a=|Oz|=10cos30°=5√3 (28)

b=|Oo|=10cos60°=5 (29)

so that a>b.

Let W be a vector at an angle of 45° to u1; the angle between W and V is then 15°, and the projection of G onto W is:

10cos15°≈9.66>a (30)

which brings out the hidden features.

If interpolation continues, giving a vector W′ between u1 and V at 22.5° to u1, then the angle between W′ and V is 7.5°, and the projection of G onto W′ is:

10cos7.5°≈9.91 (31)

The hidden features are thus highlighted still further. As the interpolation density increases, the projection direction approaches the direction to be projected onto, and the features become more and more prominent.

Let any projected feature vector ui be recorded as:

ui=(a,b)T,i=1,…,d (32)u i =(a,b) T ,i=1,...,d (32)

The maximum projection direction is determined by solving for the parameter s. Inserting corresponding vectors between arbitrary pairs of axes in the plane can therefore highlight hidden features to varying degrees.

Step 3.2. Use the interpolation method of step 3.1 to interpolate Eq. (11), Eq. (24), and Eq. (27), respectively.

Step 4. Recognition

Recognition is performed with the support vector machine or the norm distance method.

4.1 Norm distance recognition works as follows:

The norm distance is defined via:

A′=Y(train)−Y(test) (33)

D=‖A′‖ (34)

where A′ denotes the difference between two projected feature matrices, Y(train) is the projection matrix of a training sample extracted in step 1, and Y(test) is the projection matrix of a test sample. A test image is assigned to the class of the training sample minimizing the distance:

identity=argmin D (35)

When m=1, D is computed on PCA projection matrices; when m=2, on 2DPCA projection matrices; when m=3, on (2D)2PCA projection matrices.
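A minimal nearest-neighbor classifier over projected feature matrices can sketch this step. The Frobenius norm is an assumption here (the text does not pin down which norm is used), and the feature matrices and labels are invented.

```python
import numpy as np

def classify(test_feat, train_feats, train_labels):
    """Assign the label of the training feature matrix nearest in norm distance."""
    dists = [np.linalg.norm(test_feat - Y) for Y in train_feats]  # Frobenius norm
    return train_labels[int(np.argmin(dists))]

# Toy projected feature matrices for two classes (made-up data)
train_feats = [np.zeros((5, 2)), np.ones((5, 2))]
train_labels = ["person_A", "person_B"]
test_feat = np.full((5, 2), 0.9)

print(classify(test_feat, train_feats, train_labels))   # person_B
```

The same function works unchanged for PCA, 2DPCA, or (2D)2PCA features, since only the shape of the feature matrices differs.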

4.2 The support vector machine principle is shown in Figure 2:

model=svmtrain(train_label,train_data,options) (36)model=svmtrain(train_label, train_data, options) (36)

[predict,accuracy]=svmpredict(test_label,test_data,model) (37)[predict,accuracy]=svmpredict(test_label,test_data,model) (37)

Call the svmtrain and svmpredict functions: train_label is the training sample label vector and train_data the training sample data; test_label is the test sample label vector and test_data the test sample data. predict is the predicted class and accuracy the prediction accuracy.
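The svmtrain/svmpredict signatures above match the LIBSVM MATLAB interface. A rough Python equivalent using scikit-learn's SVC (an assumed substitute, not the patent's implementation; the toy features and labels are invented) could look like:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Toy feature vectors for two people (stand-ins for interpolated projection features)
train_data = np.vstack([rng.normal(0.0, 0.3, size=(10, 4)),
                        rng.normal(3.0, 0.3, size=(10, 4))])
train_label = np.array([0] * 10 + [1] * 10)
test_data = np.array([[0.1, 0.0, 0.2, -0.1],
                      [3.1, 2.9, 3.0, 3.2]])

model = SVC(kernel="linear").fit(train_data, train_label)   # ~ svmtrain
predict = model.predict(test_data)                          # ~ svmpredict
print(predict.tolist())                                     # [0, 1]
```

In practice each row would hold the flattened, interpolation-enhanced projection features of one face image, with one class per person.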

In summary: the present invention improves image recognition accuracy without adding significant computational complexity, and thus has important practical value.

Claims (1)

1. A 2DPCA face image recognition method based on bidirectional interpolation enhancement, characterized in that the method comprises the following steps:
Step 1. Divide the ORL face database into training samples and test samples;
Step 2. Extract eigenvalues and eigenvectors from the training samples of step 1 with the PCA method;
Step 3. Extract eigenvalues and eigenvectors from the training samples of step 1 with the 2DPCA method;
Step 4. Extract eigenvalues and eigenvectors from the training samples of step 1 with the (2D)2PCA method;
Step 5. Interpolate the eigenvectors extracted in steps 2, 3, and 4 using the interpolation method;
Step 6. Perform face image recognition using the norm distance method or the support vector machine method;
wherein the interpolation method is specifically:
Let u1 and u2 be two projection axes, i.e. two eigenvectors. Let V be any vector in the coordinate system, with angle α between V and the projection axis u1. Let G be any point on V, projected onto the two projection axes u1 and u2 with projection lengths a and b respectively, assuming a > b. Insert an arbitrary vector W between V and the projection axis u1, and let the angle between V and W be β; then β < α. Projecting G onto W clearly gives a projection length greater than a; as the angle between the inserted vector W and V decreases, the projection length increases and the implied features gradually stand out.
CN201910389944.5A 2019-05-10 2019-05-10 2DPCA facial image recognition method based on the enhancing of two-way interpolation Pending CN110097022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910389944.5A CN110097022A (en) 2019-05-10 2019-05-10 2DPCA facial image recognition method based on the enhancing of two-way interpolation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910389944.5A CN110097022A (en) 2019-05-10 2019-05-10 2DPCA facial image recognition method based on the enhancing of two-way interpolation

Publications (1)

Publication Number Publication Date
CN110097022A true CN110097022A (en) 2019-08-06

Family

ID=67447726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910389944.5A Pending CN110097022A (en) 2019-05-10 2019-05-10 2DPCA facial image recognition method based on the enhancing of two-way interpolation

Country Status (1)

Country Link
CN (1) CN110097022A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030165260A1 (en) * 2002-03-04 2003-09-04 Samsung Electronics Co, Ltd. Method and apparatus of recognizing face using 2nd-order independent component analysis (ICA)/principal component analysis (PCA)
CN101482917A (en) * 2008-01-31 2009-07-15 重庆邮电大学 Human face recognition system and method based on second-order two-dimension principal component analysis
CN103390154A (en) * 2013-07-31 2013-11-13 中国人民解放军国防科学技术大学 Face recognition method based on evolutionary multi-feature extraction
CN104951774A (en) * 2015-07-10 2015-09-30 浙江工业大学 Palm vein feature extracting and matching method based on integration of two sub-spaces
CN108564061A (en) * 2018-04-28 2018-09-21 河南工业大学 A kind of image-recognizing method and system based on two-dimensional principal component analysis

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832467A (en) * 2020-07-09 2020-10-27 杭州电子科技大学 A face recognition method based on joint feature enhancement and network parameter optimization
CN111832467B (en) * 2020-07-09 2022-06-14 杭州电子科技大学 A face recognition method based on joint feature enhancement and network parameter optimization

Similar Documents

Publication Publication Date Title
Wang et al. Robust 3D face recognition by local shape difference boosting
US10198623B2 (en) Three-dimensional facial recognition method and system
CN104318219B (en) Face recognition method based on combination of local features and global features
CN102902979B (en) A kind of method of synthetic-aperture radar automatic target detection
CN108681721A (en) Face identification method based on the linear correlation combiner of image segmentation two dimension bi-directional data
CN105469117B (en) A kind of image-recognizing method and device extracted based on robust features
CN101930537A (en) Three-dimensional face recognition method and system based on bending invariant correlation features
CN106503633A (en) The method for building up in face characteristic storehouse in a kind of video image
CN107918761A (en) A kind of single sample face recognition method based on multiple manifold kernel discriminant analysis
CN105184281A (en) Face feature library building method based on high-dimensional manifold learning
CN107025444A (en) Piecemeal collaboration represents that embedded nuclear sparse expression blocks face identification method and device
CN104616000A (en) Human face recognition method and apparatus
CN118823856A (en) A facial expression recognition method based on multi-scale and deep fine-grained feature enhancement
CN111274883A (en) A synthetic sketch face recognition method based on multi-scale HOG features and deep features
CN109948652A (en) A plant species identification method based on local discriminative CCA based on leaf-flower fusion
CN106056131A (en) Image feature extraction method based on LRR-LDA
Wenjing et al. Face recognition based on the fusion of wavelet packet sub-images and fisher linear discriminant
CN111488840A (en) Human behavior classification method based on multi-task learning model
CN112001231B (en) Method, system and medium for 3D face recognition based on weighted multi-task sparse representation
CN110097022A (en) 2DPCA facial image recognition method based on the enhancing of two-way interpolation
Lee et al. Face image retrieval using sparse representation classifier with gabor-lbp histogram
CN110276263B (en) Face recognition system and recognition method
CN112329698A (en) Face recognition method and system based on intelligent blackboard
Gong et al. Person re-identification based on two-stream network with attention and pose features
Kaur et al. Comparative study of facial expression recognition techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190806)