
CN107145826B - Person Re-identification Method Based on Double Constraint Metric Learning and Sample Reordering - Google Patents


Info

Publication number
CN107145826B
Authority
CN
China
Prior art keywords
camera
candidate
matrix
cross
pedestrian
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710213894.6A
Other languages
Chinese (zh)
Other versions
CN107145826A (en)
Inventor
于慧敏 (Yu Huimin)
谢奕 (Xie Yi)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201710213894.6A
Publication of CN107145826A
Application granted
Publication of CN107145826B


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian re-identification method based on double-constraint metric learning and sample reordering, which comprises a training stage and a testing stage. The training stage includes the following steps: establishing cross-camera association constraints; establishing same-camera association constraints; and solving the metric matrix. The testing stage includes the following steps: projecting features into a new feature space with the metric matrix; computing the Euclidean distance between the query picture and each candidate picture in that space; computing the initial ranking of the candidate pictures; selecting the top K candidate pictures from the ranking queue; constructing a probabilistic hypergraph from the correlations of the top K candidate pictures in the feature space; computing the reordering result based on the probabilistic hypergraph; and returning the final ranking of the candidate pictures. The invention considers two kinds of association constraints on the training samples simultaneously, so that the learned feature space is better suited to pedestrian re-identification, and reorders the candidates by exploiting their correlations, yielding more accurate pedestrian re-identification results.

Figure 201710213894 (abstract drawing)

Description

Pedestrian re-identification method based on double-constraint metric learning and sample reordering
Technical Field
The invention relates to a method in the technical field of video image processing, in particular to a pedestrian re-identification method based on double-constraint metric learning and sample reordering.
Background
Video surveillance provides a rich source of information for safety early warning, investigation and evidence collection, suspect tracking and other tasks. However, the coverage of a single camera is very limited, so a large or complex scene (e.g. a railway station, airport or campus) cannot be monitored in all directions. To capture more comprehensive information over a public area, a large number of surveillance cameras usually have to work in concert. Traditional video processing techniques are designed mainly for a single camera: once a pedestrian target leaves the current view, its whereabouts can no longer be determined. How to re-identify a pedestrian in the camera network from a query picture of the target, and thus establish the identity association of that pedestrian across different cameras, has therefore become a core problem in intelligent video surveillance.
For the pedestrian re-identification problem, traditional methods rely mainly on the appearance of the pedestrian image: features such as color, shape and texture are extracted, and the re-identification result is obtained from feature similarity. However, differences in illumination, viewing angle and pedestrian posture between cameras can significantly change the appearance of the same pedestrian, and similarity matching of appearance features alone cannot reach satisfactory re-identification accuracy. Metric learning offers an important means of mitigating the influence of cross-camera variation: a metric matrix is learned from a training set so that pedestrian pictures can be projected into a new feature space in which the feature distance between pictures of the same pedestrian is smaller and the distance between pictures of different pedestrians is larger. However, existing metric learning algorithms consider only the cross-camera association information between pedestrian pictures from different cameras during training and ignore the associations between different pedestrians within the same camera. Moreover, metric learning algorithms tend to overfit the training set, so relying entirely on the learned distance metric matrix for similarity ranking in the testing stage yields suboptimal re-identification results.
To address these shortcomings of existing metric-learning-based pedestrian re-identification methods, the double-constraint metric learning technique provided by the invention considers both same-camera and cross-camera association information between training samples during metric learning, and learns a feature-space projection matrix with stronger discriminability. In addition, by introducing a reordering technique in the testing stage, the method exploits the association information among the candidate pictures to effectively alleviate the influence of overfitting in metric learning, and obtains a candidate ranking that is more stable and accurate than existing pedestrian re-identification techniques.
Disclosure of Invention
The invention provides a pedestrian re-identification method based on double-constraint metric learning and sample reordering to solve the above problems in the prior art, thereby improving the accuracy and stability of existing metric-learning-based pedestrian re-identification methods.
To this end, the invention discloses a pedestrian re-identification method based on double-constraint metric learning and sample reordering, which comprises a training stage and a testing stage;
the training phase comprises the steps of:
step 1, establishing the cross-camera association constraint: pedestrian pictures from different cameras in the training set are combined into cross-camera sample pairs, and a constraint term is established so that the feature distance between cross-camera positive sample pairs is smaller than the feature distance between cross-camera negative sample pairs;
step 2, establishing the same-camera association constraint: pedestrian pictures from the same camera in the training set are combined into same-camera sample pairs, and a constraint term is established so that the feature distance between same-camera negative sample pairs is larger than the feature distance between cross-camera positive sample pairs;
step 3, solving the metric matrix: the objective function of double-constraint metric learning is obtained by combining the two constraint terms of step 1 and step 2, and the positive semi-definite metric matrix M that minimizes the objective function is solved for, giving the training result of metric learning and ending the training stage;
the testing phase comprises the following steps:
step 4, feature space projection using the metric matrix: according to the positive semi-definiteness of the metric matrix M, it is decomposed as M = P^T P, and the matrix P is used to project the feature vector x_p of the query picture in the testing stage and the feature vectors {y_i}, i = 1, …, N, of the candidate set uniformly into a new feature space, where N is the total number of pictures in the candidate set in the testing stage;
step 5, computing the Euclidean distance between the query picture and each candidate picture in the new feature space:
    d_i = || P x_p − P y_i ||_2 ,  i = 1, …, N
step 6, computing the initial ranking of the candidate pictures: the candidate pictures are sorted according to the Euclidean distances computed in step 5, and candidate pictures with a smaller Euclidean distance to the query picture are ranked closer to the front;
step 7, selecting the top K candidate pictures in the ranking queue: the K highest-ranked candidate pictures are selected from the candidate ranking queue obtained in step 6;
step 8, constructing a probabilistic hypergraph from the correlations of the top K candidate pictures in the feature space: the query picture and the K candidate pictures are taken as the vertices of the probabilistic hypergraph, the hyperedges of the probabilistic hypergraph are generated from the correlations between the vertices, and finally each hyperedge is assigned a corresponding weight;
step 9, computing the reordering result based on the probabilistic hypergraph: the Laplacian matrix of the probabilistic hypergraph is computed and combined with the empirical loss of the initial labels to build an objective function, the ranking scores of the candidate pictures are computed from that objective function, and the K candidate pictures are reordered in descending order of ranking score;
step 10, returning the final ranking of the candidate pictures: the ranking positions of the top K pictures in the ranking queue of step 6 are replaced with the reordering result of the K candidate pictures from step 9, and the whole candidate-set ranking queue is returned as the final result of pedestrian re-identification.
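As an illustrative reading of the testing stage (steps 4-10) only, and not part of the patent text, the following Python sketch shows one way the pipeline could be organized; rerank_fn is a hypothetical placeholder for the probabilistic-hypergraph reordering of steps 8-9, which is sketched further below.

    import numpy as np

    def rank_candidates(M, x_p, gallery, rerank_fn, K=100, mu=0.01):
        # Step 4: decompose M = P^T P via its eigendecomposition and project
        # the query feature x_p and the gallery (candidate) features.
        w, V = np.linalg.eigh(M)
        P = np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
        q = P @ x_p
        G = gallery @ P.T                      # one projected candidate per row
        # Steps 5-6: Euclidean distances in the projected space, initial ranking.
        d = np.linalg.norm(G - q, axis=1)
        order = np.argsort(d)
        # Steps 7-9: re-rank only the top-K candidates; rerank_fn stands in for
        # the probabilistic-hypergraph reordering of steps 8-9.
        topk = order[:K]
        scores = rerank_fn(np.vstack([q, G[topk]]), mu=mu)
        # Step 10: splice the re-ranked top-K back into the full ranking queue.
        order[:K] = topk[np.argsort(-scores)]
        return order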
Further: the establishment of the cross-camera association constraint in the step 1 comprises the following steps:
step 1.1, define the training pictures from different cameras as the query set X = {x_i}, i = 1, …, n, and the candidate set Y = {y_j}, j = 1, …, m, where x_i and y_j are feature vectors of pedestrian pictures, l_i^x and l_j^y are the corresponding pedestrian identity labels, n is the total number of pictures in the query set, and m is the total number of pictures in the candidate set;
step 1.2, define a sample pair (x_i, y_j) composed of pedestrian pictures from different cameras as a cross-camera sample pair; when x_i and y_j belong to the same pedestrian, i.e. l_i^x = l_j^y, (x_i, y_j) is called a cross-camera positive sample pair and z_ij = 1 is defined; when l_i^x ≠ l_j^y, (x_i, y_j) is called a cross-camera negative sample pair and z_ij = −1 is set;
step 1.3, constrain the distance between any cross-camera positive sample pair (x_i, y_j) in the training set to be smaller than the distance between the cross-camera negative sample pair (x_i, y_k):
    d_M(x_i, y_j) < d_M(x_i, y_k)
where d_M(·,·) is the Mahalanobis distance metric function to be learned, expressed as:
    d_M(x_i, y_j) = (x_i − y_j)^T M (x_i − y_j)
in which M is a positive semi-definite metric matrix, i.e. the target of metric learning;
step 1.4, transform the constraint in step 1.3 equivalently: constrain the distance between any cross-camera positive sample pair in the training set to be smaller than a threshold ξ, and the distance between any cross-camera negative sample pair in the training set to be larger than the threshold ξ, which gives the following loss functions:
    E_p(M) = Σ_{z_ij = 1} ℓ( d_M(x_i, y_j) − ξ )
    E_d(M) = Σ_{z_ij = −1} ℓ( ξ − d_M(x_i, y_j) )
where ℓ(·) is the logistic loss function; E_p(M) is the loss function of the cross-camera positive sample pairs, E_d(M) is the loss function of the cross-camera negative sample pairs, and ξ is set to the average distance over all cross-camera sample pairs (x_i, y_j) and same-camera sample pairs (y_j, y_k).
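A possible numerical reading of steps 1.3-1.4 is sketched below; it assumes the squared-Mahalanobis form of d_M given above and the common choice ℓ(x) = log(1 + e^x) for the logistic loss, which the text only identifies as a logistic regression function:

    import numpy as np

    def mahalanobis(X, Y, M):
        # Pairwise d_M(x_i, y_j) = (x_i - y_j)^T M (x_i - y_j) for all i, j.
        diff = X[:, None, :] - Y[None, :, :]             # shape (n, m, dim)
        return np.einsum('ijd,de,ije->ij', diff, M, diff)

    def logistic(x):
        # Assumed logistic loss l(x) = log(1 + exp(x)), computed stably.
        return np.logaddexp(0.0, x)

    def cross_camera_losses(X, Y, M, z, xi):
        # E_p: positive pairs (z == 1) should lie closer than xi;
        # E_d: negative pairs (z == -1) should lie farther than xi.
        D = mahalanobis(X, Y, M)
        E_p = logistic(D[z == 1] - xi).sum()
        E_d = logistic(xi - D[z == -1]).sum()
        return E_p, E_d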
Further: the establishment of the camera association constraint in the step 2 comprises the following steps:
step 2.1, define the sample pair (y_j, y_k) formed by pictures y_j and y_k of different pedestrians in the candidate set Y as a same-camera negative sample pair, and set the label z_jk = −1;
step 2.2, constrain the distance between any cross-camera positive sample pair (x_i, y_j) in the training set to be smaller than the distance between the same-camera negative sample pair (y_j, y_k):
    d_M(x_i, y_j) < d_M(y_j, y_k)
step 2.3, since step 1.4 already constrains the distance between all cross-camera positive sample pairs to be smaller than the threshold ξ, the constraint in step 2.2 is equivalently converted into constraining the distance between any same-camera negative sample pair (y_j, y_k) in the training set to be larger than ξ, which gives the following loss function:
    E_s(M) = Σ_{z_jk = −1} ℓ( ξ − d_M(y_j, y_k) )
where E_s(M) is the loss function of the same-camera negative sample pairs.
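Under the same assumptions, the same-camera term of step 2.3 can be sketched by reusing the hypothetical mahalanobis and logistic helpers from the previous sketch:

    import numpy as np

    def same_camera_loss(Y, cand_ids, M, xi):
        # E_s of step 2.3: same-camera pairs of different identities (z_jk = -1)
        # should lie farther apart than xi; each unordered pair is counted twice
        # here, which only rescales the term in this sketch.
        D = mahalanobis(Y, Y, M)
        ids = np.asarray(cand_ids)
        neg = ids[:, None] != ids[None, :]
        return logistic(xi - D[neg]).sum()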
Further: the solving of the measurement matrix in the step 3 specifically comprises the following steps:
step 3.1, jointly considering the loss functions of step 1.4 and step 2.3 gives the objective function of double-constraint distance metric learning:
    Φ(M) = E_p(M) + E_d(M) + E_s(M)
step 3.2, assign weights w_ij and w_jk to the sample pairs in the objective function and simplify the expression of step 3.1 to obtain:
    Φ(M) = Σ_{i,j} w_ij ℓ( z_ij ( d_M(x_i, y_j) − ξ ) ) + Σ_{j,k} w_jk ℓ( z_jk ( d_M(y_j, y_k) − ξ ) )
where ℓ(·) is the logistic loss function; when z_ij = 1, w_ij = 1/N_pos, where N_pos is the total number of cross-camera positive sample pairs in the training set; when z_ij = −1, w_ij is set to 1/N_neg, where N_neg is the total number of all cross-camera and same-camera negative sample pairs in the training set; at the same time, since there are no same-camera positive sample pairs, w_jk is uniformly set to 1/N_neg;
step 3.3, define double-constraint metric learning as the following optimization problem:
    min_M Φ(M)   s.t.  M ⪰ 0
step 3.4, solve the optimization problem in step 3.3 to obtain the positive semi-definite metric matrix M.
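The weighting scheme of steps 3.2-3.3 could then be realized as in the following sketch (same assumptions and hypothetical helpers as above; the positive semi-definite constraint on M is only noted, not enforced, here):

    import numpy as np

    def objective(X, Y, M, z_cross, ids_Y, xi):
        # Phi(M) of step 3.2 with the 1/N_pos and 1/N_neg weighting
        # (ordered same-camera pairs are used for simplicity of the sketch).
        ids_Y = np.asarray(ids_Y)
        same_neg = ids_Y[:, None] != ids_Y[None, :]
        n_pos = int((z_cross == 1).sum())
        n_neg = int((z_cross == -1).sum() + same_neg.sum())
        w_cross = np.where(z_cross == 1, 1.0 / n_pos, 1.0 / n_neg)
        D_cross = mahalanobis(X, Y, M)
        D_same = mahalanobis(Y, Y, M)
        phi = (w_cross * logistic(z_cross * (D_cross - xi))).sum()
        phi += logistic(xi - D_same[same_neg]).sum() / n_neg
        return phi   # minimized subject to M being positive semi-definite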
Further: the constructing of the probability hypergraph by using the relevance of the previous K candidate pictures in the feature space in the step 8 specifically comprises the following steps:
step 8.1, first merge the query picture and the K candidate pictures to obtain the vertex set V of the probabilistic hypergraph;
step 8.2, take each vertex v_i in V as a central node and generate three hyperedges by connecting v_i with its 5, 15 and 25 nearest vertices in the projected feature space; the hyperedges are added to the hyperedge set ε of the probabilistic hypergraph, so that ε contains 3 × (K + 1) hyperedges in total;
step 8.3, assign a non-negative weight w_h(e_i) to each hyperedge e_i in the hyperedge set ε: when a hyperedge takes the query picture as its central node, it is assigned a larger weight value, and when a hyperedge takes a candidate picture as its central node, it is assigned a smaller weight value;
step 8.4, according to the membership relation between the vertices in V and the hyperedges in ε, construct the incidence matrix H of size |V| × |ε|, whose elements are defined as:
    H(v_i, e_j) = A(v_i, e_j)  if v_i ∈ e_j ,  and  H(v_i, e_j) = 0  otherwise
where A(v_i, e_j) denotes the probability that vertex v_i belongs to hyperedge e_j, computed by:
    A(v_i, e_j) = exp( − ||v_i − v_j||^2 / σ^2 )
where v_j is the central node of hyperedge e_j and σ is the average distance between all vertices in the projected feature space; this completes the construction of the probabilistic hypergraph and yields the incidence matrix H.
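One possible construction of the probabilistic hypergraph of steps 8.1-8.4 is sketched below; w_query and w_cand are left as parameters because the specific weight constants referred to in the text are not reproduced here, and row 0 of the vertex array is assumed to be the query picture:

    import numpy as np

    def build_probabilistic_hypergraph(vertices, w_query, w_cand, ks=(5, 15, 25)):
        # vertices: projected features, one per row; row 0 is the query picture.
        V = np.asarray(vertices)
        dist = np.linalg.norm(V[:, None] - V[None, :], axis=2)
        sigma = dist.mean()                      # average inter-vertex distance
        edges, weights = [], []
        for center in range(len(V)):
            order = np.argsort(dist[center])     # the centre itself comes first
            for k in ks:                         # hyperedges over 5/15/25 nearest vertices
                edges.append((center, order[:k + 1]))
                weights.append(w_query if center == 0 else w_cand)
        H = np.zeros((len(V), len(edges)))       # incidence matrix, |V| x |eps|
        for j, (center, members) in enumerate(edges):
            # A(v_i, e_j) = exp(-||v_i - v_j||^2 / sigma^2), v_j the centre node.
            H[members, j] = np.exp(-dist[members, center] ** 2 / sigma ** 2)
        return H, np.asarray(weights, dtype=float)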
Further: in step 9, a reordering result is calculated based on the probabilistic hypergraph, which specifically includes the following substeps:
step 9.1, based on the incidence matrix H, compute the degree d(v) of each vertex and the degree δ(e) of each hyperedge of the probabilistic hypergraph, where
    d(v) = Σ_{e∈ε} w_h(e) H(v, e)   and   δ(e) = Σ_{v∈V} H(v, e)
Define a diagonal matrix D_v whose diagonal elements are the degrees of the vertices of the probabilistic hypergraph; define a diagonal matrix D_e whose diagonal elements are the degrees of the hyperedges; and define a diagonal matrix W whose diagonal elements are the hyperedge weights w_h(e);
step 9.2, use the incidence matrix H, the vertex degree matrix D_v, the hyperedge degree matrix D_e and the hyperedge weight matrix W to compute the Laplacian matrix L of the probabilistic hypergraph:
    L = I − D_v^(−1/2) H W D_e^(−1) H^T D_v^(−1/2)
where I is the identity matrix of size |V| × |V|;
step 9.3, using a regularization framework that simultaneously considers the Laplacian constraint of the probabilistic hypergraph and the empirical loss on the initial labels, define the objective function of sample reordering as:
    f^T L f + μ || f − r ||^2
to be minimized over f, where f is the reordering score vector to be learned, r is the initial label vector in which the label of the query picture is set to 1 and the labels of all candidate pictures are set to 0, and μ > 0 is a regularization parameter that balances the importance of the first and second terms of the objective function; the first term constrains vertices that share more hyperedges in the probabilistic hypergraph to obtain similar reordering scores, and the second term constrains the reordering scores to stay close to the initial label information;
step 9.4, by setting the first derivative of the objective function in step 9.3 with respect to f to zero, the optimal solution of the reordering problem can be obtained quickly in closed form:
    f* = ( I + (1/μ) L )^(−1) r
step 9.5, reorder the K candidate pictures in descending order of their reordering scores in the vector f.
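The computations of steps 9.1-9.5 can be sketched as follows, using the Laplacian and closed-form solution reconstructed above; together with the previous sketch this would give one concrete realization of the rerank_fn placeholder used earlier (the small epsilons guarding the divisions are an implementation choice, not part of the patent):

    import numpy as np

    def rerank_scores(H, edge_weights, mu=0.01):
        # Steps 9.1-9.4: vertex/hyperedge degrees, hypergraph Laplacian and the
        # closed-form scores f* = (I + L/mu)^(-1) r; row 0 of H is the query.
        d_v = H @ edge_weights                   # d(v) = sum_e w_h(e) H(v, e)
        d_e = H.sum(axis=0)                      # delta(e) = sum_v H(v, e)
        Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v + 1e-12))
        De_inv = np.diag(1.0 / (d_e + 1e-12))
        W = np.diag(edge_weights)
        Theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
        L = np.eye(H.shape[0]) - Theta
        r = np.zeros(H.shape[0])
        r[0] = 1.0                               # initial labels: query 1, candidates 0
        f = np.linalg.solve(np.eye(H.shape[0]) + L / mu, r)
        return f[1:]                             # step 9.5 sorts these descending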
Compared with the prior art, the invention adopting the technical scheme has the following beneficial effects:
1) compared with existing metric-learning-based pedestrian re-identification methods, which consider only the cross-camera association constraints of the training samples, the method provided by the invention considers both the same-camera and the cross-camera association information among the training samples during metric learning, so that the learned metric matrix has stronger discriminability;
2) the method constructs a probabilistic hypergraph from the association information among different candidate pictures and reorders the similarity ranking obtained in the testing stage, which effectively alleviates the influence of overfitting in metric learning and yields a more stable and accurate candidate ranking;
3) only the K candidate pictures at the front of the initial ranking are considered during reordering; compared with reordering the whole candidate set, this reduces the computational complexity of constructing the probabilistic hypergraph while preserving ranking accuracy, and thus speeds up the reordering.
Drawings
FIG. 1 is a schematic overall flow chart of the present invention.
Detailed Description
The technical solution of the present invention will be further described in detail with reference to the following specific examples.
The following examples are carried out on the premise of the technical scheme of the invention, and detailed embodiments and specific operation processes are given, but the scope of the invention is not limited to the following examples.
Examples
In the embodiment of the invention, pedestrian pictures captured by different cameras are processed: a metric matrix is learned from a training set, and in the testing stage a query picture of a pedestrian target is used to find the correct matches of that target in candidate sets captured by other cameras. Referring to FIG. 1, the method in this embodiment comprises a training stage and a testing stage;
the training phase comprises the steps of:
step 1, establishing the cross-camera association constraint: pedestrian pictures from different cameras in the training set are combined into cross-camera sample pairs, and a constraint term is established so that the feature distance between cross-camera positive sample pairs is smaller than the feature distance between cross-camera negative sample pairs; this specifically comprises the following sub-steps:
step 1.1, define the training pictures from different cameras as the query set X = {x_i}, i = 1, …, n, and the candidate set Y = {y_j}, j = 1, …, m, where x_i and y_j are feature vectors of pedestrian pictures, l_i^x and l_j^y are the corresponding pedestrian identity labels, n is the total number of pictures in the query set, and m is the total number of pictures in the candidate set;
step 1.2, define a sample pair (x_i, y_j) composed of pedestrian pictures from different cameras as a cross-camera sample pair; when x_i and y_j belong to the same pedestrian, i.e. l_i^x = l_j^y, (x_i, y_j) is called a cross-camera positive sample pair and z_ij = 1 is defined; when l_i^x ≠ l_j^y, (x_i, y_j) is called a cross-camera negative sample pair and z_ij = −1 is set;
step 1.3, constrain the distance between any cross-camera positive sample pair (x_i, y_j) in the training set to be smaller than the distance between the cross-camera negative sample pair (x_i, y_k):
    d_M(x_i, y_j) < d_M(x_i, y_k)
where d_M(·,·) is the Mahalanobis distance metric function to be learned, expressed as:
    d_M(x_i, y_j) = (x_i − y_j)^T M (x_i − y_j)
in which M is a positive semi-definite metric matrix, i.e. the target of metric learning;
step 1.4, transform the constraint in step 1.3 equivalently: constrain the distance between any cross-camera positive sample pair in the training set to be smaller than a threshold ξ, and the distance between any cross-camera negative sample pair in the training set to be larger than the threshold ξ, which gives the following loss functions:
    E_p(M) = Σ_{z_ij = 1} ℓ( d_M(x_i, y_j) − ξ )
    E_d(M) = Σ_{z_ij = −1} ℓ( ξ − d_M(x_i, y_j) )
where ℓ(·) is the logistic loss function; E_p(M) is the loss function of the cross-camera positive sample pairs, E_d(M) is the loss function of the cross-camera negative sample pairs, and ξ is set to the average distance over all cross-camera sample pairs (x_i, y_j) and same-camera sample pairs (y_j, y_k).
step 2, establishing the same-camera association constraint: pedestrian pictures from the same camera in the training set are combined into same-camera sample pairs, and a constraint term is established so that the feature distance between same-camera negative sample pairs is larger than the feature distance between cross-camera positive sample pairs; this specifically comprises the following sub-steps:
step 2.1, define the sample pair (y_j, y_k) formed by pictures y_j and y_k of different pedestrians in the candidate set Y as a same-camera negative sample pair, and set the label z_jk = −1;
step 2.2, constrain the distance between any cross-camera positive sample pair (x_i, y_j) in the training set to be smaller than the distance between the same-camera negative sample pair (y_j, y_k):
    d_M(x_i, y_j) < d_M(y_j, y_k)
step 2.3, since step 1.4 already constrains the distance between all cross-camera positive sample pairs to be smaller than the threshold ξ, the constraint in step 2.2 is equivalently converted into constraining the distance between any same-camera negative sample pair (y_j, y_k) in the training set to be larger than ξ, which gives the following loss function:
    E_s(M) = Σ_{z_jk = −1} ℓ( ξ − d_M(y_j, y_k) )
step 3, solving the metric matrix: the objective function of double-constraint metric learning is obtained by combining the two constraint terms of step 1 and step 2, and the positive semi-definite metric matrix M that minimizes the objective function is solved for to obtain the training result of metric learning; this specifically comprises the following sub-steps:
step 3.1, jointly considering the loss functions of step 1.4 and step 2.3 gives the objective function of double-constraint distance metric learning:
    Φ(M) = E_p(M) + E_d(M) + E_s(M)
step 3.2, assign weights w_ij and w_jk to the sample pairs in the objective function and simplify the expression of step 3.1 to obtain:
    Φ(M) = Σ_{i,j} w_ij ℓ( z_ij ( d_M(x_i, y_j) − ξ ) ) + Σ_{j,k} w_jk ℓ( z_jk ( d_M(y_j, y_k) − ξ ) )
where ℓ(·) is the logistic loss function; when z_ij = 1, w_ij = 1/N_pos, where N_pos is the total number of cross-camera positive sample pairs in the training set; when z_ij = −1, w_ij is set to 1/N_neg, where N_neg is the total number of all cross-camera and same-camera negative sample pairs in the training set; at the same time, since there are no same-camera positive sample pairs, w_jk is uniformly set to 1/N_neg;
step 3.3, define double-constraint metric learning as the following optimization problem:
    min_M Φ(M)   s.t.  M ⪰ 0
step 3.4, solve the optimization problem in step 3.3 to obtain the positive semi-definite metric matrix M; in this embodiment, matrices X and Y are first defined to store the feature vectors of the n pictures of the query set and the m pictures of the candidate set, respectively; then X and Y are combined into a matrix C = [X, Y], and c_i denotes the i-th column of C; by setting z_jk = 0 and w_jk = 0 when y_j and y_k are the same candidate picture, the objective function in step 3.2 can be written over all sample pairs indexed in C as:
    Φ(M) = Σ_{i,j} w_ij ℓ( z_ij ( (c_i − c_j)^T M (c_i − c_j) − ξ ) )
and the gradient of the objective function with respect to the matrix M is:
    ∂Φ/∂M = Σ_{i,j} w_ij z_ij ℓ'( z_ij ( (c_i − c_j)^T M (c_i − c_j) − ξ ) ) (c_i − c_j)(c_i − c_j)^T
finally, the metric matrix M that minimizes the objective function is solved for iteratively with a gradient descent method;
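A gradient-descent sketch matching the reconstructed gradient above is given below; the logistic derivative σ(x) and the eigenvalue clipping used to keep M positive semi-definite are assumptions about details the text does not spell out:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))          # derivative of log(1 + exp(x))

    def solve_metric_matrix(pairs, dim, xi, lr=0.01, iters=200):
        # pairs: list of (c_i, c_j, z_ij, w_ij) tuples built from the columns of C = [X, Y].
        M = np.eye(dim)
        for _ in range(iters):
            grad = np.zeros((dim, dim))
            for ci, cj, z, w in pairs:
                diff = ci - cj
                d = diff @ M @ diff              # d_M(c_i, c_j)
                grad += w * z * sigmoid(z * (d - xi)) * np.outer(diff, diff)
            M -= lr * grad
            vals, vecs = np.linalg.eigh(M)       # clip eigenvalues to keep M PSD
            M = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
        return M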
the testing phase comprises the following steps:
step 4, feature space projection using the metric matrix: according to the positive semi-definiteness of the metric matrix M, it is decomposed as M = P^T P, and the matrix P is used to project the feature vector x_p of the query picture in the testing stage and the feature vectors {y_i}, i = 1, …, N, of the candidate set uniformly into a new feature space, where N is the total number of pictures in the candidate set in the testing stage;
step 5, computing the Euclidean distance between the query picture and each candidate picture in the new feature space:
    d_i = || P x_p − P y_i ||_2 ,  i = 1, …, N
step 6, computing the initial ranking of the candidate pictures: the candidate pictures are sorted according to the Euclidean distances computed in step 5, and candidate pictures with a smaller Euclidean distance to the query picture are ranked closer to the front;
step 7, selecting the first K candidate pictures in the sorting queue: selecting K candidate pictures with the top ranking from the candidate picture ranking queue obtained in the step 6, wherein K is 100 in the embodiment;
step 8, constructing a probabilistic hypergraph from the correlations of the top K candidate pictures in the feature space: the query picture and the K candidate pictures are taken as the vertices of the probabilistic hypergraph, the hyperedges of the probabilistic hypergraph are generated from the correlations between the vertices, and finally each hyperedge is assigned a corresponding weight; this specifically comprises the following sub-steps:
step 8.1, first merge the query picture and the K candidate pictures to obtain the vertex set V of the probabilistic hypergraph;
step 8.2, take each vertex v_i in V as a central node and generate three hyperedges by connecting v_i with its 5, 15 and 25 nearest vertices in the projected feature space; the hyperedges are added to the hyperedge set ε of the probabilistic hypergraph, so that ε contains 3 × (K + 1) hyperedges in total;
step 8.3, assign a non-negative weight w_h(e_i) to each hyperedge e_i in the hyperedge set ε: when a hyperedge takes the query picture as its central node, it is assigned a larger weight value, emphasizing the role of the query picture in reordering; when a hyperedge takes a candidate picture as its central node, it is assigned a smaller weight value; in this example the two weights are set to fixed constant values;
step 8.4, according to the membership relation between the vertices in V and the hyperedges in ε, construct the incidence matrix H of size |V| × |ε|, whose elements are defined as:
    H(v_i, e_j) = A(v_i, e_j)  if v_i ∈ e_j ,  and  H(v_i, e_j) = 0  otherwise
where A(v_i, e_j) denotes the probability that vertex v_i belongs to hyperedge e_j, computed by:
    A(v_i, e_j) = exp( − ||v_i − v_j||^2 / σ^2 )
where v_j is the central node of hyperedge e_j and σ is the average distance between all vertices in the projected feature space; this completes the construction of the probabilistic hypergraph and yields the incidence matrix H;
step 9, computing the reordering result based on the probabilistic hypergraph: the Laplacian matrix of the probabilistic hypergraph is computed and combined with the empirical loss of the initial labels to build an objective function, the ranking scores of the candidate pictures are computed from that objective function, and the K candidate pictures are reordered in descending order of ranking score; this specifically comprises the following sub-steps:
step 9.1, based on the incidence matrix H, compute the degree d(v) of each vertex and the degree δ(e) of each hyperedge of the probabilistic hypergraph, where
    d(v) = Σ_{e∈ε} w_h(e) H(v, e)   and   δ(e) = Σ_{v∈V} H(v, e)
Define a diagonal matrix D_v whose diagonal elements are the degrees of the vertices of the probabilistic hypergraph; define a diagonal matrix D_e whose diagonal elements are the degrees of the hyperedges; and define a diagonal matrix W whose diagonal elements are the hyperedge weights w_h(e);
step 9.2, use the incidence matrix H, the vertex degree matrix D_v, the hyperedge degree matrix D_e and the hyperedge weight matrix W to compute the Laplacian matrix L of the probabilistic hypergraph:
    L = I − D_v^(−1/2) H W D_e^(−1) H^T D_v^(−1/2)
where I is the identity matrix of size |V| × |V|;
step 9.3, using a regularization framework that simultaneously considers the Laplacian constraint of the probabilistic hypergraph and the empirical loss on the initial labels, define the objective function of sample reordering as:
    f^T L f + μ || f − r ||^2
to be minimized over f, where f is the reordering score vector to be learned, r is the initial label vector in which the label of the query picture is set to 1 and the labels of all candidate pictures are set to 0, and μ > 0 is a regularization parameter that balances the importance of the first and second terms of the objective function; the first term constrains vertices that share more hyperedges in the hypergraph to obtain similar reordering scores, and the second term constrains the reordering scores to stay close to the initial label information; in this example μ is set to 0.01;
step 9.4, by setting the first derivative of the objective function in step 9.3 with respect to f to zero, the optimal solution of the reordering problem can be obtained quickly in closed form:
    f* = ( I + (1/μ) L )^(−1) r
step 9.5, reorder the K candidate pictures in descending order of their reordering scores in the vector f;
step 10, returning the final ranking of the candidate pictures: the ranking positions of the top K pictures in the ranking queue of step 6 are replaced with the reordering result of the K candidate pictures from step 9, and the whole candidate-set ranking queue is returned as the final result of pedestrian re-identification.
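Putting this embodiment's parameter choices (K = 100, μ = 0.01) together with the helpers sketched earlier, the testing stage could be exercised as follows; the random features and the unit hyperedge weights are placeholders, since the patent's specific weight constants are not reproduced in this text:

    import numpy as np

    rng = np.random.default_rng(0)
    query_proj = rng.normal(size=64)             # stand-in projected query feature
    gallery_proj = rng.normal(size=(500, 64))    # stand-in projected candidate features
    w_query, w_cand = 1.0, 1.0                   # placeholder hyperedge weights
    K, mu = 100, 0.01                            # parameter choices of this embodiment
    d = np.linalg.norm(gallery_proj - query_proj, axis=1)              # step 5
    order = np.argsort(d)                                              # step 6
    topk = order[:K]                                                   # step 7
    vertices = np.vstack([query_proj, gallery_proj[topk]])             # step 8.1
    H, w = build_probabilistic_hypergraph(vertices, w_query, w_cand)   # steps 8.2-8.4
    scores = rerank_scores(H, w, mu=mu)                                # steps 9.1-9.5
    order[:K] = topk[np.argsort(-scores)]                              # step 10: final ranking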
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (6)

1. A pedestrian re-identification method based on double-constraint metric learning and sample reordering, characterized in that it comprises a training stage and a testing stage;
the training stage comprises the following steps:
Step 1, establishing cross-camera association constraints: pedestrian pictures from different cameras in the training set are combined into cross-camera sample pairs, and a constraint term is established so that the feature distance between cross-camera positive sample pairs is smaller than the feature distance between cross-camera negative sample pairs;
Step 2, establishing same-camera association constraints: pedestrian pictures from the same camera in the training set are combined into same-camera sample pairs, and a constraint term is established so that the feature distance between same-camera negative sample pairs is larger than the feature distance between cross-camera positive sample pairs;
Step 3, solving the metric matrix: the objective function of double-constraint metric learning is obtained by combining the two constraint terms of Step 1 and Step 2, and the positive semi-definite metric matrix M that minimizes the objective function is solved for, giving the training result of metric learning and ending the training stage;
the testing stage comprises the following steps:
Step 4, feature space projection using the metric matrix: according to the positive semi-definiteness of the metric matrix M, decompose it as M = P^T P, and use the matrix P to project the feature vector x_p of the query picture in the testing stage and the feature vectors {y_i}, i = 1, …, N, of the candidate set uniformly into a new feature space, where N is the total number of pictures in the candidate set in the testing stage;
Step 5, computing the Euclidean distance between the query picture and each candidate picture in the new feature space:
    d_i = || P x_p − P y_i ||_2 ,  i = 1, …, N
Step 6, computing the initial ranking of the candidate pictures: the candidate pictures are sorted according to the Euclidean distances computed in Step 5, and candidate pictures with a smaller Euclidean distance to the query picture are ranked closer to the front;
Step 7, selecting the top K candidate pictures in the ranking queue: the K highest-ranked candidate pictures are selected from the candidate ranking queue obtained in Step 6;
Step 8, constructing a probabilistic hypergraph from the correlations of the top K candidate pictures in the feature space: the query picture and the K candidate pictures are taken as the vertices of the probabilistic hypergraph, the hyperedges of the probabilistic hypergraph are generated from the correlations between the vertices, and finally each hyperedge is assigned a corresponding weight;
Step 9, computing the reordering result based on the probabilistic hypergraph: the Laplacian matrix of the probabilistic hypergraph is computed and combined with the empirical loss of the initial labels to build an objective function, the ranking scores of the candidate pictures are computed from that objective function, and the K candidate pictures are reordered in descending order of ranking score;
Step 10, returning the final ranking of the candidate pictures: the ranking positions of the top K pictures in the ranking queue of Step 6 are replaced with the reordering result of the K candidate pictures from Step 9, and the whole candidate-set ranking queue is returned as the final result of pedestrian re-identification.
2. The pedestrian re-identification method based on double-constraint metric learning and sample reordering according to claim 1, characterized in that establishing the cross-camera association constraints in Step 1 comprises the following steps:
Step 1.1, define the training pictures from different cameras as the query set X = {x_i}, i = 1, …, n, and the candidate set Y = {y_j}, j = 1, …, m, where x_i and y_j are the feature vectors of pedestrian pictures, l_i^x and l_j^y are the corresponding pedestrian identity labels, n is the total number of pictures in the query set in the training stage, and m is the total number of pictures in the candidate set in the training stage;
Step 1.2, define a sample pair (x_i, y_j) composed of pedestrian pictures from different cameras as a cross-camera sample pair; when x_i and y_j belong to the same pedestrian, i.e. l_i^x = l_j^y, (x_i, y_j) is called a cross-camera positive sample pair and z_ij = 1 is defined; when l_i^x ≠ l_j^y, (x_i, y_j) is called a cross-camera negative sample pair and z_ij = −1 is set;
Step 1.3, constrain the distance between any cross-camera positive sample pair (x_i, y_j) in the training set to be smaller than the distance between the cross-camera negative sample pair (x_i, y_k):
    d_M(x_i, y_j) < d_M(x_i, y_k)
where d_M(·,·) is the Mahalanobis distance metric function to be learned, expressed as:
    d_M(x_i, y_j) = (x_i − y_j)^T M (x_i − y_j)
in which M is a positive semi-definite metric matrix, i.e. the target of metric learning;
Step 1.4, transform the constraint in Step 1.3 equivalently: constrain the distance between any cross-camera positive sample pair in the training set to be smaller than a threshold ξ, and the distance between any cross-camera negative sample pair in the training set to be larger than the threshold ξ, which gives the following loss functions:
    E_p(M) = Σ_{z_ij = 1} ℓ( d_M(x_i, y_j) − ξ )
    E_d(M) = Σ_{z_ij = −1} ℓ( ξ − d_M(x_i, y_j) )
where ℓ(·) is the logistic loss function; E_p(M) is the loss function of the cross-camera positive sample pairs, E_d(M) is the loss function of the cross-camera negative sample pairs, and ξ is set to the average distance over all cross-camera sample pairs (x_i, y_j) and same-camera sample pairs (y_j, y_k).
3. The pedestrian re-identification method based on double-constraint metric learning and sample reordering according to claim 2, characterized in that establishing the same-camera association constraints in Step 2 comprises the following steps:
Step 2.1, define the sample pair (y_j, y_k) composed of pictures y_j and y_k of different pedestrians in the candidate set Y as a same-camera negative sample pair, and set the label z_jk = −1;
Step 2.2, constrain the distance between any cross-camera positive sample pair (x_i, y_j) in the training set to be smaller than the distance between the same-camera negative sample pair (y_j, y_k):
    d_M(x_i, y_j) < d_M(y_j, y_k)
Step 2.3, since Step 1.4 already constrains the distance between all cross-camera positive sample pairs to be smaller than the threshold ξ, the constraint in Step 2.2 is equivalently converted into constraining the distance between any same-camera negative sample pair (y_j, y_k) in the training set to be larger than ξ, which gives the following loss function:
    E_s(M) = Σ_{z_jk = −1} ℓ( ξ − d_M(y_j, y_k) )
where E_s(M) is the loss function of the same-camera negative sample pairs.
4. The pedestrian re-identification method based on double-constraint metric learning and sample reordering according to claim 3, characterized in that solving the metric matrix in Step 3 specifically comprises the following steps:
Step 3.1, jointly considering the loss functions of Step 1.4 and Step 2.3 gives the objective function of double-constraint distance metric learning:
    Φ(M) = E_p(M) + E_d(M) + E_s(M)
Step 3.2, assign weights w_ij and w_jk to the sample pairs in the objective function and simplify the expression of Step 3.1 to obtain:
    Φ(M) = Σ_{i,j} w_ij ℓ( z_ij ( d_M(x_i, y_j) − ξ ) ) + Σ_{j,k} w_jk ℓ( z_jk ( d_M(y_j, y_k) − ξ ) )
where ℓ(·) is the logistic loss function; when z_ij = 1, w_ij = 1/N_pos, where N_pos is the total number of cross-camera positive sample pairs in the training set; when z_ij = −1, w_ij is set to 1/N_neg, where N_neg is the total number of all cross-camera and same-camera negative sample pairs in the training set; at the same time, since there are no same-camera positive sample pairs, w_jk is uniformly set to 1/N_neg;
Step 3.3, define double-constraint metric learning as the following optimization problem:
    min_M Φ(M)   s.t.  M ⪰ 0
Step 3.4, solve the optimization problem in Step 3.3 to obtain the positive semi-definite metric matrix M.
5. The pedestrian re-identification method based on double-constraint metric learning and sample reordering according to claim 1, characterized in that constructing the probabilistic hypergraph from the correlations of the top K candidate pictures in the feature space in Step 8 specifically comprises the following steps:
Step 8.1, first merge the query picture and the K candidate pictures to obtain the vertex set V of the probabilistic hypergraph;
Step 8.2, take each vertex v_i in V as a central node, generate three hyperedges by connecting v_i with its 5, 15 and 25 nearest vertices in the projected feature space, and add them to the hyperedge set ε of the probabilistic hypergraph, so that ε contains 3 × (K + 1) hyperedges in total;
Step 8.3, assign a non-negative weight w_h(e_i) to each hyperedge e_i in the hyperedge set ε: when a hyperedge takes the query picture as its central node, it is assigned one weight value, and when a hyperedge takes a candidate picture as its central node, it is assigned another weight value;
Step 8.4, according to the membership relation between the vertices in V and the hyperedges in ε, construct the incidence matrix H of size |V| × |ε|, whose elements are defined as:
    H(v_i, e_j) = A(v_i, e_j)  if v_i ∈ e_j ,  and  H(v_i, e_j) = 0  otherwise
where A(v_i, e_j) denotes the probability that vertex v_i belongs to hyperedge e_j, computed by:
    A(v_i, e_j) = exp( − ||v_i − v_j||^2 / σ^2 )
where v_j is the central node of hyperedge e_j and σ is the average distance between all vertices in the projected feature space; this completes the construction of the probabilistic hypergraph and yields the incidence matrix H.
6. The pedestrian re-identification method based on double-constraint metric learning and sample reordering according to claim 5, characterized in that computing the reordering result based on the probabilistic hypergraph in Step 9 specifically comprises the following sub-steps:
Step 9.1, based on the incidence matrix H, compute the degree d(v) of each vertex and the degree δ(e) of each hyperedge of the probabilistic hypergraph, where
    d(v) = Σ_{e∈ε} w_h(e) H(v, e)   and   δ(e) = Σ_{v∈V} H(v, e)
Define a diagonal matrix D_v whose diagonal elements are the degrees of the vertices of the probabilistic hypergraph; define a diagonal matrix D_e whose diagonal elements are the degrees of the hyperedges; and define a diagonal matrix W whose diagonal elements are the hyperedge weights w_h(e);
Step 9.2, use the incidence matrix H, the vertex degree matrix D_v, the hyperedge degree matrix D_e and the hyperedge weight matrix W to compute the Laplacian matrix L of the probabilistic hypergraph:
    L = I − D_v^(−1/2) H W D_e^(−1) H^T D_v^(−1/2)
where I is the identity matrix of size |V| × |V|;
Step 9.3, using a regularization framework that simultaneously considers the Laplacian constraint of the probabilistic hypergraph and the empirical loss on the initial labels, define the objective function of sample reordering as:
    f^T L f + μ || f − r ||^2
to be minimized over f, where f is the reordering score vector to be learned, r is the initial label vector in which the label of the query picture is set to 1 and the labels of all candidate pictures are set to 0, and μ > 0 is a regularization parameter that balances the importance of the first and second terms of the objective function; the first term constrains vertices that share more hyperedges in the probabilistic hypergraph to obtain similar reordering scores, and the second term constrains the reordering scores to stay close to the initial label information;
Step 9.4, by setting the first derivative of the objective function in Step 9.3 with respect to f to zero, the optimal solution of the reordering problem can be obtained quickly:
    f* = ( I + (1/μ) L )^(−1) r
Step 9.5, reorder the K candidate pictures in descending order of their reordering scores in the vector f.
CN201710213894.6A 2017-04-01 2017-04-01 Person Re-identification Method Based on Double Constraint Metric Learning and Sample Reordering Expired - Fee Related CN107145826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710213894.6A CN107145826B (en) 2017-04-01 2017-04-01 Person Re-identification Method Based on Double Constraint Metric Learning and Sample Reordering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710213894.6A CN107145826B (en) 2017-04-01 2017-04-01 Person Re-identification Method Based on Double Constraint Metric Learning and Sample Reordering

Publications (2)

Publication Number Publication Date
CN107145826A CN107145826A (en) 2017-09-08
CN107145826B true CN107145826B (en) 2020-05-08

Family

ID=59773502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710213894.6A Expired - Fee Related CN107145826B (en) 2017-04-01 2017-04-01 Person Re-identification Method Based on Double Constraint Metric Learning and Sample Reordering

Country Status (1)

Country Link
CN (1) CN107145826B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729818B (en) * 2017-09-21 2020-09-22 北京航空航天大学 Multi-feature fusion vehicle re-identification method based on deep learning
CN107704824B (en) * 2017-09-30 2020-05-29 北京正安维视科技股份有限公司 Pedestrian re-identification method and equipment based on space constraint
CN108133192A (en) * 2017-12-26 2018-06-08 武汉大学 A kind of pedestrian based on Gauss-Laplace distribution statistics identifies again
CN109002792B (en) * 2018-07-12 2021-07-20 西安电子科技大学 SAR image change detection method based on hierarchical multi-model metric learning
CN109635686B (en) 2018-11-29 2021-04-23 上海交通大学 A Two-Stage Pedestrian Search Method Combining Face and Appearance
CN109711366B (en) * 2018-12-29 2021-04-23 浙江大学 A Pedestrian Re-identification Method Based on Group Information Loss Function
CN109784266B (en) * 2019-01-09 2021-12-03 江西理工大学应用科学学院 Handwritten Chinese character recognition algorithm of multi-model hypergraph
CN111291611A (en) * 2019-12-20 2020-06-16 长沙千视通智能科技有限公司 Pedestrian re-identification method and device based on Bayesian query expansion
CN111259786B (en) * 2020-01-14 2022-05-03 浙江大学 Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
CN111476168B (en) * 2020-04-08 2022-06-21 山东师范大学 Cross-domain pedestrian re-identification method and system based on three stages
CN112651335B (en) * 2020-12-25 2024-05-07 深圳集智数字科技有限公司 Method, system, equipment and storage medium for identifying fellow persons

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500345A (en) * 2013-09-29 2014-01-08 华南理工大学 Method for learning person re-identification based on distance measure
CN104268140A (en) * 2014-07-31 2015-01-07 浙江大学 Image retrieval method based on weight learning hypergraphs and multivariate information combination
US9141852B1 (en) * 2013-03-14 2015-09-22 Toyota Jidosha Kabushiki Kaisha Person detection and pose estimation system
US9436895B1 (en) * 2015-04-03 2016-09-06 Mitsubishi Electric Research Laboratories, Inc. Method for determining similarity of objects represented in images
CN105989369A (en) * 2015-02-15 2016-10-05 中国科学院西安光学精密机械研究所 Pedestrian Re-Identification Method Based on Metric Learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9396412B2 (en) * 2012-06-21 2016-07-19 Siemens Aktiengesellschaft Machine-learnt person re-identification
US20150206069A1 (en) * 2014-01-17 2015-07-23 Matthew BEERS Machine learning-based patent quality metric

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9141852B1 (en) * 2013-03-14 2015-09-22 Toyota Jidosha Kabushiki Kaisha Person detection and pose estimation system
CN103500345A (en) * 2013-09-29 2014-01-08 华南理工大学 Method for learning person re-identification based on distance measure
CN104268140A (en) * 2014-07-31 2015-01-07 浙江大学 Image retrieval method based on weight learning hypergraphs and multivariate information combination
CN105989369A (en) * 2015-02-15 2016-10-05 中国科学院西安光学精密机械研究所 Pedestrian Re-Identification Method Based on Metric Learning
US9436895B1 (en) * 2015-04-03 2016-09-06 Mitsubishi Electric Research Laboratories, Inc. Method for determining similarity of objects represented in images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Learning Visual-Spatial Saliency for Multiple-Shot Person Re-identification; Yi Xie et al.; IEEE Signal Processing Letters; Nov. 2015; pp. 1854-1857 *
Person Re-identification by Graph-based; Yi Xie et al.; Electronics Letters; Aug. 18, 2016; pp. 1447-1449 *
Person re-identification in camera networks based on distance metric learning; Zhang Dongping et al.; Journal of China University of Metrology (中国计量大学学报); Dec. 31, 2016; pp. 424-428 *

Also Published As

Publication number Publication date
CN107145826A (en) 2017-09-08

Similar Documents

Publication Publication Date Title
CN107145826B (en) Person Re-identification Method Based on Double Constraint Metric Learning and Sample Reordering
CN109948425B (en) A pedestrian search method and device based on structure-aware self-attention and online instance aggregation and matching
CN111160297B (en) Pedestrian Re-identification Method and Device Based on Residual Attention Mechanism Spatio-temporal Joint Model
CN109961051B (en) A Pedestrian Re-Identification Method Based on Clustering and Block Feature Extraction
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN106778604B (en) Pedestrian re-identification method based on matching convolutional neural network
CN109711366B (en) A Pedestrian Re-identification Method Based on Group Information Loss Function
Liu et al. Bayesian model adaptation for crowd counts
CN108764308A (en) Pedestrian re-identification method based on convolution cycle network
CN110263697A (en) Pedestrian based on unsupervised learning recognition methods, device and medium again
CN110414368A (en) An unsupervised person re-identification method based on knowledge distillation
CN111325115A (en) Countermeasures cross-modal pedestrian re-identification method and system with triple constraint loss
CN105138998B (en) Pedestrian based on the adaptive sub-space learning algorithm in visual angle recognition methods and system again
WO2021218671A1 (en) Target tracking method and device, and storage medium and computer program
CN110516707B (en) An image tagging method, its device, and storage medium
CN107767416B (en) Method for identifying pedestrian orientation in low-resolution image
CN114359132B (en) Method for pedestrian search using text description to generate images
CN110321801A (en) A kind of change one's clothes pedestrian recognition methods and system again based on autoencoder network
CN109447123A (en) A kind of pedestrian's recognition methods again constrained based on tag compliance with stretching regularization dictionary learning
Nguyen et al. Combined YOLOv5 and HRNet for high accuracy 2D keypoint and human pose estimation
CN111680705A (en) MB-SSD Method and MB-SSD Feature Extraction Network for Object Detection
CN117197838A (en) Unsupervised cross-mode pedestrian re-identification method based on cluster optimization
CN118015662A (en) Transformer multi-head self-attention mechanism-based pedestrian re-recognition method crossing cameras
CN107563327B (en) A pedestrian re-identification method and system based on self-paced feedback
CN115050101A (en) Gait recognition method based on skeleton and contour feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200508