
CN110232704A - Scale-adaptive anti-occlusion target tracking based on optimal features - Google Patents

Scale-adaptive anti-occlusion target tracking based on optimal features

Info

Publication number
CN110232704A
CN110232704A (application CN201910520221.4A)
Authority
CN
China
Prior art keywords
target
feature
tracking
features
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910520221.4A
Other languages
Chinese (zh)
Inventor
范剑英
姜瑞
谢寅凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN201910520221.4A priority Critical patent/CN110232704A/en
Publication of CN110232704A publication Critical patent/CN110232704A/en
Pending legal-status Critical Current


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/462 — Salient features, e.g. scale-invariant feature transforms (SIFT)
    • G06V 10/50 — Extraction of features by operations within image blocks or by using histograms, e.g. histogram of oriented gradients (HOG)
    • G06V 10/56 — Extraction of features relating to colour
    • G06V 10/513 — Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A scale-adaptive, occlusion-resistant target tracking method based on optimal features. Traditional compressive tracking fails to address optimal feature selection, scale adaptation, and occlusion detection. The proposed method introduces a genetic algorithm to build and rank an optimal feature pool from the compressed features; extracts SURF features to build a reference-point model and a target feature library; updates the window scale through an affine transformation; sets a warning window to detect occlusion and, when the target is occluded, infers its position from the reference-point model; and finally updates the model with the tracking result until tracking ends. The invention improves the accuracy of traditional compressive tracking and solves the scale-adaptation and occlusion problems.

Description

Scale-Adaptive Anti-Occlusion Target Tracking Method Based on Optimal Features

Technical field:

The invention belongs to the field of computer vision, and specifically relates to a moving-target tracking method.

Background:

Target tracking is one of the key technologies of intelligent video surveillance and has significant research value in both military defence and public safety. In military defence, video-based human-computer interaction and intelligent visual navigation have long been a focus of research. In public safety, surveillance systems in residential areas and other public places reliably protect people's lives and property, while intelligent transportation systems on the streets keep traffic flowing and travel safe, providing all-round protection.

Target tracking based on online learning trains and updates a classifier while tracking, so it adapts better to changes in the target; these clear advantages quickly made online learning a research focus in the tracking field. As interdisciplinary applications have become more common, the principle of compressed sensing from signal processing has in recent years been introduced into target tracking. Compressed sensing reduces the redundancy of extracted image features and compresses them, providing a basis for online learning algorithms to update the classifier. The compressive tracking algorithm proposed by Kaihua Zhang et al. classifies the target directly with features projected by a sparse matrix; its advantages are clear, but it also has shortcomings.

Summary of the invention:

The purpose of the present invention is to provide a scale-adaptive anti-occlusion target tracking method based on optimal features, solving three problems of existing compressive tracking methods: insufficient feature effectiveness, a fixed target-box size, and the inability to detect occlusion.

The purpose of the present invention is achieved through the following technical solution:

A scale-adaptive anti-occlusion target tracking method based on optimal features, comprising the following steps:

Step 1: Calibrate the target region in the initial frame, generate a series of positive and negative samples, compute their compressed features and train the probability distributions, extract SURF features, randomly select a fixed number of feature points as the target feature library, and construct a reference-point model;

Step 2: Input the t-th frame; search all candidate target boxes within a certain radius of the previous frame's result and extract their compressed features; construct and rank an optimal feature pool with a genetic algorithm;

Step 3: Feed the first n feature groups of the pool into a naive Bayes classifier to compute response values and take the maximum as the current frame's tracking result; match extracted SURF feature points against the target feature library; solve the target's affine-transformation parameters from the matched points and update the tracking window;

Step 4: Expand the target window outward by 8 pixels to form a warning window split into two symmetric halves; compute and normalize the colour histograms of the two halves to judge occlusion; if the occlusion condition is met, infer the target position from the SURF reference-point model; update with the final tracking result and repeat the above steps until the video ends.

Further, Step 1 proceeds as follows:

The target region is calibrated manually in the initial frame. The generated positive and negative samples are compressed as v = Px, where P is the sparse random projection matrix, x is the high-dimensional feature vector, and v is the low-dimensional compressed feature vector. P(y=1) and P(y=0) denote the prior probabilities of the target and background classes. The conditional distributions are assumed normal, p(v_i | y=1) ~ N(μ_i^1, σ_i^1) and p(v_i | y=0) ~ N(μ_i^0, σ_i^0), where μ_i^1 and σ_i^1 are the mean and standard deviation of feature v_i computed over the target samples, and μ_i^0 and σ_i^0 the mean and standard deviation computed over the background samples. SURF features are extracted; 4 feature points are randomly selected as the target feature library, and feature points at a certain distance from the target centre are randomly selected as the reference point set D_B and the voting set D_V.
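
The compressed-feature construction of Step 1 can be sketched as follows. This is a minimal NumPy illustration of the v = Px sparse random projection and the per-feature normal-distribution estimates, not the patent's implementation; the matrix density s = 3, the dimensions, and the toy samples are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(m, n, s=3):
    """Sparse random projection matrix P (n x m): each entry is +1 or -1
    with probability 1/(2s), else 0, scaled by sqrt(s)."""
    probs = rng.random((n, m))
    P = np.zeros((n, m))
    P[probs < 1 / (2 * s)] = 1.0
    P[probs > 1 - 1 / (2 * s)] = -1.0
    return P * np.sqrt(s)

def compress(P, X):
    """v = P x for each high-dimensional sample (columns of X)."""
    return P @ X

def fit_gaussians(V):
    """Per-feature mean and standard deviation of the compressed samples
    (rows of V are features, columns are samples)."""
    return V.mean(axis=1), V.std(axis=1) + 1e-8

# toy example: 100-dimensional samples compressed to 10 features
P = sparse_projection(m=100, n=10)
pos = rng.normal(1.0, 0.5, (100, 50))   # positive (target) samples
neg = rng.normal(0.0, 0.5, (100, 50))   # negative (background) samples
mu1, sig1 = fit_gaussians(compress(P, pos))
mu0, sig0 = fit_gaussians(compress(P, neg))
```

The four arrays (mu1, sig1) and (mu0, sig0) are the trained class-conditional normal parameters that the later classification and ranking steps consume.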

Further, Step 2 proceeds as follows:

Input the t-th frame and search for target candidate boxes with D_γ = {z : ||l(z) − l_{t−1}|| < γ}, where γ is the search radius and l_{t−1} is the previous frame's target position. The feature pool is constructed and ranked by a genetic algorithm in the following steps:

Step1: Randomly generate N groups of compressed features; n features are used in the classifier at a time.

Step2: Introduce a selection operator to rank feature effectiveness; keep the first n groups as the classifier and pass them directly to the next generation, clearing the remaining N − n groups.

Step3: Introduce a crossover operator that randomly selects 2s elite individuals and crosses them, producing 2s offspring that join the feature pool.

Step4: Introduce a mutation operator that selects m elite individuals and applies random mutations, adding the results to the feature pool.

Step5: Introduce a migration operator that randomly regenerates m_i new individuals into the feature pool.

The features are then ranked by the distance between their positive and negative class-conditional normal distributions; the larger the distance, the more effective the feature.
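
One generation of the feature-pool update (Step1 through Step5 above) can be sketched as selection, crossover, mutation, and migration over groups of compressed-feature indices. This is a hypothetical NumPy sketch: the fitness function shown (summed distance between each feature's class-conditional normal distributions) and all population sizes are assumptions, since the patent's exact formula is not reproduced in the source text.

```python
import numpy as np

rng = np.random.default_rng(1)

def separability(group, mu1, sig1, mu0, sig0):
    """Assumed fitness of a feature group: summed distance between the
    positive and negative class-conditional normal distributions."""
    g = np.asarray(group)
    return float(np.sum(np.abs(mu1[g] - mu0[g]) / (sig1[g] + sig0[g])))

def evolve(pool, fitness, n_keep=4, n_cross=2, n_mut=2, n_mig=2, n_feat=50):
    """One generation: keep the best n_keep groups, then add crossover,
    mutation, and migration offspring to the pool."""
    pool = sorted(pool, key=fitness, reverse=True)
    elite = pool[:n_keep]                      # selection: best groups survive
    children = []
    for _ in range(n_cross):                   # crossover: splice two elites
        a, b = rng.choice(n_keep, 2, replace=False)
        cut = len(elite[a]) // 2
        children.append(elite[a][:cut] + elite[b][cut:])
    for _ in range(n_mut):                     # mutation: perturb one index
        g = list(elite[rng.integers(n_keep)])
        g[rng.integers(len(g))] = int(rng.integers(n_feat))
        children.append(g)
    for _ in range(n_mig):                     # migration: fresh random groups
        children.append(list(rng.choice(n_feat, len(elite[0]), replace=False)))
    return elite + children

# toy fitness from random class statistics over 50 compressed features
mu1, mu0 = rng.normal(1, 1, 50), rng.normal(0, 1, 50)
sig1, sig0 = np.full(50, 0.5), np.full(50, 0.5)
fit = lambda g: separability(g, mu1, sig1, mu0, sig0)
pool = [list(rng.choice(50, 8, replace=False)) for _ in range(10)]
pool = evolve(pool, fit)
```

Running the loop once per frame keeps the pool sorted so the first n groups feed the classifier, matching the selection step's "keep the first n groups" rule.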

Further, Step 3 proceeds as follows:

The n selected feature groups compute the classifier response

H(v) = Σ_{i=1..n} log( p(v_i | y=1) / p(v_i | y=0) ),

where p(v_i | y=1) and p(v_i | y=0) are the positive- and negative-class conditional probabilities (equal priors are assumed). The extracted SURF feature points are matched against the target feature library to obtain 4 matched pairs, and the formula

I_t = H × I_{t−1}

gives the affine-transformation parameters between two adjacent frames, which update the tracking window; H is the affine transformation matrix, ρ is the scale factor, and θ is the rotation angle.
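
The window update I_t = H × I_{t−1} requires ρ, θ, and a translation estimated from the matched SURF points. A minimal least-squares sketch of such a similarity-transform fit, under the assumption that H has the form [[ρ·cosθ, −ρ·sinθ, tx], [ρ·sinθ, ρ·cosθ, ty]] (consistent with the scale factor and rotation angle named above), might look like this:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform mapping src points to dst:
    dst ~ rho * R(theta) @ src + t.  src, dst: (k, 2) matched coordinates."""
    # linear system in [a, b, tx, ty] with H = [[a, -b, tx], [b, a, ty]]
    k = src.shape[0]
    A = np.zeros((2 * k, 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1], src[:, 0], 1.0
    p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    a, b, tx, ty = p
    rho = float(np.hypot(a, b))          # scale factor
    theta = float(np.arctan2(b, a))      # rotation angle
    return rho, theta, (tx, ty)

# example: 4 matched points scaled by 1.2, rotated 10 deg, shifted (5, -3)
rng = np.random.default_rng(2)
src = rng.random((4, 2)) * 100
th = np.deg2rad(10)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = 1.2 * src @ R.T + np.array([5.0, -3.0])
rho, theta, t = similarity_transform(src, dst)
```

With the 4 matched pairs the method prescribes, the system is overdetermined (8 equations, 4 unknowns), so least squares also damps a single bad match; in practice a robust estimator such as OpenCV's estimateAffinePartial2D could serve the same role.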

Further, Step 4 proceeds as follows:

Expand the target box outward by 8 pixels to obtain a slightly larger warning window split into left and right halves. The tracking-window halves are denoted T-W(k), k = 1, 2, and the warning-window halves A-W(k), k = 1, 2, where 1 is the left half and 2 the right half. Compute and normalize the colour histograms of the two halves at time t, and measure their similarity with the Bhattacharyya coefficient

ρ(p, q) = Σ_u sqrt( p(u) q(u) )

to judge occlusion.

If ρ_2(k, t) > ρ_1(k, t) holds for three consecutive frames, the state is marked True (occluded); False means the target has left the occlusion. Feature points satisfying the reference-set criterion are added to the reference set, and those satisfying the voting criterion to the voting set, where d is the SURF descriptor of feature f. The feature points of the reference set vote according to P(X|I) ∝ Σ_{f∈F} P(X|f) P(f|I), and the maximum of the accumulated votes gives the inferred position. Positive samples are then searched within a certain radius of the tracking result and negative samples are extracted far from the target region, and the probability distributions of the target and background regions are updated with learning rate λ:

μ^1 ← λ μ^1 + (1 − λ) μ̃^1,  σ^1 ← sqrt( λ (σ^1)² + (1 − λ) (σ̃^1)² + λ (1 − λ) (μ^1 − μ̃^1)² ),

where μ̃^1 and σ̃^1 are estimated from the new samples, and likewise for the background parameters.
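
The warning-window occlusion test compares normalized colour histograms with the Bhattacharyya coefficient. A minimal sketch, assuming 1-D intensity histograms with 16 bins (the bin count and the 1-D histogram form are illustrative assumptions):

```python
import numpy as np

def normalized_hist(pixels, bins=16):
    """Normalized intensity histogram of a flat array of pixel values."""
    h, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalized histograms
    (1 for identical distributions, smaller for dissimilar ones)."""
    return float(np.sum(np.sqrt(p * q)))

# toy check: a patch compared with itself vs. with a different patch
rng = np.random.default_rng(3)
patch = rng.integers(0, 256, 1000)   # e.g. a tracking-window half
other = rng.integers(100, 256, 1000) # e.g. an occluder entering the warning window
same = bhattacharyya(normalized_hist(patch), normalized_hist(patch))
diff = bhattacharyya(normalized_hist(patch), normalized_hist(other))
```

In the method, the coefficients ρ_1 and ρ_2 computed this way for the tracking- and warning-window halves are compared over three consecutive frames to declare occlusion, which suppresses single-frame false alarms.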

The beneficial effects of the present invention are:

1. The optimal search strategy of the genetic algorithm is introduced: selection, crossover, mutation, and migration operators simulate biological evolution, a fitness function expresses each individual's adaptation to the current environment, and an optimal feature pool for the target is constructed and updated in real time. This guarantees feature effectiveness and solves the classifier instability caused by unstable feature selection.

2. SURF features of the target are extracted to build the target feature library, and the tracking-window size is updated by the affine transformation between adjacent frames, solving the incorrect sample extraction that a fixed tracking window causes when the target changes.

3. An occlusion judgment mechanism is added: a warning window detects when occlusion occurs and when the target leaves it, improving tracking effectiveness.

4. The extracted SURF features build a reference-point model with a fixed geometric relation to the target; when occlusion occurs, the reference points are used to infer the target position, improving robustness under occlusion.

Description of drawings:

Fig. 1 is a flowchart of the present invention;

Fig. 2 is a schematic diagram of the occlusion-judgment warning window.

Detailed description:

The present invention is further described below in conjunction with the accompanying drawings.

As shown in Figs. 1 and 2, the scale-adaptive anti-occlusion target tracking method based on optimal features is carried out in the specific embodiment through Steps 1 to 4 exactly as described above.

The specific embodiments of the present invention have been described in detail above in conjunction with the accompanying drawings, but the present invention is not limited to these embodiments; various changes can be made within the knowledge of those of ordinary skill in the art without departing from the gist of the present invention.

Claims (5)

1. A scale-adaptive anti-occlusion target tracking method based on optimal features, characterized by comprising the following steps:
Step 1: calibrate the target region in the initial frame, generate a series of positive and negative samples, compute compressed features and train their probability distributions, extract SURF features, randomly select a fixed number of feature points as the target feature library, and construct a reference-point model;
Step 2: input the t-th frame, search all candidate target boxes within a certain radius of the previous frame's result and extract their compressed features, and construct and rank an optimal feature pool with a genetic algorithm;
Step 3: feed the first n feature groups of the pool into a naive Bayes classifier to compute response values, take the maximum as the current frame's tracking result, match extracted SURF feature points against the target feature library, solve the target's affine-transformation parameters from the matched points, and update the tracking window;
Step 4: expand the target window outward by 8 pixels to form a warning window split into two symmetric halves, compute and normalize the colour histograms of the two halves to judge occlusion, infer the target position from the SURF reference-point model when the occlusion condition is met, update with the final tracking result, and repeat the above steps until the video ends.
2. The method according to claim 1, characterized in that: the target region in Step 1 is calibrated manually in the initial frame; the generated positive and negative samples are compressed as v = Px, where P is the sparse projection matrix, x the high-dimensional feature vector, and v the low-dimensional compressed feature vector; P(y=1) and P(y=0) denote the prior probabilities of target and background; the conditional distributions p(v_i | y=1) ~ N(μ_i^1, σ_i^1) and p(v_i | y=0) ~ N(μ_i^0, σ_i^0) are assumed normal, with means and standard deviations estimated on the target and background samples respectively; SURF features are extracted, 4 feature points are randomly selected as the target feature library, and feature points at a certain distance from the target centre are randomly selected as the reference point set D_B and the voting set D_V.
3. The method according to claim 1 or 2, characterized in that: the t-th frame is input and candidate boxes are searched as D_γ = {z : ||l(z) − l_{t−1}|| < γ}, where γ is the search radius and l_{t−1} the previous frame's target position; the feature pool is constructed and ranked by a genetic algorithm as follows:
Step1: randomly generate N groups of compressed features, with n features used in the classifier at a time;
Step2: introduce a selection operator to rank feature effectiveness, keep the first n groups as the classifier and pass them directly to the next generation, clearing the remaining N − n groups;
Step3: introduce a crossover operator that randomly selects 2s elite individuals for crossover, producing 2s offspring that join the feature pool;
Step4: introduce a mutation operator that selects m elite individuals and applies random mutations, adding the results to the feature pool;
Step5: introduce a migration operator that randomly regenerates m_i new individuals into the feature pool;
the features are ranked by the distance between their class-conditional normal distributions, the larger the distance the better the effectiveness.
4. The method according to claim 3, characterized in that: the n input feature groups compute the response H(v) = Σ_{i=1..n} log( p(v_i | y=1) / p(v_i | y=0) ), where p(v_i | y=1) and p(v_i | y=0) are the positive- and negative-class conditional probabilities; the extracted SURF points are matched against the target feature library to obtain 4 matched pairs, and the formula
I_t = H × I_{t−1}
gives the affine-transformation parameters between two adjacent frames used to update the tracking window, where H is the affine transformation matrix, ρ the scale factor, and θ the rotation angle.
5. The method according to claim 4, characterized in that: a slightly larger warning window, obtained by expanding the target box outward by 8 pixels, is split into left and right halves; the tracking-window halves are denoted T-W(k), k = 1, 2, and the warning-window halves A-W(k), k = 1, 2, with 1 the left half and 2 the right half; the colour histograms of the two halves at time t are computed and normalized, and their similarity is measured with the Bhattacharyya coefficient to judge occlusion; if ρ_2(k, t) > ρ_1(k, t) holds for three consecutive frames, the state is marked True (occluded), otherwise False (the target has left the occlusion); feature points satisfying the reference-set criterion are added to the reference set and those satisfying the voting criterion to the voting set, where d is the SURF descriptor of feature f; the feature points of the reference set vote according to P(X|I) ∝ Σ_{f∈F} P(X|f) P(f|I) and the results are accumulated, the maximum giving the inferred position; positive samples are searched within a certain radius of the tracking result and negative samples are extracted far from the target region, and the probability distributions of the target and background regions are updated accordingly.
CN201910520221.4A 2019-06-14 2019-06-14 Scale-adaptive anti-occlusion target tracking based on optimal features Pending CN110232704A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910520221.4A CN110232704A (en) 2019-06-14 2019-06-14 Dimension self-adaption anti-shelter target tracking based on optimal characteristics


Publications (1)

Publication Number Publication Date
CN110232704A true CN110232704A (en) 2019-09-13

Family

ID=67859914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910520221.4A Pending CN110232704A (en) 2019-06-14 2019-06-14 Dimension self-adaption anti-shelter target tracking based on optimal characteristics

Country Status (1)

Country Link
CN (1) CN110232704A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881022A (en) * 2012-07-20 2013-01-16 西安电子科技大学 Concealed-target tracking method based on on-line learning
US8996350B1 (en) * 2011-11-02 2015-03-31 Dub Software Group, Inc. System and method for automatic document management

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8996350B1 (en) * 2011-11-02 2015-03-31 Dub Software Group, Inc. System and method for automatic document management
CN102881022A (en) * 2012-07-20 2013-01-16 西安电子科技大学 Concealed-target tracking method based on on-line learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
吕韵秋 (Lü Yunqiu): "Real-time tracking method based on compressive tracking and a genetic algorithm", 《制导与引信》 (Guidance & Fuze) *
姜超 (Jiang Chao): "Research on real-time target tracking algorithms based on compressive sensing", China Masters' Theses Full-text Database *
李博江 (Li Bojiang): "Research on the application of the mean-shift algorithm to target tracking under scale and speed changes", China Masters' Theses Full-text Database *
李敏敏 (Li Minmin): "Target tracking method based on the TLD model", China Masters' Theses Full-text Database *
黄坤 (Huang Kun): "Research on target tracking technology based on online learning algorithms", China Masters' Theses Full-text Database *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275741A (en) * 2020-01-19 2020-06-12 北京迈格威科技有限公司 Target tracking method and device, computer equipment and storage medium
CN111275741B (en) * 2020-01-19 2023-09-08 北京迈格威科技有限公司 Target tracking method, device, computer equipment and storage medium
CN116596958A (en) * 2023-07-18 2023-08-15 四川迪晟新达类脑智能技术有限公司 Target tracking method and device based on online sample augmentation
CN116596958B (en) * 2023-07-18 2023-10-10 四川迪晟新达类脑智能技术有限公司 Target tracking method and device based on online sample augmentation

Similar Documents

Publication Publication Date Title
CN111611905B (en) Visible light and infrared fused target identification method
CN107705560B (en) Road congestion detection method integrating visual features and convolutional neural network
CN111783576B (en) Pedestrian re-identification method based on improved YOLOv3 network and feature fusion
CN107515895B (en) A visual target retrieval method and system based on target detection
CN108171184B (en) Method for re-identifying pedestrians based on Simese network
CN114023062B (en) Traffic flow information monitoring method based on deep learning and edge calculation
WO2019232894A1 (en) Complex scene-based human body key point detection system and method
Li et al. Traffic anomaly detection based on image descriptor in videos
CN110598535A (en) Face recognition analysis method used in monitoring video data
CN104239898A (en) Method for carrying out fast vehicle comparison and vehicle type recognition at tollgate
CN106934817B (en) Multi-attribute-based multi-target tracking method and device
TW202142431A (en) Driving assistance system based on deep learning and the method thereof
CN109325546B (en) Step-by-step footprint identification method combining features of step method
CN102169631A (en) Manifold-learning-based traffic jam event cooperative detecting method
CN107291936A (en) The hypergraph hashing image retrieval of a kind of view-based access control model feature and sign label realizes that Lung neoplasm sign knows method for distinguishing
CN111126303B (en) A Multi-Space Detection Method for Intelligent Parking
CN107659754A (en) Effective method for concentration of monitor video in the case of a kind of leaf disturbance
CN111242046B (en) Ground traffic sign recognition method based on image retrieval
CN106503748A (en) A kind of based on S SIFT features and the vehicle targets of SVM training aids
CN107315998A (en) Vehicle class division method and system based on lane line
CN110349179A (en) Visual tracking method and device outside a kind of visible red based on more adapters
CN113947570A (en) A crack identification method based on machine learning algorithm and computer vision
CN110232704A (en) Dimension self-adaption anti-shelter target tracking based on optimal characteristics
CN114170627A (en) Pedestrian detection method based on improved Faster RCNN
CN115690584A (en) SSD (solid State drive) -based improved power distribution room foreign matter detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190913
