CN110232704A - Dimension self-adaption anti-shelter target tracking based on optimal characteristics - Google Patents
- Publication number
- CN110232704A (application CN201910520221.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- feature
- tracking
- features
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
Scale-adaptive anti-occlusion target tracking method based on optimal features. Traditional compressive tracking methods fail to address optimal feature extraction, scale adaptation, and occlusion detection. The method of the present invention introduces a genetic algorithm to construct and rank an optimal feature pool from the compressed features, extracts SURF features to build a reference-point model and a target feature library, updates the window scale through an affine transformation, and sets a warning window to detect occlusion. When occlusion occurs, the reference-point model is used to infer the target position; finally, the model is updated according to the tracking result until tracking ends. The invention improves the accuracy of traditional compressive tracking and solves the scale-adaptation and occlusion problems.
Description
Technical field:
The invention belongs to the field of computer vision, and specifically relates to a moving target tracking method.
Background art:
Target tracking is one of the key technologies of intelligent video surveillance and has important research significance and value in both military defense and public safety. In military defense, video-based human-computer interaction and intelligent visual navigation have long been a focus of researchers. In public safety, surveillance systems in residential areas and other public places provide reliable protection for people's lives and property, while intelligent transportation systems on the streets keep traffic flowing smoothly and travel safe, offering all-round protection.
Target tracking based on online learning trains and updates a classifier while tracking, so it adapts better to changes in the target; these clear advantages have quickly made online learning a research focus in the tracking field. With the progress of science and technology and the growing cross-fertilization between disciplines, the compressive sensing principle from signal processing has in recent years been introduced into target tracking. Compressive sensing theory reduces the redundant information in extracted image features and compresses them, providing a basis for online learning algorithms to update the classifier. The compressive tracking algorithm proposed by Kaihua Zhang et al. classifies the target directly on the compressed features obtained by sparse-matrix projection; it has clear advantages, but also some shortcomings.
Summary of the invention:
The purpose of the present invention is to provide a scale-adaptive anti-occlusion target tracking method based on optimal features, to solve the problems of existing compressive tracking methods: insufficient feature effectiveness, a fixed target-frame size, and the inability to detect occlusion.
The purpose of the present invention can be achieved through the following technical scheme:
A scale-adaptive anti-occlusion target tracking method based on optimal features, the tracking method comprising the following steps:
Step 1: calibrate the target area in the initial frame, generate a series of positive and negative samples, compute compressed features and train their probability distributions, extract SURF features, randomly select a fixed number of feature points as the target feature library, and construct a reference-point model;
Step 2: input the t-th frame, search all candidate target frames within a certain radius around the previous frame's result and extract their compressed features, and construct and rank an optimal feature pool with a genetic algorithm;
Step 3: feed the first n feature groups of the feature pool into a naive Bayes classifier to compute response values, take the maximum as the tracking result for the current frame, extract SURF feature points and match them against the target feature library, solve the target's affine transformation parameters from the matched points, and update the tracking window;
Step 4: expand the target window outward by 8 pixels to set a warning window divided into two symmetric halves, compute and normalize the colour histograms of the two halves to decide whether the target is occluded; if the occlusion condition is met, infer the target position with the SURF reference-point model; update with the final tracking result, and repeat the above steps until the video ends.
Further, the process of step 1 is as follows:
The target area in step 1 is calibrated manually in the initial frame. The generated positive and negative samples are compressed as v = Px, where P is a sparse matrix that reduces the feature dimension, and x and v are the high-dimensional feature and the low-dimensional compressed feature respectively. P(y=1) and P(y=0) denote the prior probabilities of target and background. The conditional probability distributions are modelled as Gaussians, p(v_i | y=1) ~ N(μ_i^1, σ_i^1) and p(v_i | y=0) ~ N(μ_i^0, σ_i^0), where μ_i^1 and σ_i^1 are the mean and standard deviation of feature i over the target samples, and μ_i^0 and σ_i^0 are the corresponding values over the background samples. SURF features are then extracted; 4 feature points are randomly selected as the target feature library, and feature points a certain distance from the target centre are randomly selected as the reference point set D_B and the voting set D_V.
Further, the process of step 2 is as follows:
Input the t-th frame and search for candidate target frames with D^γ = {z : ||l(z) − l_{t−1}|| &lt; γ}, where γ is the search radius and l_{t−1} is the target position in the previous frame. The feature pool is constructed and ranked with a genetic algorithm, in the following steps:
Step1: randomly generate N groups of compressed features; each classifier uses n features at a time;
Step2: introduce a selection operator to rank the features by effectiveness, select the first n groups as the classifier and carry them directly into the next generation, and clear the remaining N−n groups;
Step3: introduce a crossover operator that randomly selects 2s elite individuals for crossover, producing 2s offspring that are added to the feature pool;
Step4: introduce a mutation operator that selects m elite individuals and randomly mutates them into the feature pool;
Step5: introduce a migration operator that randomly regenerates m_i new individuals into the feature pool;
The feature groups are then ranked by the distance between their positive- and negative-class normal distributions; the larger the distance, the more discriminative the feature.
Further, the process of step 3 is as follows:
The n groups of selected features are fed to the classifier, whose response is the naive Bayes log-likelihood ratio H(v) = Σ_{i=1}^{n} log(P(v_i|y=1)/P(v_i|y=0)) (with equal priors P(y=1) = P(y=0)), where P(v_i|y=1) and P(v_i|y=0) are the positive- and negative-class conditional probabilities. SURF feature points are extracted and matched against the target feature library to obtain 4 groups of matched features, and the formula
I_t = H × I_{t−1}
is used to compute the affine transformation parameters between two adjacent frames and update the tracking window, where H is the affine transformation matrix, ρ is the scale factor, and θ is the rotation angle.
Further, the process of step 4 is as follows:
A slightly larger warning window is set by expanding the target frame outward by 8 pixels, and both windows are divided into left and right halves: the halves of the tracking window are denoted T-W(k), k = 1, 2, and the halves of the warning window A-W(k), k = 1, 2, where 1 is the left half and 2 is the right half. The colour histograms of the two parts are computed at time t and normalized, and the Bhattacharyya coefficient, ρ(k, t) = Σ_u √(p_u(k, t) q_u(k, t)) for normalized histograms p and q, is used to measure histogram similarity and detect occlusion.
If ρ2(k, t) &gt; ρ1(k, t) holds for three consecutive frames, the state is marked True (occluded); False means the target has left the occlusion. Feature points satisfying the reference-set condition are added to the reference set, and those satisfying the voting condition are added to the voting set, where d is the SURF descriptor of feature f. The feature points in the reference set vote according to P(X|I) ∝ Σ_{f∈F} P(X|f) P(f|I) and the results are accumulated; the maximum of the accumulated votes gives the inferred target position. The required positive samples are then searched within a certain radius of the tracking result, negative samples are extracted far from the target area, and the probability distributions of the target and background regions are updated accordingly.
The beneficial effects of the present invention are:
1. The optimal search strategy of a genetic algorithm is introduced: selection, crossover, mutation, and migration operators simulate the evolution of organisms, a fitness function expresses each individual's adaptation to the current environment, and an optimal feature pool for the target is constructed and updated in real time. This guarantees feature effectiveness and solves the poor classifier stability caused by unstable feature selection.
2. SURF features of the target are extracted to build the target feature library, and the tracking-window size is updated through the affine transformation between adjacent frames, solving the incorrect sample extraction that a fixed tracking window causes when the target changes.
3. An occlusion-detection mechanism is added: a warning window detects both the onset of occlusion and the target leaving the occlusion, improving tracking effectiveness.
4. A reference-point model with a fixed spatial relationship to the target is built from the extracted SURF features; when occlusion occurs, the reference points are used to infer the target position, improving robustness under occlusion.
Brief description of the drawings:
Fig. 1 is a flowchart of the present invention;
Fig. 2 is a schematic diagram of the occlusion-detection warning window.
Detailed description:
The present invention is further described below in conjunction with the accompanying drawings.
As shown in Figs. 1-2, the scale-adaptive anti-occlusion target tracking method based on optimal features proceeds as follows:
Step 1: calibrate the target area in the initial frame, generate a series of positive and negative samples, compute compressed features and train their probability distributions, extract SURF features, randomly select a fixed number of feature points as the target feature library, and construct a reference-point model.
In a specific embodiment, step 1 includes:
The target area in step 1 is calibrated manually in the initial frame. The generated positive and negative samples are compressed as v = Px, where P is a sparse matrix that reduces the feature dimension, and x and v are the high-dimensional feature and the low-dimensional compressed feature respectively. P(y=1) and P(y=0) denote the prior probabilities of target and background. The conditional probability distributions are modelled as Gaussians, p(v_i | y=1) ~ N(μ_i^1, σ_i^1) and p(v_i | y=0) ~ N(μ_i^0, σ_i^0), where μ_i^1 and σ_i^1 are the mean and standard deviation of feature i over the target samples, and μ_i^0 and σ_i^0 are the corresponding values over the background samples. SURF features are then extracted; 4 feature points are randomly selected as the target feature library, and feature points a certain distance from the target centre are randomly selected as the reference point set D_B and the voting set D_V.
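As an illustration of the machinery in this step, the sketch below builds a very sparse random projection matrix (Achlioptas-style, a common choice in compressive tracking), estimates the per-feature Gaussian parameters from positive and negative samples, and evaluates the naive Bayes log-ratio response used later in step 3. All dimensions, sample counts, and data are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(m, n, s=3):
    """Very sparse random measurement matrix P (m x n): entries are
    +-sqrt(s) with probability 1/(2s) each, otherwise 0."""
    u = rng.random((m, n))
    P = np.zeros((m, n))
    P[u < 1 / (2 * s)] = np.sqrt(s)
    P[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return P

def gaussian_params(V):
    """Per-feature mean and std of a compressed feature matrix V (m x samples)."""
    return V.mean(axis=1), V.std(axis=1) + 1e-6

def log_response(v, mu1, sig1, mu0, sig0):
    """Naive Bayes response H(v) = sum_i log(p(v_i|y=1)/p(v_i|y=0))
    under per-feature Gaussian class models (equal priors assumed)."""
    def log_pdf(x, mu, sig):
        return -0.5 * ((x - mu) / sig) ** 2 - np.log(sig)
    return float(np.sum(log_pdf(v, mu1, sig1) - log_pdf(v, mu0, sig0)))

n_dim, m_dim = 100, 10                  # assumed feature dimensions
X_pos = rng.random((n_dim, 45)) + 0.5   # 45 positive samples (toy data)
X_neg = rng.random((n_dim, 50))         # 50 negative samples (toy data)
P = sparse_projection(m_dim, n_dim)
V_pos, V_neg = P @ X_pos, P @ X_neg     # v = P x
mu1, sig1 = gaussian_params(V_pos)      # target-class parameters
mu0, sig0 = gaussian_params(V_neg)      # background-class parameters

H_pos = [log_response(V_pos[:, j], mu1, sig1, mu0, sig0) for j in range(V_pos.shape[1])]
H_neg = [log_response(V_neg[:, j], mu1, sig1, mu0, sig0) for j in range(V_neg.shape[1])]
```

On toy data whose classes are separated, positive samples receive higher average responses than negative ones.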
Step 2: input the t-th frame, search all candidate target frames within a certain radius around the previous frame's result and extract their compressed features, and construct and rank an optimal feature pool with a genetic algorithm.
In a specific embodiment, step 2 includes:
Input the t-th frame and search for candidate target frames with D^γ = {z : ||l(z) − l_{t−1}|| &lt; γ}, where γ is the search radius and l_{t−1} is the target position in the previous frame. The feature pool is constructed and ranked with a genetic algorithm, in the following steps:
Step1: randomly generate N groups of compressed features; each classifier uses n features at a time;
Step2: introduce a selection operator to rank the features by effectiveness, select the first n groups as the classifier and carry them directly into the next generation, and clear the remaining N−n groups;
Step3: introduce a crossover operator that randomly selects 2s elite individuals for crossover, producing 2s offspring that are added to the feature pool;
Step4: introduce a mutation operator that selects m elite individuals and randomly mutates them into the feature pool;
Step5: introduce a migration operator that randomly regenerates m_i new individuals into the feature pool;
The feature groups are then ranked by the distance between their positive- and negative-class normal distributions; the larger the distance, the more discriminative the feature.
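The five GA steps above can be sketched as one generation over a pool of bit-string feature groups. The encoding, the stand-in fitness function, and the operator sizes (n = 4, s = 2, m = 2, m_i = 2) are illustrative assumptions; in the method itself, fitness would be the distance between a group's class-conditional distributions:

```python
import random

random.seed(1)

def fitness(ind):
    # stand-in fitness: a real implementation would score the feature
    # group by the distance between its class-conditional distributions
    return sum(ind) / len(ind)

def evolve(pool, n=4, s=2, m=2, mi=2, gene_len=6):
    """One GA generation over a pool of feature groups (bit-string genes)."""
    pool.sort(key=fitness, reverse=True)          # selection: rank by fitness
    elite = pool[:n]                              # keep best n groups
    children = []
    for _ in range(s):                            # crossover: 2s offspring
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, gene_len)
        children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
    mutants = []
    for ind in random.sample(elite, m):           # mutation: flip one gene
        j = random.randrange(gene_len)
        mut = list(ind)
        mut[j] = 1 - mut[j]
        mutants.append(mut)
    migrants = [[random.randint(0, 1) for _ in range(gene_len)]
                for _ in range(mi)]               # migration: fresh individuals
    return elite + children + mutants + migrants

pool = [[random.randint(0, 1) for _ in range(6)] for _ in range(10)]
new_pool = evolve(pool)
best = max(new_pool, key=fitness)
```

Keeping the elite unchanged while crossover, mutation, and migration refresh the rest of the pool matches the step list above: the best n groups survive intact, and the other N−n slots are regenerated.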
Step 3: feed the first n feature groups of the feature pool into a naive Bayes classifier to compute response values, take the maximum as the tracking result for the current frame, extract SURF feature points and match them against the target feature library, solve the target's affine transformation parameters from the matched points, and update the tracking window.
In a specific embodiment, step 3 includes:
The n groups of selected features are fed to the classifier, whose response is the naive Bayes log-likelihood ratio H(v) = Σ_{i=1}^{n} log(P(v_i|y=1)/P(v_i|y=0)) (with equal priors P(y=1) = P(y=0)), where P(v_i|y=1) and P(v_i|y=0) are the positive- and negative-class conditional probabilities. SURF feature points are extracted and matched against the target feature library to obtain 4 groups of matched features, and the formula
I_t = H × I_{t−1}
is used to compute the affine transformation parameters between two adjacent frames and update the tracking window, where H is the affine transformation matrix, ρ is the scale factor, and θ is the rotation angle.
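Solving the scale ρ and rotation θ from matched point pairs can be sketched as a least-squares similarity fit; the four point coordinates and the transform values below are made up for illustration:

```python
import numpy as np

def solve_similarity(src, dst):
    """Least-squares similarity transform mapping src -> dst:
    dst ≈ rho * R(theta) @ src + t. Returns (rho, theta, t)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    sc = src[:, 0] + 1j * src[:, 1]     # 2-D points as complex numbers
    dc = dst[:, 0] + 1j * dst[:, 1]
    sm, dm = sc.mean(), dc.mean()
    s0, d0 = sc - sm, dc - dm           # centre both point sets
    w = (np.conj(s0) @ d0) / (np.conj(s0) @ s0)   # w = rho * e^{i theta}
    rho, theta = abs(w), np.angle(w)
    t = dm - w * sm                     # translation as a complex number
    return rho, theta, np.array([t.real, t.imag])

# four matched feature points under a known transform (illustrative)
rho_true, theta_true = 1.5, np.pi / 6
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = (rho_true * src @ R.T) + np.array([2.0, 3.0])
rho, theta, t = solve_similarity(src, dst)
```

Representing 2-D points as complex numbers turns rotation-plus-scale into a single complex multiplication, so the least-squares fit reduces to one complex division.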
Step 4: expand the target window outward by 8 pixels to set a warning window divided into two symmetric halves, compute and normalize the colour histograms of the two halves to decide whether the target is occluded; if the occlusion condition is met, infer the target position with the SURF reference-point model; update with the final tracking result, and repeat the above steps until the video ends.
In a specific embodiment, step 4 includes:
A slightly larger warning window is set by expanding the target frame outward by 8 pixels, and both windows are divided into left and right halves: the halves of the tracking window are denoted T-W(k), k = 1, 2, and the halves of the warning window A-W(k), k = 1, 2, where 1 is the left half and 2 is the right half. The colour histograms of the two parts are computed at time t and normalized, and the Bhattacharyya coefficient, ρ(k, t) = Σ_u √(p_u(k, t) q_u(k, t)) for normalized histograms p and q, is used to measure histogram similarity and detect occlusion.
If ρ2(k, t) &gt; ρ1(k, t) holds for three consecutive frames, the state is marked True (occluded); False means the target has left the occlusion. Feature points satisfying the reference-set condition are added to the reference set, and those satisfying the voting condition are added to the voting set, where d is the SURF descriptor of feature f. The feature points in the reference set vote according to P(X|I) ∝ Σ_{f∈F} P(X|f) P(f|I) and the results are accumulated; the maximum of the accumulated votes gives the inferred target position. The required positive samples are then searched within a certain radius of the tracking result, negative samples are extracted far from the target area, and the probability distributions of the target and background regions are updated accordingly.
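The histogram comparison in this step can be sketched with the standard Bhattacharyya coefficient. The 4-bin histograms below are made up for illustration; the per-half windows and the three-consecutive-frame rule follow the description above:

```python
def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalized histograms (1.0 = identical)."""
    return sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))

def normalize(h):
    """Normalize a histogram so its bins sum to 1."""
    s = float(sum(h))
    return [v / s for v in h]

# made-up 4-bin colour histograms for one half-window
reference = normalize([8, 4, 2, 2])     # model histogram of the target half
similar = normalize([7, 5, 2, 2])       # unoccluded frame: little change
dissimilar = normalize([1, 1, 7, 7])    # occluder entering: strong change

rho_similar = bhattacharyya(reference, similar)
rho_dissimilar = bhattacharyya(reference, dissimilar)
```

In the method itself, a single-frame comparison is not enough: the occlusion flag is raised only after the similarity ordering persists for three consecutive frames.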
The specific embodiments of the present invention have been described in detail above in conjunction with the accompanying drawings, but the present invention is not limited to these embodiments; various changes can be made within the knowledge of those of ordinary skill in the art without departing from the gist of the present invention.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910520221.4A CN110232704A (en) | 2019-06-14 | 2019-06-14 | Dimension self-adaption anti-shelter target tracking based on optimal characteristics |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910520221.4A CN110232704A (en) | 2019-06-14 | 2019-06-14 | Dimension self-adaption anti-shelter target tracking based on optimal characteristics |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN110232704A true CN110232704A (en) | 2019-09-13 |
Family
ID=67859914
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910520221.4A Pending CN110232704A (en) | 2019-06-14 | 2019-06-14 | Dimension self-adaption anti-shelter target tracking based on optimal characteristics |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110232704A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111275741A (en) * | 2020-01-19 | 2020-06-12 | 北京迈格威科技有限公司 | Target tracking method and device, computer equipment and storage medium |
| CN116596958A (en) * | 2023-07-18 | 2023-08-15 | 四川迪晟新达类脑智能技术有限公司 | Target tracking method and device based on online sample augmentation |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102881022A (en) * | 2012-07-20 | 2013-01-16 | 西安电子科技大学 | Concealed-target tracking method based on on-line learning |
| US8996350B1 (en) * | 2011-11-02 | 2015-03-31 | Dub Software Group, Inc. | System and method for automatic document management |
- 2019-06-14: Application CN201910520221.4A filed in China (CN); patent CN110232704A, status Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8996350B1 (en) * | 2011-11-02 | 2015-03-31 | Dub Software Group, Inc. | System and method for automatic document management |
| CN102881022A (en) * | 2012-07-20 | 2013-01-16 | 西安电子科技大学 | Concealed-target tracking method based on on-line learning |
Non-Patent Citations (5)
| Title |
|---|
| Lü Yunqiu, "Real-time tracking method based on compressive tracking and genetic algorithm", Guidance & Fuze * |
| Jiang Chao, "Research on real-time target tracking algorithms based on compressive sensing", China Master's Theses Full-text Database * |
| Li Bojiang, "Research on the application of the mean-shift algorithm to target tracking under scale and speed changes", China Master's Theses Full-text Database * |
| Li Minmin, "Target tracking method based on the TLD model", China Master's Theses Full-text Database * |
| Huang Kun, "Research on target tracking technology based on online learning algorithms", China Master's Theses Full-text Database * |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111275741A (en) * | 2020-01-19 | 2020-06-12 | 北京迈格威科技有限公司 | Target tracking method and device, computer equipment and storage medium |
| CN111275741B (en) * | 2020-01-19 | 2023-09-08 | 北京迈格威科技有限公司 | Target tracking method, device, computer equipment and storage medium |
| CN116596958A (en) * | 2023-07-18 | 2023-08-15 | 四川迪晟新达类脑智能技术有限公司 | Target tracking method and device based on online sample augmentation |
| CN116596958B (en) * | 2023-07-18 | 2023-10-10 | 四川迪晟新达类脑智能技术有限公司 | Target tracking method and device based on online sample augmentation |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111611905B (en) | Visible light and infrared fused target identification method | |
| CN107705560B (en) | Road congestion detection method integrating visual features and convolutional neural network | |
| CN111783576B (en) | Pedestrian re-identification method based on improved YOLOv3 network and feature fusion | |
| CN107515895B (en) | A visual target retrieval method and system based on target detection | |
| CN108171184B (en) | Method for re-identifying pedestrians based on Simese network | |
| CN114023062B (en) | Traffic flow information monitoring method based on deep learning and edge calculation | |
| WO2019232894A1 (en) | Complex scene-based human body key point detection system and method | |
| Li et al. | Traffic anomaly detection based on image descriptor in videos | |
| CN110598535A (en) | Face recognition analysis method used in monitoring video data | |
| CN104239898A (en) | Method for carrying out fast vehicle comparison and vehicle type recognition at tollgate | |
| CN106934817B (en) | Multi-attribute-based multi-target tracking method and device | |
| TW202142431A (en) | Driving assistance system based on deep learning and the method thereof | |
| CN109325546B (en) | Step-by-step footprint identification method combining features of step method | |
| CN102169631A (en) | Manifold-learning-based traffic jam event cooperative detecting method | |
| CN107291936A (en) | The hypergraph hashing image retrieval of a kind of view-based access control model feature and sign label realizes that Lung neoplasm sign knows method for distinguishing | |
| CN111126303B (en) | A Multi-Space Detection Method for Intelligent Parking | |
| CN107659754A (en) | Effective method for concentration of monitor video in the case of a kind of leaf disturbance | |
| CN111242046B (en) | Ground traffic sign recognition method based on image retrieval | |
| CN106503748A (en) | A kind of based on S SIFT features and the vehicle targets of SVM training aids | |
| CN107315998A (en) | Vehicle class division method and system based on lane line | |
| CN110349179A (en) | Visual tracking method and device outside a kind of visible red based on more adapters | |
| CN113947570A (en) | A crack identification method based on machine learning algorithm and computer vision | |
| CN110232704A (en) | Dimension self-adaption anti-shelter target tracking based on optimal characteristics | |
| CN114170627A (en) | Pedestrian detection method based on improved Faster RCNN | |
| CN115690584A (en) | SSD (solid State drive) -based improved power distribution room foreign matter detection method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190913 |