CN107016412A - Adaptive template-updating strategy based on outward appearance and motion continuity cross validation - Google Patents
Adaptive template-updating strategy based on outward appearance and motion continuity cross validation Download PDFInfo
- Publication number
- CN107016412A CN107016412A CN201710198092.2A CN201710198092A CN107016412A CN 107016412 A CN107016412 A CN 107016412A CN 201710198092 A CN201710198092 A CN 201710198092A CN 107016412 A CN107016412 A CN 107016412A
- Authority
- CN
- China
- Prior art keywords
- target
- template
- appearance
- image
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to technical field of computer vision, more particularly to a kind of template renewal strategy for target following.A kind of Adaptive template-updating strategy based on outward appearance and motion continuity cross validation, its technical scheme is:Initially set up target appearance model and target movement model;Interframe template matches and conversion are completed using display model;Target trajectory is predicted and tracked using motion model.The error accumulation caused in order to avoid template renewal and the dynamic change of display model, the present invention propose the Adaptive template-updating strategy based on display model and motion model cross check.Tracking performance greatly improves the precision, reliability and continuation of target following with the decline of time during can effectively prevent vision tracking using this method.
Description
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a template updating strategy for target tracking.
Background
Visual tracking, also called template tracking, can be described as the basic process of determining any object in the video starting image as the tracking target (i.e. template), and then accurately positioning the target in the subsequent image sequence by using a visual tracking algorithm. Visual tracking is one of the hot subjects in the field of machine vision, and is widely applied to occasions such as security monitoring, event relay, unmanned platforms, human-computer interaction and the like.
In principle, a basic visual tracking process mainly includes feature extraction and expression of a target, feature matching and template transfer. The visual tracking is premised on the assumption that the appearance characteristics of target dimension, color, structural morphology and the like are basically kept unchanged in the tracking process, the assumption can be basically met in a short time range, but the appearance of the target inevitably changes along with the time and scene change, so that template expiration and mismatch are caused. In order to avoid template expiration and achieve target tracking in a longer time range, the target template should be updated continuously as the tracking process advances to compensate for the template changes caused by the continuous changes of the dynamic scene. The simplest template updating strategy is to use the tracked result of each frame as a reference template for target tracking in the next frame, i.e. each frame is updated with a template. However, the new problem brought by the updating strategy is that the tracking error of each frame of tracking result is gradually accumulated, and finally the tracking result is gradually deviated from the target, so that the template is shifted. Therefore, there is a contradiction between template updating in visual tracking-if the template updating frequency is too slow, the template expiration caused by scene change will result in tracking loss; if the frequency of template updates is too fast, template matching errors can accumulate quickly leading to template drift and thus also to loss of tracking. Therefore, although different visual tracking algorithms are proposed in recent years, it is still a recognized problem to achieve reliable visual tracking over a long time range and under dynamic scenes.
Disclosure of Invention
The purpose of the invention is: in order to avoid the attenuation and failure of tracking performance caused by drift error accumulation and target appearance template expiration in the target tracking process, an adaptive template updating strategy based on appearance and motion continuity cross validation is provided.
The technical scheme of the invention is as follows: an adaptive template updating strategy based on appearance and motion continuity cross validation comprises the following steps:
the method comprises the following steps: establishing and tracking a target appearance model;
1.1 appearance template feature extraction and expression
Setting I as a certain frame image, and determining an object in the image I by using a rectangular frame T ═ x, y, w, h, wherein (x, y) is an upper left point of the rectangular frame, and w, h are the width and height of the rectangular frame respectively; taking an image area in the rectangular frame T as a reference template and extracting feature points to obtain a feature point set S which is taken as a feature expression of the template;
S={pi|pi=(xi,yi),i=1,2,…} (1)
wherein p isiAny feature point in the image I;
1.2 appearance template feature matching
In a subsequent frame image I ', each feature point in the feature point set S is tracked to obtain a feature point set S ', S ═ p 'i|p'i=(x'i,y'i),i=1,2,…,N};
Wherein, p'iAny feature point in the image I';
matching corresponding characteristic points between the image I and the image I', and forming a sparse optical flow field between corresponding characteristic point pairs;
1.3 mapping solution and template appearance board transfer
Solving a geometric mapping model H of the target in the next two frame images I, I 'according to a sparse light field formed by matching feature point pairs between the two frame images I, I', wherein H is an affine transformation matrix, and any pair of matching feature points satisfies the following conditions:
α therein1,…,α6For affine transformation matrix parameters, the optimal solution of H is obtained by solving the extremum for the objective function of formula (4);
by usingMoment of alignmentMapping and transforming four vertexes of the frame T to obtain a position T 'where the target is located in the image I';
step two: establishing and updating a target motion model;
performing kinematic modeling and Kalman filtering on the matching result of the appearance target template in the first step: let TnFor the nth frame image In(ii) an appearance target template of (x)n,yn) As a template TnThe geometric center coordinates of the Kalman filtering system variables and the system equations are respectively,
Xn+1=AXn+w (6)
Yn+1=CXn+1+v (7)
wherein,andvelocity and acceleration of the target in the horizontal and vertical directions, respectively; w is the system noise; v is measurement noise, A and C are a Kalman filtering system state transition matrix and an observation matrix respectively:
where dt is the image update frequency (frame/time);
according to image InTarget of (1)And (4) tracking the result, and performing one-step prediction on the position of the target in the next frame image by using the formulas (6) and (7) to obtain an image In+1Predicted position of center of medium targetObtaining a moving object prediction template by successively bearing the size of the previous frame of objects
Step three: self-adaptive template updating strategy;
establishing a K-unit storage structure to store a historical tracking result;
Φm={Ti|m-K≤i≤m-1,i∈N*} (10)
where m denotes the current picture frame number, TiRepresenting the tracking result of the appearance template of the ith frame image; the tracking results of the last K frames in the tracking process are sequentially stored in the K-unit storage structure;
3.1 give a new frame of image ImAccording to the previous frame image Im-1Appearance tracking result of (1)m-1Using the step two pairs of images ImThe target in (1) is estimated to obtain a motion prediction result
3.2 selecting storage structures ΦmFirst element TiI-m-K as a reference template, pair I of images according to the proceduremPerforming template matching to obtain an appearance tracking result
3.3, cross checking the appearance continuity and the motion continuity of the appearance tracking result and the motion prediction result;
3.3.1 finding the appearance matching resultAnd motion prediction resultsThe rate of change in the scale of (2):
wherein area (. +) is a function of area ifThenIf the appearance continuity check is satisfied, setting the target as a candidate target and entering step 3.3.2, otherwise, making i equal to i +1 and returning to step 3.2, wherein f*Checking a threshold value for the scale;
3.3.2 motion continuity verification: establishing a check wave gate based on a probability data correlation filtering theory:
wherein g is a threshold value determined according to chi-squared distribution,is composed ofCenter of (A), PmState information generated for kalman filtering;
wherein S isw=E(w,wT) Is the system noise covariance;
if the center of the candidate target falls into the check wave gate, the motion continuity check is met, the step four is entered, otherwise, i is made to be i +1, and the target in the next unit in the storage structure is adopted as a reference template to perform target tracking on the image of the frame again;
3.3.3 if phimThe tracking templates generated by all the elements in the target can not pass 3.3.1 to meet the requirement of appearance continuity check, and at the moment, the appearance of the target is changed violently, so that the motion prediction result is enabled to be obtainedAs a result of this frame trackingEntering the step four; if the target meets the 3.3.1 verification but cannot pass the 3.3.2 verification, averaging all results meeting the 3.3.1 verification to be used as the tracking result of the current frameEntering the step four;
step four: system updating;
4.1 based on the tracking resultsAnd (3) updating the system by using a Kalman filtering principle:
wherein, KmKalman gain, defined as:
Km=APm-1CT(CPm-1CT+Sv)-1(16)
wherein Sv=E(v,vT) Obtaining an image I from the filtered result for measuring the noise covariancemTarget final tracking template T in (1)m;
4.2 updating the K-cell storage Structure, deleting the first element T in the Structurem-KAnd will TmAnd adding the element to the tail of the storage structure, finishing the updating of the element of the storage unit, enabling m to be m +1, entering a step 3.1, and circulating the tracking process.
Further, in the step 1.1, the feature points are extracted by using a Shi-Tomasi algorithm.
In the step 1.2, each feature point in the feature point set S is tracked by using a Kanade-Lucas-Tomasi algorithm.
Has the advantages that: the invention simplifies the visual tracking process into a two-dimensional template tracking process with a variable appearance model and a certain motion uncertainty motion model. The appearance model completes interframe template matching and conversion through a Kanade-Lucas-Tomasi coefficient optical flow theory; and the motion model predicts and tracks the target track through Kalman filtering. In order to avoid error accumulation and dynamic change of the appearance model caused by updating the template, the invention provides an adaptive template updating strategy based on cross check of the appearance model and the motion model. The invention effectively avoids the decline of the tracking performance along with time in the visual tracking process, and greatly improves the precision, reliability and continuity of target tracking.
Drawings
FIG. 1 is a flow chart of the operation of the present invention.
Detailed Description
Referring to the attached drawings, an adaptive template updating strategy based on appearance and motion continuous cross validation comprises the following steps:
the method comprises the following steps: establishing and tracking a target appearance model;
1.1 appearance template feature extraction and expression
Setting I as a certain frame image, and determining an object in the image I by using a rectangular frame T ═ x, y, w, h, wherein (x, y) is an upper left point of the rectangular frame, and w, h are the width and height of the rectangular frame respectively; in order to extract the rectangular frame in a certain frame image I' (not necessarily adjacent to I time sequence) in the following, firstly, taking an image area in the rectangular frame T as a reference template and extracting a corner point, namely a characteristic point according to Shi-Tomasi algorithm to obtain a characteristic point set S and taking the characteristic point set S as a characteristic expression of the template;
S={pi|pi=(xi,yi),i=1,2,…} (1)
wherein p isiAny feature point in the image I;
1.2 appearance template feature matching
In a subsequent frame image I ', the feature points in the feature point set S are tracked by using the Kanade-Lucas-Tomasi algorithm, and a feature point set S', S ═ p 'is obtained'i|p'i=(x'i,y'i),i=1,2,…,N};
Wherein, p'iAny feature point in the image I';
the characteristic points between the image I and the image I' are in one-to-one correspondence to form a sparse optical flow field;
1.3 mapping solution and template appearance board transfer
Solving a geometric mapping model H of the target in the next two frame images I, I 'according to a sparse light field formed by matching feature point pairs between the two frame images I, I', wherein H is an affine transformation matrix, and any pair of matching feature points satisfies the following conditions:
α therein1,…,α6For affine transformation matrix parameters, the optimal solution of H is obtained by solving the extremum for the objective function of formula (4);
by usingMapping and transforming four vertexes of the rectangular frame T to obtain a position T 'where the target is located in the image I';
step two: establishing and updating a target motion model;
based on the assumption of continuity of target motion, performing kinematic modeling and Kalman filtering on the matching result of the appearance target template in the step one: let TnFor the nth frame image In(ii) an appearance target template of (x)n,yn) As a template TnThe geometric center coordinates of the Kalman filtering system variables and the system equations are respectively,
Xn+1=AXn+w (6)
Yn+1=CXn+1+v (7)
wherein,andvelocity and acceleration of the target in the horizontal and vertical directions, respectively; w is the system noise; v is measurement noise, A and C are a Kalman filtering system state transition matrix and an observation matrix respectively:
where dt is the image update frequency (frame/time);
according to image InThe target tracking result in (1) is obtained by predicting the position of the target in the next frame image in one step by using the formulas (6) and (7) to obtain an image In+1Predicted position of center of medium targetObtaining a moving object prediction template by successively bearing the size of the previous frame of objects
Step three: self-adaptive template updating strategy;
in order to enable each frame of image to obtain the optimal frame interval in the tracking process, the updating strategy selects a template with the interval of K with the current frame as a reference template when each frame of image starts to be matched, and then adjusts the frame interval on line by carrying out mutual analysis on motion continuity and appearance continuity on a matching result, so that a local suboptimal tracking result is obtained, and the long-range tracking effect is achieved. In order to realize a template updating strategy, a K-unit storage structure is established to store a historical tracking result;
Φm={Ti|m-K≤i≤m-1,i∈N*} (10)
where m denotes the current picture frame number, TiRepresenting the tracking result of the appearance template of the ith frame image; the tracking results of the last K frames in the tracking process are sequentially stored in the K-unit storage structure;
3.1 give a new frame of image ImAccording to the previous frame image Im-1Appearance tracking result of (1)m-1Using the step two pairs of images ImThe target in (1) is estimated to obtain a motion prediction result
3.2 selecting storage structures ΦmFirst element TiI-m-K as a reference template, pair I of images according to the proceduremPerforming template matching to obtain an appearance tracking result
3.3, cross checking the appearance continuity and the motion continuity of the appearance tracking result and the motion prediction result;
3.3.1 finding the appearance matching resultAnd motion prediction resultsThe rate of change in the scale of (2):
wherein area (. +) is a function of area ifThenIf the appearance continuity check is satisfied, setting the target as a candidate target and entering step 3.3.2, otherwise, making i equal to i +1 and returning to step 3.2, wherein f*Checking a threshold value for the scale;
3.3.2 motion continuity verification: establishing a check wave gate based on a probability data correlation filtering theory:
wherein g is a threshold value determined according to chi-squared distribution,is composed ofCenter of (A), PmState information generated for kalman filtering;
wherein S isw=E(w,wT) Is the system noise covariance;
if the center of the candidate target falls into the check wave gate, the motion continuity check is met, the step four is entered, otherwise, i is made to be i +1, and the target in the next unit in the storage structure is adopted as a reference template to perform target tracking on the image of the frame again;
3.3.3 if phimThe tracking templates generated by all the elements in the target can not pass 3.3.1 to meet the requirement of appearance continuity check, and at the moment, the appearance of the target is changed violently, so that the motion prediction result is enabled to be obtainedAs a result of this frame trackingEntering the step four; if the target meets the 3.3.1 verification but cannot pass the 3.3.2 verification, averaging all results meeting the 3.3.1 verification to be used as the tracking result of the current frameEntering the step four;
step four: system updating;
4.1 based on the tracking resultsAnd (3) updating the system by using a Kalman filtering principle:
wherein, KmKalman gain, defined as:
Km=APm-1CT(CPm-1CT+Sv)-1(16)
wherein Sv=E(v,vT) Obtaining an image I from the filtered result for measuring the noise covariancemTarget final tracking template T in (1)m;
4.2 updating the K-cell storage Structure, deleting the first element T in the Structurem-KAnd will TmAnd adding the element to the tail of the storage structure, finishing the updating of the element of the storage unit, enabling m to be m +1, entering a step 3.1, and circulating the tracking process.
Claims (3)
1. An adaptive template updating strategy based on appearance and motion continuity cross validation is characterized by comprising the following steps:
the method comprises the following steps: establishing and tracking a target appearance model;
1.1 appearance template feature extraction and expression
Setting I as a certain frame image, and determining an object in the image I by using a rectangular frame T ═ x, y, w, h, wherein (x, y) is an upper left point of the rectangular frame, and w, h are the width and height of the rectangular frame respectively; taking an image area in the rectangular frame T as a reference template and extracting feature points to obtain a feature point set S which is taken as a feature expression of the template;
S={pi|pi=(xi,yi),i=1,2,…} (1)
wherein p isiAny feature point in the image I;
1.2 appearance template feature matching
In a subsequent frame image I ', each feature point in the feature point set S is tracked to obtain a feature point set S ', S ═ p 'i|p′i=(x′i,y′i),i=1,2,…,N};
Wherein, p'iAny feature point in the image I';
the characteristic points between the image I and the image I' are in one-to-one correspondence to form a sparse optical flow field;
1.3 mapping solution and template appearance board transfer
Solving a geometric mapping model H of the target in the next two frame images I, I 'according to a sparse light field formed by matching feature point pairs between the two frame images I, I', wherein H is an affine transformation matrix, and any pair of matching feature points satisfies the following conditions:
α therein1,…,α6For affine transformation matrix parameters, the optimal solution of H is obtained by solving the extremum for the objective function of formula (4);
by usingMapping and transforming four vertexes of the rectangular frame T to obtain a position T 'where the target is located in the image I';
step two: establishing and updating a target motion model;
performing kinematic modeling and Kalman filtering on the matching result of the appearance target template in the first step: let TnFor the nth frame image In(ii) an appearance target template of (x)n,yn) As a template TnThe geometric center coordinates of the Kalman filtering system variables and the system equations are respectively,
Xn+1=AXn+w (6)
Yn+1=CXn+1+v (7)
wherein,andvelocity and acceleration of the target in the horizontal and vertical directions, respectively; w is the system noise; v is measurement noise, A and C are a Kalman filtering system state transition matrix and an observation matrix respectively:
where dt is the image update frequency (frame/time);
according to image InThe target tracking result in (1) is obtained by predicting the position of the target in the next frame image in one step by using the formulas (6) and (7) to obtain an image In+1Predicted position of center of medium targetObtaining a moving object prediction by relaying the size of the previous frame objectForm panel
Step three: self-adaptive template updating strategy;
establishing a K-unit storage structure to store a historical tracking result;
Φm={Ti|m-K≤i≤m-1,i∈N*} (10)
where m denotes the current picture frame number, TiRepresenting the tracking result of the appearance template of the ith frame image; the tracking results of the last K frames in the tracking process are sequentially stored in the K-unit storage structure;
3.1 give a new frame of image ImAccording to the previous frame image Im-1Appearance tracking result of (1)m-1Using the step two pairs of images ImThe target in (1) is estimated to obtain a motion prediction result
3.2 selecting storage structures ΦmFirst element TiI-m-K as a reference template, pair I of images according to the proceduremPerforming template matching to obtain an appearance tracking result
3.3, cross checking the appearance continuity and the motion continuity of the appearance tracking result and the motion prediction result;
3.3.1 finding the appearance matching resultAnd motion prediction resultsThe rate of change in the scale of (2):
wherein area (. +) is a function of area ifThenIf the appearance continuity check is satisfied, setting the target as a candidate target and entering step 3.3.2, otherwise, making i equal to i +1 and returning to step 3.2, wherein f*Checking a threshold value for the scale;
3.3.2 motion continuity verification: establishing a check wave gate based on a probability data correlation filtering theory:
wherein g is a threshold value determined according to chi-squared distribution,is composed ofCenter of (A), PmState information generated for kalman filtering;
wherein S isw=E(w,wT) Is the system noise covariance;
if the center of the candidate target falls into the check wave gate, the motion continuity check is met, the step four is entered, otherwise, i is made to be i +1, and the target in the next unit in the storage structure is adopted as a reference template to perform target tracking on the image of the frame again;
3.3.3 if phimThe tracking templates generated by all the elements in the target can not pass 3.3.1 to meet the requirement of appearance continuity check, and at the moment, the appearance of the target is changed violently, so that the motion prediction result is enabled to be obtainedAs a result of this frame trackingEntering the step four; if the target meets the 3.3.1 verification but cannot pass the 3.3.2 verification, averaging all results meeting the 3.3.1 verification to be used as the tracking result of the current frameEntering the step four;
step four: system updating;
4.1 based on the tracking resultsAnd (3) updating the system by using a Kalman filtering principle:
wherein, KmKalman gain, defined as:
Km=APm-1CT(CPm-1CT+Sv)-1(16)
wherein Sv=E(v,vT) Obtaining an image I from the filtered result for measuring the noise covariancemTarget final tracking template T in (1)m;
4.2 updating the K-cell storage Structure, deleting the first element T in the Structurem-KAnd will TmAnd adding the element to the tail of the storage structure, finishing the updating of the element of the storage unit, enabling m to be m +1, entering a step 3.1, and circulating the tracking process.
2. An adaptive template update strategy based on appearance and motion continuity cross validation as claimed in claim 1 wherein: in the step 1.1, the feature points are extracted by using Shi-Tomasi algorithm.
3. An adaptive template update strategy based on appearance and motion continuity cross validation as claimed in claim 1 or 2 wherein: in the step 1.2, each feature point in the feature point set S is tracked by using a Kanade-Lucas-Tomasi algorithm.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710198092.2A CN107016412A (en) | 2017-03-29 | 2017-03-29 | Adaptive template-updating strategy based on outward appearance and motion continuity cross validation |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710198092.2A CN107016412A (en) | 2017-03-29 | 2017-03-29 | Adaptive template-updating strategy based on outward appearance and motion continuity cross validation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN107016412A true CN107016412A (en) | 2017-08-04 |
Family
ID=59445186
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710198092.2A Pending CN107016412A (en) | 2017-03-29 | 2017-03-29 | Adaptive template-updating strategy based on outward appearance and motion continuity cross validation |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN107016412A (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109598746A (en) * | 2018-12-26 | 2019-04-09 | 成都纵横自动化技术股份有限公司 | A kind of method and device tracking image template generation |
| CN111667512A (en) * | 2020-05-28 | 2020-09-15 | 浙江树人学院(浙江树人大学) | Multi-target vehicle track prediction method based on improved Kalman filtering |
| CN111739053A (en) * | 2019-03-21 | 2020-10-02 | 四川大学 | An online multi-pedestrian detection and tracking method in complex scenes |
| CN113449544A (en) * | 2020-03-24 | 2021-09-28 | 华为技术有限公司 | Image processing method and system |
| CN114066951A (en) * | 2021-11-18 | 2022-02-18 | Oppo广东移动通信有限公司 | Image registration method, device, storage medium and electronic device |
| CN116152189A (en) * | 2023-01-31 | 2023-05-23 | 华纺股份有限公司 | Pattern fabric flaw detection method, system and detection terminal |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130266184A1 (en) * | 2009-12-23 | 2013-10-10 | General Electric Company | Methods for Automatic Segmentation and Temporal Tracking |
| CN106447696A (en) * | 2016-09-29 | 2017-02-22 | 郑州轻工业学院 | Bidirectional SIFT (scale invariant feature transformation) flow motion evaluation-based large-displacement target sparse tracking method |
-
2017
- 2017-03-29 CN CN201710198092.2A patent/CN107016412A/en active Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130266184A1 (en) * | 2009-12-23 | 2013-10-10 | General Electric Company | Methods for Automatic Segmentation and Temporal Tracking |
| CN106447696A (en) * | 2016-09-29 | 2017-02-22 | 郑州轻工业学院 | Bidirectional SIFT (scale invariant feature transformation) flow motion evaluation-based large-displacement target sparse tracking method |
Non-Patent Citations (3)
| Title |
|---|
| BAOFENG WANG 等: "Motion-Based Feature Selection and Adaptive Template Update Strategy for Robust Visual Tracking", 《2016 3RD INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING》 * |
| BAOFENG WANG 等: "Multi-vehicle detection with identity awareness using cascade Adaboost and Adaptive Kalman filter for driver assistant system", 《PLOS ONE》 * |
| YAAKOV BAR-SHALOM 等: "The probabilistic data association filter", 《IEEE CONTROL SYSTEMS MAGAZINE》 * |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109598746A (en) * | 2018-12-26 | 2019-04-09 | 成都纵横自动化技术股份有限公司 | A kind of method and device tracking image template generation |
| CN109598746B (en) * | 2018-12-26 | 2021-10-22 | 成都纵横自动化技术股份有限公司 | Method and device for generating tracking image template |
| CN111739053A (en) * | 2019-03-21 | 2020-10-02 | 四川大学 | An online multi-pedestrian detection and tracking method in complex scenes |
| CN111739053B (en) * | 2019-03-21 | 2022-10-21 | 四川大学 | An online multi-pedestrian detection and tracking method in complex scenes |
| CN113449544A (en) * | 2020-03-24 | 2021-09-28 | 华为技术有限公司 | Image processing method and system |
| US12347164B2 (en) | 2020-03-24 | 2025-07-01 | Shenzhen Yinwang Intelligent Technologies Co., Ltd. | Image processing method and system for updating template library based on life value of facial template images |
| CN113449544B (en) * | 2020-03-24 | 2025-08-05 | 深圳引望智能技术有限公司 | Image processing method and system |
| CN111667512A (en) * | 2020-05-28 | 2020-09-15 | 浙江树人学院(浙江树人大学) | Multi-target vehicle track prediction method based on improved Kalman filtering |
| CN111667512B (en) * | 2020-05-28 | 2024-04-09 | 浙江树人学院(浙江树人大学) | Multi-target vehicle track prediction method based on improved Kalman filtering |
| CN114066951A (en) * | 2021-11-18 | 2022-02-18 | Oppo广东移动通信有限公司 | Image registration method, device, storage medium and electronic device |
| CN116152189A (en) * | 2023-01-31 | 2023-05-23 | 华纺股份有限公司 | Pattern fabric flaw detection method, system and detection terminal |
| CN116152189B (en) * | 2023-01-31 | 2023-12-19 | 华纺股份有限公司 | Pattern fabric flaw detection method, system and detection terminal |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107016412A (en) | Adaptive template-updating strategy based on outward appearance and motion continuity cross validation | |
| CN109949375B (en) | Mobile robot target tracking method based on depth map region of interest | |
| CN107292911B (en) | Multi-target tracking method based on multi-model fusion and data association | |
| CN107563313B (en) | Multi-target pedestrian detection and tracking method based on deep learning | |
| CN108010067B (en) | A kind of visual target tracking method based on combination determination strategy | |
| Tissainayagam et al. | Object tracking in image sequences using point features | |
| US6999599B2 (en) | System and method for mode-based multi-hypothesis tracking using parametric contours | |
| JP4181473B2 (en) | Video object trajectory synthesis apparatus, method and program thereof | |
| CN116309731A (en) | A Multi-Target Dynamic Tracking Method Based on Adaptive Kalman Filter | |
| CN110490907B (en) | Moving target tracking method based on multi-target feature and improved correlation filter | |
| CN110473231B (en) | A target tracking method using twin fully convolutional networks with a predictive learning update strategy | |
| CN112884816A (en) | Vehicle feature deep learning recognition track tracking method based on image system | |
| CN105913028A (en) | Face tracking method and face tracking device based on face++ platform | |
| CN107967692A (en) | A kind of target following optimization method based on tracking study detection | |
| KR100994367B1 (en) | Moving target motion tracking method of video tracking device | |
| CN106296729A (en) | The REAL TIME INFRARED THERMAL IMAGE imaging ground moving object tracking of a kind of robust and system | |
| CN110660084A (en) | Multi-target tracking method and device | |
| CN116778410A (en) | A method for detecting and tracking coal mine workers based on deep learning | |
| JP2010244207A (en) | Moving object tracking device, moving object tracking method, and moving object tracking program | |
| CN114924285B (en) | Improved laser radar vehicle tracking method, system and medium based on L-shaped model | |
| CN115861386A (en) | UAV multi-target tracking method and device through divide-and-conquer association | |
| CN112200831B (en) | Dynamic template-based dense connection twin neural network target tracking method | |
| Stumper et al. | Offline object extraction from dynamic occupancy grid map sequences | |
| Wang et al. | Improving target detection by coupling it with tracking | |
| CN115358941B (en) | Real-time semantic vSLAM algorithm based on depth map restoration |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| TA01 | Transfer of patent application right |
Effective date of registration: 20180817 Address after: 102206 Beijing Changping District Shahe Town North Street five home area 2 Building 6 level 4 units 637 Applicant after: Beijing Liu Ma Chi Chi Technology Co., Ltd. Address before: 101102 6 floor, 7 Building 7, 28 street, Jingsheng South Street, Tongzhou District, Beijing. Applicant before: Beijing Bei ang Technology Co., Ltd. |
|
| TA01 | Transfer of patent application right | ||
| WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20170804 |
|
| WD01 | Invention patent application deemed withdrawn after publication |