CN104200485A - Video-monitoring-oriented human body tracking method - Google Patents
- Publication number
- CN104200485A (application CN201410328405.8A)
- Authority
- CN
- China
- Prior art keywords
- target
- human body
- area
- kalman filter
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a video-monitoring-oriented human body tracking method. To address the high real-time requirements and scene complexity of video monitoring, a kernel-function-based tracking method that adapts to human body shape changes and combines mean shift with Kalman filtering is adopted. The method comprises the following steps: acquiring the tracking target through background differencing; establishing a human body tracking template and a color histogram, and initializing the state of a Kalman filter; within the range predicted by the Kalman filter for the moving target, acquiring the position of the target in the next video frame using a kernel-function-based mean shift method; calculating the size of the target region through the projection map of the target color histogram; and performing Kalman filtering correction to obtain the final position.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a human body tracking method oriented to video monitoring.
Background technology
With the continuous development and progress of society, the public's requirements for the security of personal property are ever higher, and video monitoring is increasingly favored because it is intuitive, convenient, and not limited by distance or time. Detection, recognition and tracking of moving objects in video monitoring have long been popular research directions in intelligent video surveillance. Common tracking methods for video sequences include contour-based snake algorithms, particle filter algorithms based on motion models, and the mean shift (meanshift) algorithm based on color probability. Because the mean shift algorithm is computationally simple and runs in real time, it is well suited to real-time video monitoring.
Cheng first applied mean shift to image processing. Comaniciu et al. proposed the overall framework and implementation steps of mean shift tracking, and proposed weighting pixels by their distance from the center with a kernel function so that the color features of the target are better represented. Bradski proposed the camshift method, which tracks using the hue component of the HSV color space.
The kernel window width of the mean shift algorithm is fixed, so when the target undergoes scale changes the tracking becomes inaccurate; this is especially obvious while the target moves away from or toward the camera.
The camshift algorithm, built on mean shift, adapts the target size automatically. It improves the robustness of mean shift and tracks well against a simple background, but under a complex background it may still lose the target.
Both algorithms take mean shift as their basic tracking principle, which means the speed of the moving object must not be too large; if the object moves too fast, the algorithm cannot accurately locate the target within the prescribed number of drift iterations.
The human body in video monitoring is a non-rigid moving object that deforms considerably during motion, and because of its irregular shape, background pixels are easily included when the target is extracted; this background interference gradually grows during subsequent tracking. Using either algorithm alone therefore does not give ideal results.
Summary of the invention
To overcome the above shortcomings of the prior art, the present invention provides a video-monitoring-oriented human body tracking method that locates the target accurately and with high robustness.
Having studied the color-based mean shift and camshift algorithms, the present invention combines the advantages of both: a kernel-based mean shift function determines the position of the human body, the projection map of the H component in HSV space is used to calculate the size of the human body, and Kalman filtering is further incorporated to improve the robustness of the tracking algorithm.
Because the shape of the human body is irregular, edge background is easily brought into the color histogram of the human body, and the background error grows as tracking proceeds. A kernel function is therefore used to assign different weights to pixels at different distances from the tracking center, so that the target template concentrates on the center; this reduces the interference of edge background when the body template is built and makes tracking more focused.
In the kernel-based mean shift method the kernel bandwidth is fixed, so the size of the tracked region cannot change as the target deforms, and the tracking frame cannot adapt well to a person moving toward or away from the camera. The projection map of the H component of the image in HSV space is therefore used to calculate the length and width of the tracked object, so that the tracked size adapts as the human body deforms; a template-update strategy is also established so that the kernel bandwidth changes with the size of the target.
Because the position and the size of the human body are not computed in the same color space, the computational load increases somewhat. Kalman prediction is therefore adopted: the human body target is searched only in the neighborhood of the position predicted by the Kalman filter, which avoids computation over regions containing no person and at the same time increases robustness against interference from similar colors.
The present invention mainly comprises the following content:
1. A human body tracking method oriented to video monitoring, comprising the following steps:
1) Background modeling, target detection and extraction. The scene is first modeled: a background picture of the scene is extracted from the video monitoring image, and for a static scene a grayscale background model pFimage is built. After modeling, moving-object detection starts on the video; the current detection frame is pFrame. pFrame is first smoothed by Gaussian filtering to reduce noise, and the difference between the current frame and the background yields the moving target to be tracked. Morphological filtering of the detected target removes noise, and filling holes makes the contour of the object closer to the real human body. To cope with changes of indoor light and the entry of some non-human objects, a background-update strategy with update rate β is adopted, where the update formula is
pFimage = pFimage + β·pFrame
For the detected objects to be tracked, non-human moving targets are excluded by their area, yielding the human body target that needs to be tracked;
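By way of illustration, step 1) could be realized with OpenCV roughly as follows. This is a minimal sketch: the threshold value, structuring-element size and minimum area are assumed values not specified above, and the function name is only a placeholder (OpenCV 4 return signature assumed).

```python
import cv2

def extract_human_target(pFrame_gray, pFimage, diff_thresh=30, min_area=1500):
    # Smooth the current frame with a Gaussian filter to reduce noise
    smoothed = cv2.GaussianBlur(pFrame_gray, (5, 5), 0)
    # Background difference between the current frame and the background model
    diff = cv2.absdiff(smoothed, pFimage)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Morphological filtering: remove noise and fill holes in the silhouette
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Exclude non-human moving targets by area; return the remaining boxes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return boxes, mask
```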
2) Kalman filter initialization and prediction
If the target first appears in frame F_k, its position in the image is denoted p_k(x_k, y_k). The Kalman filter is initialized from the detected target position, and the state equation of the Kalman filter is used to predict the position (x_p, y_p) of the target in the next frame. For frame F_k the predicted position (x_p, y_p) is taken as the actual position of the human body, and the region near the predicted point is set as the prediction range;
3) Initialization of human body target tracking
Frame F_k is converted from RGB space to HSV color space, the color distribution histogram hist of the H component of the target body obtained by target detection is computed, and the back-projection map bp(x_ij) of the prediction range is calculated from hist;
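A minimal sketch of this initialization using OpenCV's H-channel histogram and back projection follows; the function names and the 16-bin quantization of the single H channel are assumptions for illustration (the text only fixes the 16×16×16 RGB quantization of the template in claim 4).

```python
import cv2

def init_target_histogram(frame_bgr, target_box):
    x, y, w, h = target_box
    roi = frame_bgr[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Histogram of the H component over the detected target body region
    hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def back_projection(frame_bgr, hist):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Pixels similar to the target colour get large values, others stay near 0
    return cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
```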
4) Establishment and update of the template
If the target appears for the first time, a feature template model of the human body is established in RGB color space, and the number of tracked frames is counted. If the counted frame number is a multiple of the update coefficient, the template is considered to have changed: the body template model is recalculated from the currently tracked target area, and the new template is used as the search template;
5) Search of the prediction range
To reduce computation and improve robustness, in the frame following the one in which the target appears, the search is carried out near the predicted position to which the target has moved. The search method is: at the previous frame's target position (x_k, y_k), a candidate target region of the same size as the target is chosen; centered on the previous frame's target position, the candidate target region model model1 is established, and the position p_n of maximum color probability density of the candidate target region is obtained;
6) Determination of the final target search position
Whether the estimate is optimal is judged by whether the displacement |p_{n+1} − p_n| between the color-probability maxima of two successive iterations, p_{n+1}(x_{n+1}, y_{n+1}) and p_n(x_n, y_n), is smaller than a threshold. If it is smaller than the threshold, the new target position p_{n+1}(x_{n+1}, y_{n+1}) is taken as the final target search position. If it is larger than the threshold, a new candidate target region of the template size is re-established with the new p_{n+1}(x_{n+1}, y_{n+1}) as its core, the model1 of the new candidate target region is calculated, and the process returns to 5) to recompute the new target core position p_{n+2}(x_{n+2}, y_{n+2}); this repeats until the displacement is smaller than the threshold or the maximum number of iterations is reached, and the finally obtained target position p_{n+m}(x_{n+m}, y_{n+m}) is returned;
7) Adaptive calculation of the human body region
From the target projection map bp(x_ij) obtained in 3), the zeroth-order, first-order and second-order moments of the target location are calculated, and from them the length l_1 and width l_2 of the region and the tilt angle are computed;
8) Correction and update of the Kalman filter
When the residual between the target position obtained by the search in 5) and the position predicted in 2) exceeds a certain threshold, the position predicted by the Kalman filter is selected as the final position of the target; otherwise the position corrected by the Kalman filter is used as the final position. The update of the Kalman filter state includes the covariance and the current state value of the target, so that the Kalman filter can continue to predict the position of the human body target in the next frame.
2. In step 2) the Kalman filter is initialized from the detected target position. The state equation of the Kalman filter is
X(k) = A·X(k−1) + W(k)
where X(k) is the current state matrix, X(k−1) is the state matrix of the previous moment, W(k) is the system noise, whose distribution is Gaussian, and A is the transition matrix of the system. X(k) is a 4-dimensional vector, X(k) = (x_k, y_k, V_kx, V_ky), where x_k, y_k are the horizontal and vertical coordinates of the initial position of the moving object and V_kx, V_ky are its horizontal and vertical velocities. In the initial state, x_k, y_k are the position of the target and the initial velocities V_kx, V_ky are 0. The purpose of the Kalman prediction is that, when the human body target is searched for and the projection map of the prediction range is computed, the search is carried out near the Kalman-predicted point; this reduces the search range, reduces computation and enhances the real-time performance of target tracking.
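The concrete transition matrix A is not reproduced in this text; for the 4-dimensional state X(k) = (x_k, y_k, V_kx, V_ky)^T described above, a standard constant-velocity form (with frame interval Δt, taken as 1 when working frame to frame) would be the following hedged reconstruction:

```latex
X(k) = A\,X(k-1) + W(k), \qquad
A =
\begin{pmatrix}
1 & 0 & \Delta t & 0\\
0 & 1 & 0 & \Delta t\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}, \qquad
W(k) \sim \mathcal{N}(0, Q)
```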
3. In step 3), for the target body region obtained by target detection, the probability histogram hist of the H component of the target color in HSV space is computed. In the bin statistic, p_ij denotes the value of the pixel at (i, j), u denotes the u-th histogram bin, l is the quantization level that is set, and m, n are the numbers of pixels of the target body region in the horizontal and vertical directions.
The back-projection map bp(x_ij) of the prediction range is then computed from hist, where p_ij again denotes the pixel value at (i, j), b(p_ij) is the histogram bin u to which p_ij corresponds, b_u is the value of the u-th bin, m, n are the numbers of pixels of the prediction range in the horizontal and vertical directions, and δ(x) is the Kronecker delta function. In the resulting projection map, pixels whose color belongs to the target color take large values, and pixels that do not belong to the target color take the value 0.
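The formulas themselves appear only as images in the original publication and are not reproduced here. A plausible reconstruction consistent with the variable definitions above is given below; the normalization factor and the quantization rule b(p_ij) = ⌊p_ij / l⌋ are assumptions based on the description of l as a quantization level.

```latex
\mathrm{hist}(u) = \frac{1}{m\,n}\sum_{i=1}^{m}\sum_{j=1}^{n}
  \delta\!\left[\,b(p_{ij}) - u\,\right], \qquad
b(p_{ij}) = \left\lfloor \frac{p_{ij}}{l} \right\rfloor, \qquad
bp(x_{ij}) = \mathrm{hist}\!\left(b(p_{ij})\right) = b_{u}, \qquad
\delta(x) =
\begin{cases}
1, & x = 0\\
0, & x \neq 0
\end{cases}
```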
4. The video-monitoring-oriented human body tracking method according to claim 1, characterized in that: the human body feature template established in step 4) uses the target body obtained by the first target detection. At the target position p_k(x_k, y_k) a model of the human body, model, is established by kernel density estimation; model represents the estimated probability density of the u-th feature value in the target template. A three-channel RGB model of the human body is established: model is a three-dimensional matrix used to represent the color features of the target, with the target color features quantized as 16×16×16. In the target-model expression, b(x_i) is the bin index of the u-th feature of the pixel at x_i, u takes values in {1...m}, δ is the Kronecker function, h is the kernel bandwidth, n is the height of the target area, and C is the normalization coefficient. K(x) is the Epanechnikov kernel, in whose expression c_d is the volume of the unit sphere and d is the dimension of the space;
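The target-model and kernel expressions are likewise not printed in this text. The standard kernel-density target model of Comaniciu et al., which matches the variables named above, reads as follows (a hedged reconstruction, not the patent's exact typography):

```latex
\mathrm{model}(u) = C \sum_{i=1}^{n}
  K\!\left( \left\lVert \frac{x_i - p_k}{h} \right\rVert^{2} \right)
  \delta\!\left[\,b(x_i) - u\,\right], \qquad
C = \left( \sum_{i=1}^{n} K\!\left( \left\lVert \frac{x_i - p_k}{h} \right\rVert^{2} \right) \right)^{-1}, \qquad
K(x) =
\begin{cases}
\dfrac{d+2}{2\,c_d}\,(1 - x), & x \leq 1\\[6pt]
0, & \text{otherwise}
\end{cases}
```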
5. The video-monitoring-oriented human body tracking method according to claim 1, characterized in that: the core calculation of the candidate target region in step 5) follows the mean shift principle, establishing the weight matrix ω_i of the kernel-based target model, where model is the target model of claim 4, model1 is the model of the candidate target region, built in the same way as model, and m is the total number of points of the target area. When the target position in the previous frame is p_k(x_k, y_k), the candidate target region chosen in the current frame takes the previous frame's target position as its core, and the candidate-region model model1 is established with the same form of expression: b(x_i) is the bin index of the u-th feature of the pixel at x_i, u takes values in {1...m}, δ is the Kronecker function, h is the kernel bandwidth, n is the height of the candidate target region, and C is the normalization coefficient, with the same expression as in model.
The core p_n of the candidate target region in the current frame is then computed, where g(x) is the derivative of the kernel function K(x), K(x) is the Epanechnikov kernel, n is the height of the candidate target region, and h is the kernel bandwidth, i.e. the width of the candidate target region.
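In the standard mean-shift formulation that matches the quantities named above, the weights and the updated core position take the following form (a reconstruction; the convention g(x) = −K′(x) is the usual one and is assumed here):

```latex
\omega_i = \sum_{u=1}^{m}
  \sqrt{\frac{\mathrm{model}(u)}{\mathrm{model1}(u)}}\,
  \delta\!\left[\,b(x_i) - u\,\right], \qquad
p_{n+1} =
\frac{\displaystyle\sum_{i=1}^{n} x_i\,\omega_i\,
      g\!\left(\left\lVert \frac{p_n - x_i}{h} \right\rVert^{2}\right)}
     {\displaystyle\sum_{i=1}^{n} \omega_i\,
      g\!\left(\left\lVert \frac{p_n - x_i}{h} \right\rVert^{2}\right)},
\qquad g(x) = -K'(x)
```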
6. In step 7), the zeroth-order, first-order and second-order moments of the target area are computed from the projection map bp(x_ij), and from them the size and tilt angle of the target are calculated: M_00 is the zeroth-order moment, M_10 and M_01 are the first-order moments, and M_11, M_20 and M_02 are the second-order moments; l_1 and l_2 denote the length and width of the human body target region, θ denotes the tilt angle of the target, and w_i denotes the value of the projection map bp(x_ij) at the point (x, y).
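The moment formulas and the resulting size and orientation correspond to the standard CAMShift computation; with w(x, y) the back-projection value at (x, y), a reconstruction consistent with the quantities named above (the originals are images) is:

```latex
M_{00} = \sum_{x}\sum_{y} w(x,y),\quad
M_{10} = \sum_{x}\sum_{y} x\,w(x,y),\quad
M_{01} = \sum_{x}\sum_{y} y\,w(x,y),
M_{11} = \sum_{x}\sum_{y} x\,y\,w(x,y),\quad
M_{20} = \sum_{x}\sum_{y} x^{2}\,w(x,y),\quad
M_{02} = \sum_{x}\sum_{y} y^{2}\,w(x,y)

x_c = \frac{M_{10}}{M_{00}},\quad y_c = \frac{M_{01}}{M_{00}},\qquad
a = \frac{M_{20}}{M_{00}} - x_c^{2},\quad
b = 2\!\left(\frac{M_{11}}{M_{00}} - x_c y_c\right),\quad
c = \frac{M_{02}}{M_{00}} - y_c^{2}

l_{1,2} = \sqrt{\frac{(a+c) \pm \sqrt{b^{2} + (a-c)^{2}}}{2}},\qquad
\theta = \frac{1}{2}\arctan\!\left(\frac{b}{a-c}\right)
```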
7. In step 8) the update of the Kalman filter includes the covariance and the current state value of the target, where the observation Z(k) of the Kalman equations is updated with the position of the human body found in step 5) of claim 1.
The optimal estimate of the current state is made as follows:
X(k|k) = X(k|k−1) + Kg(k)·(Z(k) − H·X(k|k−1))
where Kg(k) is the Kalman gain of the current state, X(k|k) is the optimal estimate of the current state, X(k|k−1) is the estimate of moment k made at moment k−1, and H is the measurement matrix given at initialization,
Kg(k) = P(k|k−1)·H^T / (H·P(k|k−1)·H^T + R)
where P(k|k−1) denotes the covariance of the estimate of moment k made at moment k−1 and is updated as
P(k|k−1) = A·P(k−1|k−1)·A^T + Q
where A is the transition matrix of the state equation in claim 2. Finally the covariance of X(k|k) in state k is updated:
P(k|k) = (I − Kg(k)·H)·P(k|k−1)
In the above formulas Q and R denote the covariances of the process noise and the measurement noise of the system, and are set to the constants 1e-5 and 1e-1 respectively.
The correction of the Kalman filter works as follows: when the residual between the target position (x(k), y(k)) obtained by the search algorithm in the current frame and the position (x(k−1), y(k−1)) predicted by the Kalman filter in the previous frame exceeds a threshold, the position predicted by the Kalman filter is selected as the final position of the target, where the residual is defined as the distance between these two positions.
If the threshold is not exceeded, the position corrected by the Kalman filter is adopted as the final position.
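The prediction/correction cycle of steps 2) and 8) could be sketched with OpenCV's KalmanFilter as follows; the constant-velocity transition matrix follows the reconstruction above, Q and R use the constants given in the text, and the residual threshold value and function names are illustrative assumptions.

```python
import cv2
import numpy as np

def make_kalman(x0, y0):
    # 4-dimensional state (x, y, Vx, Vy), 2-dimensional measurement (x, y)
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-5      # Q, constant from step 8)
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # R, constant from step 8)
    kf.statePost = np.array([[x0], [y0], [0], [0]], dtype=np.float32)  # initial velocities 0
    return kf

def kalman_track_step(kf, measured_xy, residual_thresh=20.0):
    predicted = kf.predict()
    z = np.array([[measured_xy[0]], [measured_xy[1]]], dtype=np.float32)
    residual = float(np.linalg.norm(z - predicted[:2]))
    if residual > residual_thresh:
        # Search result is far from the prediction: keep the predicted position
        return predicted[0, 0], predicted[1, 0]
    corrected = kf.correct(z)
    return corrected[0, 0], corrected[1, 0]
```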
The advantages of the present invention are: 1. the change in size of the tracked human body is calculated adaptively; 2. when searching for the target to be tracked, the tracking template adaptively adjusts the kernel bandwidth according to the number of tracked frames; 3. the predictive property of the Kalman filter is used to predict where the target will appear, which reduces computation over unnecessary regions, avoids interference from other objects in the monitored area, and improves the real-time performance and robustness of target tracking.
Brief description of the drawings
Fig. 1 is a binary map of target extraction according to the present invention
Fig. 2 is a schematic diagram of kernel functions of the present invention in high-dimensional space
Fig. 2a is a schematic diagram of the mean kernel function of the present invention
Fig. 2b is a schematic diagram of the Gaussian kernel function of the present invention
Fig. 3 is a flow chart of the adaptive tracking method of the present invention
Fig. 4 is the overall flow chart of the present invention
Embodiment
The concrete implementation process of the present invention is described in detail below. The parameter settings below are the optimal values obtained by testing in this experimental environment; because a color tracking algorithm depends to some degree on lighting and the surrounding environment, the parameters can be further tuned in other environments, and the example below is only one of many experiments. The present invention has been verified in multiple cases, which proves its validity and practicality.
The test environment is indoor. For human body tracking a single person walking normally in a laboratory was selected; the camera is a fixed monocular camera, the person walks toward and away from the camera, and a certain amount of human body deformation occurs.
1. A human body tracking method oriented to video monitoring, comprising the following steps:
1) Background modeling, target detection and extraction. The scene is first modeled: a background picture of the scene is extracted from the video monitoring image, and for a static scene a grayscale background model pFimage is built. After modeling, moving-object detection starts on the video; the current detection frame is pFrame. pFrame is first smoothed by Gaussian filtering to reduce noise, and the difference between the current frame and the background yields the moving target to be tracked. Morphological filtering of the detected target removes noise, and filling holes makes the contour of the object closer to the real human body. To cope with changes of indoor light and the entry of some non-human objects, a background-update strategy with update rate β is adopted, where the update formula is
pFimage = pFimage + β·pFrame
For the detected objects to be tracked, non-human moving targets are excluded by their area, yielding the human body target that needs to be tracked; in the actual implementation the update rate β is 0.005;
2) Kalman filter initialization and prediction
If the target first appears in frame F_k, its position in the image is denoted p_k(x_k, y_k). The Kalman filter is initialized from the detected target position, and the state equation of the Kalman filter is used to predict the position (x_p, y_p) of the target in the next frame. For frame F_k the predicted position (x_p, y_p) is taken as the actual position of the human body, and the region near the predicted point is set as the prediction range;
3) Initialization of human body target tracking
Frame F_k is converted from RGB space to HSV color space, the color distribution histogram hist of the H component of the target body obtained by target detection is computed, and the back-projection map bp(x_ij) of the prediction range is calculated from hist;
4) Establishment and update of the template
If the target appears for the first time, a feature template model of the human body is established in RGB color space, and the number of tracked frames is counted. If the counted frame number is a multiple of the update coefficient, the template is considered to have changed: the body template model is recalculated from the currently tracked target area, and the new template is used as the search template; in the actual implementation the update coefficient is set to 4;
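A small sketch of this update rule follows, using the update coefficient of 4 given above. The `build_rgb_template` function here is only a simplified stand-in for the kernel-weighted RGB model of claim 4 (a plain 16×16×16 colour histogram, without kernel weighting), and both function names are assumptions.

```python
import cv2

def build_rgb_template(frame_bgr, box, bins=16):
    # Simplified stand-in for the kernel-weighted RGB model of claim 4:
    # a 16x16x16 colour histogram of the tracked region (no kernel weighting)
    x, y, w, h = box
    roi = frame_bgr[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)

def maybe_update_template(template, frame_bgr, box, tracked_frames, update_coeff=4):
    # Rebuild the template whenever the number of tracked frames is a
    # multiple of the update coefficient (set to 4 in this embodiment)
    if tracked_frames % update_coeff == 0:
        return build_rgb_template(frame_bgr, box)
    return template
```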
5) Search of the prediction range
To reduce computation and improve robustness, in the frame following the one in which the target appears, the search is carried out near the predicted position to which the target has moved. The search method is: at the previous frame's target position (x_k, y_k), a candidate target region of the same size as the target is chosen; centered on the previous frame's target position, the candidate target region model model1 is established, and the position p_n of maximum color probability density of the candidate target region is obtained;
6) Determination of the final target search position
Whether the estimate is optimal is judged by whether the displacement |p_{n+1} − p_n| between the color-probability maxima of two successive iterations, p_{n+1}(x_{n+1}, y_{n+1}) and p_n(x_n, y_n), is smaller than a threshold. If it is smaller than the threshold, the new target position p_{n+1}(x_{n+1}, y_{n+1}) is taken as the final target search position. If it is larger than the threshold, a new candidate target region of the template size is re-established with the new p_{n+1}(x_{n+1}, y_{n+1}) as its core, the model1 of the new candidate target region is calculated, and the process returns to 5) to recompute the new target core position p_{n+2}(x_{n+2}, y_{n+2}); this repeats until the displacement is smaller than the threshold or the maximum number of iterations is reached, and the finally obtained target position p_{n+m}(x_{n+m}, y_{n+m}) is returned. In the actual implementation the threshold is set to 3 and the number of iterations to 20;
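The iteration of steps 5) and 6), with the embodiment's convergence threshold of 3 pixels and the maximum of 20 iterations, could be sketched as follows. OpenCV's built-in meanShift over the back-projection map is used here only as a stand-in for the kernel-weighted search of claim 5; the function name is an assumption.

```python
import cv2

def search_prediction_range(back_proj, predicted_box):
    # Stand-in for the kernel-weighted mean-shift search of claim 5:
    # shift the window over the H-component back projection, stopping after
    # at most 20 iterations or once the window shift becomes small (threshold 3),
    # matching the values stated in this embodiment.
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 3)
    n_iterations, found_box = cv2.meanShift(back_proj, predicted_box, criteria)
    return found_box
```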
7) Adaptive calculation of the human body region
From the target projection map bp(x_ij) obtained in 3), the zeroth-order, first-order and second-order moments of the target location are calculated, and from them the length l_1 and width l_2 of the region and the tilt angle are computed;
8) Correction and update of the Kalman filter
When the residual between the target position obtained by the search in 5) and the position predicted in 2) exceeds a certain threshold, the position predicted by the Kalman filter is selected as the final position of the target; otherwise the position corrected by the Kalman filter is used as the final position. The update of the Kalman filter state includes the covariance and the current state value of the target, so that the Kalman filter can continue to predict the position of the human body target in the next frame.
2. The video-monitoring-oriented human body tracking method according to claim 1, characterized in that: in step 2) the Kalman filter is initialized from the detected target position. The state equation of the Kalman filter is
X(k) = A·X(k−1) + W(k)
where X(k) is the current state matrix, X(k−1) is the state matrix of the previous moment, W(k) is the system noise, whose distribution is Gaussian, and A is the transition matrix of the system. X(k) is a 4-dimensional vector, X(k) = (x_k, y_k, V_kx, V_ky), where x_k, y_k are the horizontal and vertical coordinates of the initial position of the moving object and V_kx, V_ky are its horizontal and vertical velocities. In the initial state, x_k, y_k are the position of the target and the initial velocities V_kx, V_ky are 0. The purpose of the Kalman prediction is that, when the human body target is searched for and the projection map of the prediction range is computed, the search is carried out near the Kalman-predicted point; this reduces the search range, reduces computation and enhances the real-time performance of target tracking.
3. In step 3), for the target body region obtained by target detection, the probability histogram hist of the H component of the target color in HSV space is computed. In the bin statistic, p_ij denotes the value of the pixel at (i, j), u denotes the u-th histogram bin, l is the quantization level that is set, and m, n are the numbers of pixels of the target body region in the horizontal and vertical directions; in the specific implementation l is set to 50;
The back-projection map bp(x_ij) of the prediction range is then computed from hist, where p_ij again denotes the pixel value at (i, j), b(p_ij) is the histogram bin u to which p_ij corresponds, b_u is the value of the u-th bin, m, n are the numbers of pixels of the prediction range in the horizontal and vertical directions, and δ(x) is the Kronecker delta function. In the resulting projection map, pixels whose color belongs to the target color take large values, and pixels that do not belong to the target color take the value 0.
4. The video-monitoring-oriented human body tracking method according to claim 1, characterized in that: the human body feature template established in step 4) uses the target body obtained by the first target detection. At the target position p_k(x_k, y_k) a model of the human body, model, is established by kernel density estimation; model represents the estimated probability density of the u-th feature value in the target template. A three-channel RGB model of the human body is established: model is a three-dimensional matrix used to represent the color features of the target, with the target color features quantized as 16×16×16. In the target-model expression, b(x_i) is the bin index of the u-th feature of the pixel at x_i, u takes values in {1...m}, δ is the Kronecker function, h is the kernel bandwidth, n is the height of the target area, and C is the normalization coefficient. K(x) is the Epanechnikov kernel, in whose expression c_d is the volume of the unit sphere and d is the dimension of the space;
5. The video-monitoring-oriented human body tracking method according to claim 1, characterized in that: the core calculation of the candidate target region in step 5) follows the mean shift principle, establishing the weight matrix ω_i of the kernel-based target model, where model is the target model of claim 4, model1 is the model of the candidate target region, built in the same way as model, and m is the total number of points of the target area. When the target position in the previous frame is p_k(x_k, y_k), the candidate target region chosen in the current frame takes the previous frame's target position as its core, and the candidate-region model model1 is established with the same form of expression: b(x_i) is the bin index of the u-th feature of the pixel at x_i, u takes values in {1...m}, δ is the Kronecker function, h is the kernel bandwidth, n is the height of the candidate target region, and C is the normalization coefficient, with the same expression as in model.
The core p_n of the candidate target region in the current frame is then computed, where g(x) is the derivative of the kernel function K(x), K(x) is the Epanechnikov kernel, n is the height of the candidate target region, and h is the kernel bandwidth, i.e. the width of the candidate target region.
6. The video-monitoring-oriented human body tracking method according to claim 1, characterized in that: in step 7) the zeroth-order, first-order and second-order moments of the target area are computed from the projection map bp(x_ij), and from them the size and tilt angle of the target are calculated: M_00 is the zeroth-order moment, M_10 and M_01 are the first-order moments, and M_11, M_20 and M_02 are the second-order moments; l_1 and l_2 denote the length and width of the human body target region, θ denotes the tilt angle of the target, and w_i denotes the value of the projection map bp(x_ij) at the point (x, y).
7. The video-monitoring-oriented human body tracking method according to claim 1, characterized in that: in step 8) the update of the Kalman filter includes the covariance and the current state value of the target, where the observation Z(k) of the Kalman equations is updated with the position of the human body found in step 5) of claim 1.
The optimal estimate of the current state is made as follows:
X(k|k) = X(k|k−1) + Kg(k)·(Z(k) − H·X(k|k−1))
where Kg(k) is the Kalman gain of the current state, X(k|k) is the optimal estimate of the current state, X(k|k−1) is the estimate of moment k made at moment k−1, and H is the measurement matrix given at initialization,
Kg(k) = P(k|k−1)·H^T / (H·P(k|k−1)·H^T + R)
where P(k|k−1) denotes the covariance of the estimate of moment k made at moment k−1 and is updated as
P(k|k−1) = A·P(k−1|k−1)·A^T + Q
where A is the transition matrix of the state equation in claim 2. Finally the covariance of X(k|k) in state k is updated:
P(k|k) = (I − Kg(k)·H)·P(k|k−1)
In the above formulas Q and R denote the covariances of the process noise and the measurement noise of the system, and are set to the constants 1e-5 and 1e-1 respectively.
The correction of the Kalman filter works as follows: when the residual between the target position (x(k), y(k)) obtained by the search algorithm in the current frame and the position (x(k−1), y(k−1)) predicted by the Kalman filter in the previous frame exceeds a threshold, the position predicted by the Kalman filter is selected as the final position of the target, where the residual is defined as the distance between these two positions.
If the threshold is not exceeded, the position corrected by the Kalman filter is adopted as the final position.
The content described in the embodiments of this specification is only an enumeration of forms in which the inventive concept may be realized; the protection scope of the present invention should not be regarded as limited to the concrete forms stated in the embodiments, and it also covers equivalent technical means that those skilled in the art can conceive from the inventive concept.
Claims (7)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410328405.8A CN104200485B (en) | 2014-07-10 | 2014-07-10 | Video-monitoring-oriented human body tracking method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410328405.8A CN104200485B (en) | 2014-07-10 | 2014-07-10 | Video-monitoring-oriented human body tracking method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104200485A true CN104200485A (en) | 2014-12-10 |
| CN104200485B CN104200485B (en) | 2017-05-17 |
Family
ID=52085771
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410328405.8A Active CN104200485B (en) | 2014-07-10 | 2014-07-10 | Video-monitoring-oriented human body tracking method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104200485B (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060126895A1 (en) * | 2004-12-09 | 2006-06-15 | Sung-Eun Kim | Marker-free motion capture apparatus and method for correcting tracking error |
| CN101320477A (en) * | 2008-07-10 | 2008-12-10 | 北京中星微电子有限公司 | Human body tracing method and equipment thereof |
| CN102110296A (en) * | 2011-02-24 | 2011-06-29 | 上海大学 | Method for tracking moving target in complex scene |
| US20130188827A1 (en) * | 2012-01-19 | 2013-07-25 | Electronics And Telecommunications Research Institute | Human tracking method and apparatus using color histogram |
Non-Patent Citations (3)
| Title |
|---|
| S. SARAVANAKUMAR et al.: "Multiple human object tracking using background subtraction and shadow removal techniques", INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING * |
| LIU Haiyan et al.: "Moving target tracking algorithm based on gradient features and color features", JOURNAL OF COMPUTER APPLICATIONS * |
| LUO Yuan et al.: "Video gesture tracking algorithm based on combined CamShift and Kalman filtering", APPLICATION RESEARCH OF COMPUTERS * |
Cited By (30)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2016106595A1 (en) * | 2014-12-30 | 2016-07-07 | Nokia Technologies Oy | Moving object detection in videos |
| CN104598900A (en) * | 2015-02-26 | 2015-05-06 | 张耀 | Human body recognition method and device |
| CN104933542A (en) * | 2015-06-12 | 2015-09-23 | 临沂大学 | Logistics storage monitoring method based computer vision |
| CN104933542B (en) * | 2015-06-12 | 2018-12-25 | 临沂大学 | A kind of logistic storage monitoring method based on computer vision |
| CN104992451A (en) * | 2015-06-25 | 2015-10-21 | 河海大学 | Improved target tracking method |
| CN105139424A (en) * | 2015-08-25 | 2015-12-09 | 四川九洲电器集团有限责任公司 | Target tracking method based on signal filtering |
| CN105139424B (en) * | 2015-08-25 | 2019-01-18 | 四川九洲电器集团有限责任公司 | Method for tracking target based on signal filtering |
| CN107705321A (en) * | 2016-08-05 | 2018-02-16 | 南京理工大学 | Moving object detection and tracking method based on embedded system |
| CN108288281A (en) * | 2017-01-09 | 2018-07-17 | 翔升(上海)电子技术有限公司 | Visual tracking method, vision tracks of device, unmanned plane and terminal device |
| CN106920249A (en) * | 2017-02-27 | 2017-07-04 | 西北工业大学 | The fast track method of space maneuver target |
| CN106778712A (en) * | 2017-03-01 | 2017-05-31 | 扬州大学 | A kind of multi-target detection and tracking method |
| CN106778712B (en) * | 2017-03-01 | 2020-04-14 | 扬州大学 | A multi-target detection and tracking method |
| CN107767392A (en) * | 2017-10-20 | 2018-03-06 | 西南交通大学 | A Ball Track Tracking Method Adapting to Occlusion Scenes |
| CN108133491A (en) * | 2017-12-29 | 2018-06-08 | 重庆锐纳达自动化技术有限公司 | A kind of method for realizing dynamic target tracking |
| CN108469729A (en) * | 2018-01-24 | 2018-08-31 | 浙江工业大学 | A kind of human body target identification and follower method based on RGB-D information |
| CN108469729B (en) * | 2018-01-24 | 2020-11-27 | 浙江工业大学 | A Human Target Recognition and Following Method Based on RGB-D Information |
| CN108762309A (en) * | 2018-05-03 | 2018-11-06 | 浙江工业大学 | Human body target following method based on hypothesis Kalman filtering |
| CN108762309B (en) * | 2018-05-03 | 2021-05-18 | 浙江工业大学 | Human body target following method based on hypothesis Kalman filtering |
| CN110020621A (en) * | 2019-04-01 | 2019-07-16 | 浙江工业大学 | A kind of moving Object Detection method |
| CN110264498A (en) * | 2019-06-26 | 2019-09-20 | 北京深醒科技有限公司 | A kind of human body tracing method under video monitoring scene |
| CN110503665A (en) * | 2019-08-22 | 2019-11-26 | 湖南科技学院 | An Improved Camshift Target Tracking Algorithm |
| CN110545383A (en) * | 2019-09-16 | 2019-12-06 | 湖北公众信息产业有限责任公司 | Video integrated management platform system |
| CN110543881A (en) * | 2019-09-16 | 2019-12-06 | 湖北公众信息产业有限责任公司 | Video data management method based on cloud platform |
| CN111338275A (en) * | 2020-02-21 | 2020-06-26 | 江苏大量度电气科技有限公司 | Method and system for monitoring running state of electrical equipment |
| CN111915649A (en) * | 2020-07-27 | 2020-11-10 | 北京科技大学 | Strip steel moving target tracking method under shielding condition |
| CN111918034A (en) * | 2020-07-28 | 2020-11-10 | 上海电机学院 | Embedded unattended base station intelligent monitoring system |
| CN114674067A (en) * | 2020-12-25 | 2022-06-28 | 珠海拓芯科技有限公司 | A radar-based air conditioner control method, air conditioner, and computer-readable storage medium |
| CN117670940A (en) * | 2024-01-31 | 2024-03-08 | 中国科学院长春光学精密机械与物理研究所 | Single-stream satellite video target tracking method based on correlation peak distance analysis |
| CN117670940B (en) * | 2024-01-31 | 2024-04-26 | 中国科学院长春光学精密机械与物理研究所 | Single-stream satellite video target tracking method based on correlation peak value distance analysis |
| CN118688806A (en) * | 2024-08-28 | 2024-09-24 | 海底鹰深海科技股份有限公司 | Tracking method, tracking system and computing device of single beam sonar |
Also Published As
| Publication number | Publication date |
|---|---|
| CN104200485B (en) | 2017-05-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104200485A (en) | Video-monitoring-oriented human body tracking method | |
| Zhou et al. | Efficient road detection and tracking for unmanned aerial vehicle | |
| CN101739551B (en) | moving object identification method and system | |
| CN104318258B (en) | Time domain fuzzy and kalman filter-based lane detection method | |
| CN103530893B (en) | Based on the foreground detection method of background subtraction and movable information under camera shake scene | |
| EP2858008B1 (en) | Target detecting method and system | |
| CN103164858B (en) | Adhesion crowd based on super-pixel and graph model is split and tracking | |
| CN103310444B (en) | A kind of method of the monitoring people counting based on overhead camera head | |
| CN103077539A (en) | Moving object tracking method under complicated background and sheltering condition | |
| US8094884B2 (en) | Apparatus and method for detecting object | |
| CN107316321B (en) | Multi-feature fusion target tracking method and weight self-adaption method based on information entropy | |
| CN102214309B (en) | Special human body recognition method based on head and shoulder model | |
| CN103971386A (en) | Method for foreground detection in dynamic background scenario | |
| CN101916446A (en) | Gray Target Tracking Algorithm Based on Edge Information and Mean Shift | |
| CN103136537B (en) | Vehicle type identification method based on support vector machine | |
| CN109919053A (en) | A deep learning vehicle parking detection method based on surveillance video | |
| CN104835147A (en) | Method for detecting crowded people flow in real time based on three-dimensional depth map data | |
| CN101408983A (en) | Multi-object tracking method based on particle filtering and movable contour model | |
| CN106780560B (en) | A visual tracking method of bionic robotic fish based on feature fusion particle filter | |
| CN104200492B (en) | Video object automatic detection tracking of taking photo by plane based on profile constraints | |
| CN105488811A (en) | Depth gradient-based target tracking method and system | |
| CN106780564A (en) | A kind of anti-interference contour tracing method based on Model Prior | |
| CN112613565B (en) | Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating | |
| Li et al. | A local statistical fuzzy active contour model for change detection | |
| CN109064498A (en) | Method for tracking target based on Meanshift, Kalman filtering and images match |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |