CN105139425B - People counting method and device - Google Patents
- Publication number: CN105139425B
- Authority
- CN
- China
- Prior art keywords
- frame
- target
- head
- image
- shoulder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Abstract
Description
Technical Field
The present application relates to the technical field of image processing, and in particular to a people counting method and device.
Background
Real-time people counting systems are deployed in many public places (for example, shopping malls, supermarkets, and parks) so that managers can track passenger flow in real time and take diversion measures when necessary, preventing dangerous incidents such as stampedes caused by overcrowding.
Current people counting methods are mainly based on video detection: cameras installed at the entrances and exits of public places capture video, the captured images are analyzed, and the number of people in the venue is counted. For example, pedestrian detection may combine background modeling with feature-library matching, or head detection may use multiple head classifiers. However, these counting methods generally suffer from low accuracy and low processing efficiency.
Summary of the Invention
In view of this, the present application provides a people counting method and device.
Specifically, the present application is implemented through the following technical solutions.
The present application provides a people counting method, including:
extracting a moving foreground target by performing target segmentation on a detection area of a current frame image;
detecting head-shoulder feature boxes in the moving foreground target;
judging whether a head-shoulder feature box in the moving foreground target satisfies a people counting trigger condition; and
when the head-shoulder feature box satisfies the people counting trigger condition, counting people according to the head-shoulder feature box.
The present application further provides a people counting device, including:
an extraction unit, configured to extract a moving foreground target by performing target segmentation on a detection area of a current frame image;
a detection unit, configured to detect head-shoulder feature boxes in the moving foreground target;
a judging unit, configured to judge whether a head-shoulder feature box in the moving foreground target satisfies a people counting trigger condition; and
a counting unit, configured to count people according to the head-shoulder feature box when it satisfies the people counting trigger condition.
As can be seen from the above description, the present application shortens feature detection time, lowers the feature false-detection rate, and improves feature detection, thereby improving both the efficiency and the accuracy of people counting.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of an application scenario according to an exemplary embodiment of the present application;
Fig. 2 is a flowchart of a people counting method according to an exemplary embodiment of the present application;
Fig. 3 is a flowchart of moving foreground target extraction according to an exemplary embodiment of the present application;
Fig. 4 is a schematic diagram of the basic hardware structure of a device hosting a people counting device according to an exemplary embodiment of the present application;
Fig. 5 is a schematic structural diagram of a people counting device according to an exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of devices and methods consistent with some aspects of the application as recited in the appended claims.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "the", and "said" used in this application and the appended claims are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms; the terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Real-time people counting systems are deployed in many public places (for example, shopping malls, supermarkets, and parks) so that managers can track passenger flow in real time and take diversion measures when necessary, preventing dangerous incidents such as stampedes caused by overcrowding.
Current people counting methods are mainly based on video detection: cameras installed at the entrances and exits of public places capture video, the captured images are analyzed, and the number of people in the venue is counted.
In a first prior-art solution, a background modeling algorithm extracts the foreground, a trained feature library is then applied to the extracted foreground region for pedestrian detection, and finally the detected pedestrians are tracked and counted. However, this method misses detections when people carry umbrellas or heavily occlude one another, and without foreground target segmentation, pedestrian detection is time-consuming, making the counting inefficient.
In a second prior-art solution, multiple head classifiers are used for detection, achieving a high head detection rate. However, this method is also time-consuming: when applied to a camera for real-time detection, it cannot guarantee that every frame is processed, so frames may be dropped and counting accuracy decreases.
To address these problems, an embodiment of the present application proposes a people counting method that combines detected head-shoulder features with extracted moving foreground targets to reduce the probability of missed detections. Meanwhile, during moving foreground target extraction, target segmentation is used to reduce the computation required by subsequent head-shoulder feature detection and improve counting efficiency.
Referring to Fig. 1, a schematic diagram of a preferred application scenario of the present application: the camera is installed vertically or nearly vertically, with a downward viewing angle α ranging from 65 to 90 degrees. In this scenario, occlusion between people is rare, so the people counting method of the embodiments of the present application achieves higher accuracy and speed.
Referring to Fig. 2, a flowchart of an embodiment of the people counting method of the present application, which describes the counting process.
Step 201: extract a moving foreground target by performing target segmentation on the detection area of the current frame image.
The camera's field of view is large, and the entire frame need not be processed when extracting moving foreground targets. The camera's key monitoring area is usually the central region of the image. For example, when a camera monitors a subway gate, although its field of view is wide, the only region that matters for image processing is the image area containing the gate; once the camera's mounting position and angle are fixed, the position of this effective area in the image is also fixed. Therefore, the embodiments of the present application extract moving foreground targets only from the detection area of the current frame image (corresponding to the aforementioned effective area), narrowing the image processing range and improving extraction efficiency.
Referring to Fig. 3, a flowchart of moving foreground target extraction in the present application, described as follows.
Step 2011: acquire the foreground image of the detection area of the current frame image. For example, a Gaussian mixture model may be used for foreground extraction; by modeling the live monitoring scene with multiple Gaussians and updating the background model in real time, foreground extraction accuracy is improved.
Step 2012: obtain the foreground target box by post-processing the foreground image. The post-processing may include median filtering, dilation, and connected-region processing. The resulting foreground target box is the approximate extent of the moving foreground target within the detection area, further narrowing the detection range.
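The post-processing of step 2012 can be sketched without any image library: a small dilation pass followed by a connected-region scan that yields the foreground target boxes. This is a minimal illustration assuming a binary foreground mask as input; the helper names and the 3x3 structuring element are illustrative choices, not specified by the patent.

```python
import numpy as np

def dilate(mask, iters=1):
    """3x3 cross-shaped binary dilation implemented with array shifts."""
    m = mask.astype(bool)
    for _ in range(iters):
        out = m.copy()
        out[1:, :] |= m[:-1, :]
        out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]
        out[:, :-1] |= m[:, 1:]
        m = out
    return m

def connected_boxes(mask):
    """Bounding boxes (x0, y0, x1, y1) of 4-connected foreground regions."""
    m = mask.astype(bool)
    seen = np.zeros_like(m)
    boxes = []
    h, w = m.shape
    for sy in range(h):
        for sx in range(w):
            if m[sy, sx] and not seen[sy, sx]:
                # flood-fill one region, tracking its extent
                stack = [(sy, sx)]
                seen[sy, sx] = True
                x0 = x1 = sx
                y0 = y1 = sy
                while stack:
                    y, x = stack.pop()
                    x0, x1 = min(x0, x), max(x1, x)
                    y0, y1 = min(y0, y), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and m[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes
```

Dilation bridges small gaps left by noisy foreground extraction, so nearby fragments merge into one foreground target box.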
Step 2013: compute the frame-difference map between the detection areas of the current frame image and the previous frame image.
Step 2014: obtain the edge texture map of the frame-difference map, for example by Sobel filtering.
Step 2015: on the obtained frame-difference edge texture map, perform horizontal and vertical projection on the image region corresponding to the foreground target box.
Step 2016: obtain the horizontal and vertical projection histograms generated by the projection.
Step 2017: perform target segmentation on the frame-difference edge texture image within the foreground target box according to the horizontal and vertical projection histograms, obtaining moving foreground targets.
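Steps 2013 through 2016 can be sketched as follows, assuming grayscale frames held as NumPy arrays. The binarization threshold and the function names are illustrative assumptions; the patent only names Sobel processing and the two projections.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude of a 2-D array via the 3x3 Sobel kernels."""
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def projections(edge, box, thresh=32):
    """Horizontal and vertical projection histograms of the binarized edge
    map inside a foreground target box (x0, y0, x1, y1, inclusive)."""
    x0, y0, x1, y1 = box
    roi = edge[y0:y1 + 1, x0:x1 + 1] > thresh
    return roi.sum(axis=1), roi.sum(axis=0)  # per-row, per-column edge counts
```

Usage: `edge = sobel_edges(np.abs(cur - prev))` gives the frame-difference edge texture map of steps 2013-2014, and `projections(edge, box)` gives the two histograms of steps 2015-2016.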
First, the processing priorities of the horizontal and vertical projection histograms are computed. The computation is the same for both histograms:
u_x = Σ_{i=0}^{n} ω_i · x_i    Formula (1)
where u_x is the processing priority; x_i is the number of rows or columns whose projection value is i; ω_i is a weighting coefficient; and n is the preset projection threshold.
The projection threshold n may be a small value chosen from experimental data (for example, n = 5); a projection value less than or equal to this threshold indicates that the corresponding image region is background. The smaller the projection value, the more likely the corresponding region is background, so when setting the weighting coefficients ω_i, smaller projection values are given larger weights. In the embodiments of the present application, the processing priority reflects the amount of background in a projection histogram: the histogram with the higher priority corresponds to the larger background area, and vice versa.
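A minimal sketch of the priority computation, assuming formula (1) is the weighted count u = Σ ω_i · x_i over projection values up to the threshold n (an assumption reconstructed from the variable definitions, since the formula image did not survive extraction):

```python
def processing_priority(proj, weights):
    """Priority u = sum_i w_i * x_i, where x_i counts rows/columns whose
    projection value equals i, for i = 0 .. n with n = len(weights) - 1.
    Smaller projection values (more likely background) get larger weights."""
    u = 0
    for i, w in enumerate(weights):
        u += w * sum(1 for v in proj if v == i)
    return u
```

A histogram dominated by near-zero projection values (mostly background) receives a higher priority and is processed first.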
After the processing priorities of both histograms are computed, the histogram with the higher priority is first used to segment the frame-difference edge texture image within the current foreground target box, and the histogram with the lower priority is then used to segment the result. For example, if the horizontal projection histogram has the higher priority, segmentation is performed first according to the horizontal histogram and then according to the vertical histogram.
The target segmentation method is the same for projection histograms in either direction: the segmentation accumulated value of the currently selected histogram is computed according to the target segmentation algorithm, which may be:
f(ω_j) = ω_j if y_j ≤ n, f(ω_j) = −ω_j if y_j > n    Formula (2)
T_y = Σ_{j=m1}^{m2} f(ω_j)    Formula (3)
where ω_j is a weighting coefficient and a positive integer; n is the projection threshold; y_j is the projection value of the j-th row or column; f(ω_j) is the signed weighting coefficient; m_1 and m_2 are rows or columns with m_2 > m_1; and T_y is the segmentation accumulated value. As before, the projection threshold n may take a small value, so that image regions whose projection value is at most n are treated as background.
By traversing the rows of the projection histogram (if the currently selected histogram is the horizontal one) or its columns (if it is the vertical one), different m_1 and m_2 are chosen and the segmentation accumulated value T_y is computed for each.
When the segmentation accumulated value T_y for the interval [m_1, m_2] reaches or exceeds the preset segmentation threshold T, m_1 and m_2 are confirmed as a pair of target segmentation lines, and the image region between them is background. After all target segmentation lines of the current projection histogram are confirmed, the segmented background regions can be removed, reducing the computation of subsequent processing. This is why the embodiments of the present application segment in order of processing priority from high to low: processing first along the direction containing more background removes most of the background, and processing then along the direction containing less background removes the remainder, improving the overall efficiency of target segmentation.
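A simplified sketch of the background-span search: each row or column contributes +w when its projection value is at or below the threshold n (likely background) and breaks the run otherwise, and a run whose accumulated value reaches the segmentation threshold T is marked as a background span. Scanning greedy runs rather than all (m_1, m_2) pairs is a simplifying assumption, since the patent's exact search over formulas (2)-(3) is not fully recoverable from the extraction.

```python
def background_spans(proj, n=5, w=1, T=4):
    """Find background spans [m1, m2] of a projection histogram: maximal
    runs of values <= n whose accumulated weight reaches the threshold T."""
    spans = []
    m1, acc = None, 0
    for j, y in enumerate(proj):
        if y <= n:
            if m1 is None:
                m1, acc = j, 0   # start of a candidate background run
            acc += w
        else:
            if m1 is not None and acc >= T:
                spans.append((m1, j - 1))
            m1, acc = None, 0
    if m1 is not None and acc >= T:  # run extending to the histogram's end
        spans.append((m1, len(proj) - 1))
    return spans
```

The rows or columns inside the returned spans are discarded as background; what remains between spans are the image blocks containing moving targets.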
After this target segmentation, multiple image blocks containing moving targets are obtained; hereafter an image block containing a moving target is simply called a moving foreground target, and it is used for subsequent head-shoulder feature detection. Thus, target segmentation narrows the detection range of the subsequent head-shoulder features, lowering the false-detection rate and improving detection efficiency.
Step 202: detect head-shoulder feature boxes in the moving foreground target.
An existing feature detection algorithm (for example, AdaBoost) is used to detect head-shoulder features in the moving foreground target. When training the classifier, head-shoulder images with consistent angle, similar size, and similar features may be chosen as positive samples to speed up feature detection.
Head-shoulder detection on the moving foreground target can yield multiple head-shoulder feature boxes, each representing one person. Because feature detection is imperfect, however, several boxes may be detected for the same person; without special handling, the subsequent count would carry a large error.
The embodiments of the present application exploit the fact that feature boxes of the same target overlap one another, fusing multiple overlapping head-shoulder feature boxes into a single box for subsequent feature tracking and matching.
The fusion process is as follows: from the currently existing head-shoulder feature boxes, select two that have not yet been tested against each other, and obtain the area of each box as well as their intersection area. Based on these areas, judge whether the two boxes satisfy the fusion condition. If they do, their minimum enclosing rectangle becomes a new head-shoulder feature box; if not, they remain independent boxes.
Then check whether every pair of currently existing head-shoulder feature boxes has been tested. If so, fusion stops, and each remaining box is independent, i.e. each can be taken to correspond to one person; if not, fusion continues.
The judgment of whether two head-shoulder feature boxes satisfy the fusion condition proceeds as follows.
First, judge whether the intersection area Area_Over of the two boxes exceeds the standard head-shoulder box area Area_S multiplied by a preset area percentage threshold θ (for example, θ = 50%), where Area_S is the square of the standard head-shoulder width Width preset for the current application scenario.
If Area_Over is less than or equal to Area_S × θ, the two boxes overlap little or not at all, so they do not satisfy the fusion condition.
If Area_Over is greater than Area_S × θ, the overlap preliminarily satisfies the fusion condition and requires further confirmation.
After the overlap preliminarily satisfies the fusion condition, compute the intersection-area percentage of the two boxes:
p = ω_a · Area_Over / Area_A + ω_b · Area_Over / Area_B    Formula (4)
ω_a + ω_b = 1, ω_a > 0, ω_b > 0    Formula (5)
where ω_a and ω_b are the weighting coefficients of head-shoulder feature boxes A and B; Area_Over is the intersection area of boxes A and B; Area_A and Area_B are their areas; and p is the intersection-area percentage of boxes A and B.
When the intersection-area percentage p is greater than the preset area percentage threshold θ, the two boxes satisfy the fusion condition; when p is less than or equal to θ, they do not.
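The two-stage fusion test can be sketched on axis-aligned rectangles (x0, y0, x1, y1). The weighted-overlap form of formula (4), p = ω_a·Over/Area_A + ω_b·Over/Area_B with ω_a + ω_b = 1, is an assumption reconstructed from the variable definitions; the default weights are illustrative.

```python
def rect_area(r):
    x0, y0, x1, y1 = r
    return max(0, x1 - x0) * max(0, y1 - y0)

def intersection(a, b):
    """Area of overlap between two rectangles (0 if disjoint)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return max(0, x1 - x0) * max(0, y1 - y0)

def maybe_fuse(a, b, std_width, theta=0.5, wa=0.5, wb=0.5):
    """Return the fused minimum enclosing rectangle when both overlap tests
    pass, else None."""
    over = intersection(a, b)
    if over <= theta * std_width ** 2:   # coarse test vs. standard box area
        return None
    p = wa * over / rect_area(a) + wb * over / rect_area(b)
    if p <= theta:                       # refined weighted-overlap test
        return None
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))
```

Repeatedly applying `maybe_fuse` to untested pairs until no pair fuses reproduces the iteration described above.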
Step 203: judge whether the head-shoulder features in the moving foreground target satisfy the people counting trigger condition.
After the head-shoulder feature box detection of step 202, target matching and trajectory tracking are performed on the detected boxes.
The matching process for head-shoulder feature boxes is as follows: obtain the area and position of the head-shoulder feature boxes in the current frame image and the previous frame image. Suppose the width of head-shoulder feature box A in the current frame is w_a, so its area is w_a²; the width of box B in the previous frame is w_b, so its area is w_b². The coordinates of a head-shoulder feature box are usually those of its center point; suppose box A is at (x_a, y_a) and box B at (x_b, y_b).
Whether the boxes in the current and previous frames match is determined from the obtained areas and positions, specifically according to the following formulas.
dist(a, b) = sqrt((x_a − x_b)² + (y_a − y_b)²)    Formula (6)
diff_area(a, b) = sqrt((w_a² − w_b²)²)    Formula (7)
Thr_Direction = θ_1 if direction = 0, θ_2 if direction = 1    Formula (8)
Thr = ω_1 · dist(a, b) / Thr_Direction + ω_2 · diff_area(a, b)    Formula (9)
ω_1 + ω_2 = 1, ω_3 × η + ω_4 = 1, η > 1    Formula (10)
where dist(a, b) is the distance between head-shoulder feature boxes A and B; diff_area(a, b) is the mean-square difference of their areas; direction denotes the moving direction of box A relative to box B, with 0 and 1 representing the two opposite directions; θ_1 and θ_2 are preset distance thresholds for the two moving directions; Thr_Direction is the distance threshold selected according to the moving direction; ω_1 and ω_3 are distance weighting coefficients; ω_2 and ω_4 are area weighting coefficients; η is an importance coefficient indicating that the distance term matters more; and Thr is the matching evaluation value.
Formula (8) sets two direction-dependent distance thresholds (θ_1 and θ_2) because, owing to the camera's mounting angle, a person moving the same physical distance toward or away from the camera covers different distances in the image. The embodiments of the present application therefore improve head-shoulder matching accuracy by setting different distance thresholds for the two moving directions.
The matching evaluation value Thr of the head-shoulder feature boxes in two adjacent frames, computed by the above formulas, reflects how well the boxes match. A preset matching evaluation threshold δ is obtained, and Thr is compared against it: when Thr is less than δ, the boxes in the current and previous frames are determined to match; otherwise they do not match, and the box in the current frame is a new head-shoulder feature box.
As the above process shows, the head-shoulder matching principle of the embodiments of the present application is: the closer the boxes in adjacent frames are in distance and area, the higher the matching degree.
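A sketch of the match evaluation, with each box given as (center_x, center_y, width). Since the patent's formula images for (6)-(9) were lost in extraction, the exact combination of the distance and area terms is not recoverable; this version assumes a Euclidean distance normalized by the direction-dependent threshold, an absolute area difference normalized by the larger box area, and a simple two-term weighted sum. The weights and the direction rule are illustrative assumptions.

```python
import math

def match_score(a, b, theta1, theta2, w1=0.6, w2=0.4):
    """Matching evaluation between a current-frame box a and a previous-frame
    box b; a smaller score means a better match."""
    (xa, ya, wa), (xb, yb, wb) = a, b
    dist = math.hypot(xa - xb, ya - yb)           # center distance
    diff_area = abs(wa ** 2 - wb ** 2)            # boxes are square: area = width^2
    direction = 0 if ya >= yb else 1              # assumed direction convention
    thr_direction = theta1 if direction == 0 else theta2
    return w1 * dist / thr_direction + w2 * diff_area / max(wa ** 2, wb ** 2)

def is_match(a, b, theta1, theta2, delta=1.0):
    """Compare the evaluation value against the matching threshold delta."""
    return match_score(a, b, theta1, theta2) < delta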
After a head-shoulder match is determined, trajectory tracking is performed on the head-shoulder feature: the position of the box in the current frame image (the current position) is recorded, the position where the box first appeared in the image detection area (the starting position) is recorded, and the number of appearances of the box is accumulated.
After this trajectory information is obtained, whether the head-shoulder feature satisfies the people counting trigger condition is judged as follows: judge whether the head-shoulder feature box is moving away from the counting trigger line along its direction of motion, where the direction of motion is the direction from the starting position to the current position and the counting trigger line is a preset line in the detection area; and judge whether the inter-frame movement distance of the box, i.e. its displacement between the current frame and the previous frame, is greater than or equal to a preset inter-frame movement distance threshold.
当头肩特征框沿运动方向远离计数触发线,且该头肩特征框的帧间移动距离大于或等于预设的帧间移动距离阈值时,确认该头肩特征框满足人数统计触发条件;否则,确认该头肩特征框不满足人数统计触发条件。When the head and shoulders feature box moves away from the counting trigger line along the motion direction, and the moving distance between frames of the head and shoulders feature box is greater than or equal to the preset moving distance threshold between frames, it is confirmed that the head and shoulders feature box meets the people counting trigger condition; otherwise, Confirm that the head and shoulders feature box does not meet the people counting trigger condition.
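The two conditions can be illustrated as a single check. This is a minimal sketch, not the patent's implementation: the function name, the (x, y) tuple representation of box centers, and a horizontal trigger line are all assumptions.

```python
def meets_trigger(start, current, previous, trigger_line_y, min_frame_move):
    """Return True when a head-shoulder box's track satisfies both conditions.

    start, current, previous: (x, y) centers of the box at its first
    appearance, in the current frame, and in the previous frame.
    """
    # Condition 1: moving away from the (horizontal) trigger line along the
    # direction of motion, which runs from the start to the current position.
    moving_down = current[1] > start[1]
    if moving_down:
        away = current[1] > trigger_line_y and current[1] > previous[1]
    else:
        away = current[1] < trigger_line_y and current[1] < previous[1]

    # Condition 2: inter-frame displacement large enough to rule out
    # static background objects falsely detected as head-shoulder features.
    dx = current[0] - previous[0]
    dy = current[1] - previous[1]
    frame_move = (dx * dx + dy * dy) ** 0.5

    return away and frame_move >= min_frame_move
```

For example, a box that started above the line, now sits below it, and moved a full step since the last frame passes; one that barely moved, or has not yet crossed the line, does not.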
The two people-counting trigger conditions are explained in more detail below.

Condition 1: the head-shoulder feature box moves away from the counting trigger line along its direction of motion.

This condition covers at least the following two scenarios. Scenario 1: the head-shoulder feature box first appears above the counting trigger line and its current position is below the line, meaning its direction of motion is top-to-bottom and it has crossed the trigger line moving away from it; it can therefore be counted. Scenario 2: the head-shoulder feature box first appears below the counting trigger line, stays in the region below the line, and finally leaves through the lower edge of the detection area. When the box moves below its starting position, the starting and current positions again establish a top-to-bottom direction of motion away from the trigger line, so it can likewise be counted.

Condition 2: the inter-frame displacement of the head-shoulder feature box is greater than or equal to a preset inter-frame displacement threshold.

Head-shoulder box detection produces false positives, for example mistaking static background objects for human head-shoulder features, so the detected boxes need further screening. Because background objects move very little between frames, this embodiment presets an inter-frame displacement threshold: when the head-shoulder feature box moves at least that far between two adjacent frames, it is considered a genuine, credible head-shoulder feature box.
Step 204: when the head-shoulder feature box satisfies the people-counting trigger condition, count people according to the head-shoulder feature box.

When step 203 confirms that a head-shoulder feature box satisfies the people-counting trigger condition, the count for the box's direction of motion (from the starting position to the current position) is incremented by one, and the box is marked as a counted head-shoulder feature box to prevent duplicate counting.
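A minimal sketch of this count-and-mark step, assuming a top-to-bottom vs. bottom-to-top direction split and illustrative field names:

```python
class PeopleCounter:
    """Directional counter with a 'counted' mark to avoid duplicates."""

    def __init__(self):
        self.count_down = 0  # e.g. people entering (top-to-bottom motion)
        self.count_up = 0    # e.g. people leaving (bottom-to-top motion)

    def count(self, track):
        """track: dict with 'start_y', 'current_y', and 'counted' keys."""
        if track["counted"]:
            return  # already counted once; never count the same box again
        if track["current_y"] > track["start_y"]:
            self.count_down += 1  # direction from start to current is downward
        else:
            self.count_up += 1
        track["counted"] = True  # mark as a counted head-shoulder box
```

Calling `count` a second time on the same track is a no-op, which is the point of the mark.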
In addition, when a moving foreground target leaves the detection area, this embodiment checks whether any of the head-shoulder feature boxes within the target has taken part in the people count. If none has, the count is made from the moving foreground target itself.

The method described so far counts people from head-shoulder features, but it has a certain miss rate: for example, when a person within a moving foreground target is hidden by an occluding object, no head-shoulder feature can be detected for that person, and the person cannot be counted.

To handle this situation, the embodiment adds an auxiliary counting method based on moving foreground targets on top of the head-shoulder count. Just before a moving foreground target leaves the current detection area, the auxiliary method checks the counting status of the head-shoulder feature boxes within the target; when none of them has taken part in the people count, the moving foreground target itself is counted, which lowers the miss rate.
Specifically, after a moving foreground target has been extracted in step 201, the target is matched and trajectory-tracked.

Moving foreground targets are matched as follows: the areas of the moving foreground target in the current frame image and the previous frame image are obtained, and when the overlap area of the targets in the two frames is greater than a preset overlap-area threshold, the targets are confirmed as a match.

After a successful match, the moving foreground target is trajectory-tracked. Specifically: its position in the current frame image (the current position) and the position where it first appeared in the image detection area (the starting position) are recorded, and its number of appearances is accumulated.
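The overlap-area match above can be sketched as follows; the (x, y, w, h) box representation and the function names are illustrative assumptions:

```python
def overlap_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return w * h if w > 0 and h > 0 else 0

def targets_match(prev_box, cur_box, min_overlap):
    """Match when the boxes from adjacent frames overlap by at least the
    preset overlap-area threshold."""
    return overlap_area(prev_box, cur_box) >= min_overlap
```

Two 10x10 boxes offset by 5 pixels in each direction overlap by 25; disjoint boxes overlap by 0 and never match.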
When it is confirmed that none of the head-shoulder feature boxes within a moving foreground target has taken part in the people count, the matching and tracking results of the moving foreground target are used for counting.

First, an appearance-count threshold is set for moving foreground targets. This threshold is derived from the appearance counts of head-shoulder feature boxes that have already taken part in the count: the appearance counts of the N most recently counted head-shoulder feature boxes are averaged, and the average is multiplied by a preset adjustment coefficient to give the appearance-count threshold for moving foreground targets.
Because the average is always computed over the most recently counted head-shoulder feature boxes, the resulting appearance-count threshold is not a fixed value but one that adapts to the application environment in real time, which improves counting accuracy. Moreover, as a person passes through the detection area, the head-shoulder feature box appears in fewer frames than the comparatively stable moving foreground target; the average is therefore scaled by an adjustment coefficient greater than 1 so that the threshold for moving foreground targets is set more reasonably.

Once the appearance-count threshold is set, the appearance count of the current moving foreground target is compared against it. When the target's appearance count is greater than the threshold, the count for the target's direction of motion (from its starting position to its current position) is incremented by one.
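The adaptive threshold and the auxiliary counting decision might be sketched like this. The window size N and the coefficient value are illustrative assumptions; the patent only requires the adjustment coefficient to be greater than 1.

```python
from collections import deque

class AuxiliaryCounter:
    """Appearance-count threshold derived from the last N counted
    head-shoulder boxes, scaled by an adjustment coefficient > 1."""

    def __init__(self, n=10, coeff=1.5):
        self.recent = deque(maxlen=n)  # appearance counts of last N counted boxes
        self.coeff = coeff

    def record_counted_box(self, appearances):
        """Call whenever a head-shoulder box takes part in the count."""
        self.recent.append(appearances)

    def threshold(self):
        return self.coeff * sum(self.recent) / len(self.recent)

    def should_count(self, target_appearances):
        """Decide whether a departing foreground target (none of whose
        head-shoulder boxes were counted) should be counted itself."""
        if not self.recent:
            return False  # no counted boxes yet; no basis for a threshold
        return target_appearances > self.threshold()
```

With recent counts of 10, 20, and 30 and a coefficient of 1.5, the threshold is 30, so a target seen 31 times is counted and one seen 30 times is not.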
As the description above shows, this application combines detected head-shoulder features with extracted moving foreground targets for people counting, lowering the probability of missed detections and improving adaptability to different scenes. At the same time, techniques such as target segmentation reduce the computation required, so the people-counting method of this application can run on video surveillance devices with relatively limited processing power, such as cameras, improving the real-time performance and efficiency of the count.
Corresponding to the foregoing embodiments of the people-counting method, this application also provides embodiments of a people-counting apparatus.

Embodiments of the people-counting apparatus of this application can be deployed on image processing devices. An apparatus embodiment may be implemented in software, in hardware, or in a combination of the two. Taking software implementation as an example, the apparatus, as a logical device, is formed by the processor of its host device executing the corresponding computer program instructions held in memory. At the hardware level, Fig. 4 shows a hardware architecture of a device hosting the people-counting apparatus of this application; besides the processor, other interfaces, and memory shown in Fig. 4, the host device will generally include other hardware according to its actual function, which is not described further here.

Refer to Fig. 5, a schematic structural diagram of the people-counting apparatus in one embodiment of this application. The people-counting apparatus includes an extraction unit 501, a detection unit 502, a judgment unit 503, and a counting unit 504, where:
the extraction unit 501 is configured to extract a moving foreground target by performing target segmentation on the detection area of the current frame image;

the detection unit 502 is configured to detect head-shoulder feature boxes within the moving foreground target;

the judgment unit 503 is configured to judge whether a head-shoulder feature box within the moving foreground target satisfies a people-counting trigger condition; and

the counting unit 504 is configured to count people according to the head-shoulder feature box when the box satisfies the people-counting trigger condition.
Further, the extraction unit 501 includes:

a foreground image acquisition module, configured to acquire the foreground image of the detection area of the current frame image;

a foreground target box acquisition module, configured to obtain foreground target boxes by post-processing the foreground image;

a frame difference map calculation module, configured to compute the frame difference map between the detection area of the current frame image and that of the previous frame image;

a frame difference texture map acquisition module, configured to obtain the frame-difference edge texture map of the detection area's frame difference map;

an image projection module, configured to perform horizontal and vertical projection of the image region corresponding to a foreground target box on the frame-difference edge texture map;

a histogram acquisition module, configured to obtain the horizontal and vertical projection histograms produced by the projection; and

a target segmentation module, configured to segment the frame-difference edge texture image within the foreground target box according to the horizontal and vertical projection histograms to obtain the moving foreground targets.
Further, the target segmentation module includes:

a priority calculation submodule, configured to compute the processing priority of the horizontal projection histogram and that of the vertical projection histogram respectively; and

a target segmentation submodule, configured to segment the frame-difference edge texture image within the current foreground target box using the projection histogram with the higher processing priority, and then to segment the result using the projection histogram with the lower processing priority, thereby obtaining a number of moving foreground targets.
Further, the priority calculation submodule is specifically configured as follows:

the processing priority of the horizontal projection histogram and that of the vertical projection histogram are computed by the same method, namely:
where u_x is the processing priority, x_i is the number of rows or columns whose projection value is i, ω_i is a weighting coefficient, and n is a preset projection threshold.
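The formula itself appears only as an image in the original; a plausible reading consistent with the symbols above is u_x = Σ_{i=1}^{n} ω_i·x_i, which is an assumption, not the patent's stated formula. Under that assumption, the priority could be computed as:

```python
from collections import Counter

def projection_priority(histogram, weights, n):
    """Assumed form u_x = sum over i = 1..n of w_i * x_i.

    histogram: per-row (or per-column) projection values
    weights:   mapping from projection value i to weighting coefficient w_i
    n:         preset projection threshold (upper bound on i)
    """
    counts = Counter(histogram)  # x_i: number of rows/columns with projection value i
    return sum(weights.get(i, 0) * counts.get(i, 0) for i in range(1, n + 1))
```

For a histogram [1, 1, 2, 3, 3, 3] with weights {1: 1, 2: 1, 3: 2} and n = 3, the counts are x_1 = 2, x_2 = 1, x_3 = 3, giving a priority of 1·2 + 1·1 + 2·3 = 9.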
Further, the target segmentation submodule is specifically configured as follows:

target segmentation with the higher-priority projection histogram and target segmentation with the lower-priority projection histogram use the same method, namely:

the segmentation accumulated value of the selected projection histogram is computed by the target segmentation algorithm:
where ω_j is a weighting coefficient and a positive integer, n is the projection threshold, y_j is the projection value of the j-th row or column, f(ω_j) is the weighting coefficient with positive or negative sign, m_1 and m_2 are rows or columns with m_2 > m_1, and T_y is the segmentation accumulated value.
m_1 and m_2 are confirmed as target segmentation lines when the segmentation accumulated value T_y is greater than or equal to the preset segmentation threshold T between them and falls below the segmentation threshold T outside them.
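The accumulation formula likewise appears only as an image in the original. As a hedged stand-in, the following sketch finds segmentation lines the way projection-histogram methods commonly do, treating m_1 and m_2 as the bounds of a run whose values stay at or above the threshold T; this illustrates the idea, not the patent's exact algorithm.

```python
def find_segments(projection, T):
    """Return (m1, m2) index pairs bounding runs where projection >= T."""
    segments, m1 = [], None
    for j, v in enumerate(projection):
        if v >= T and m1 is None:
            m1 = j          # run starts: value rises to the threshold
        elif v < T and m1 is not None:
            segments.append((m1, j))  # run ends: value drops below it
            m1 = None
    if m1 is not None:      # run still open at the end of the histogram
        segments.append((m1, len(projection)))
    return segments
```

A projection [0, 5, 6, 0, 0, 7, 8, 9, 0] with T = 5 yields two candidate targets, bounded by (1, 3) and (5, 8).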
Further, the apparatus also includes:

a tracking unit, configured to perform target matching and trajectory tracking on the head-shoulder feature boxes after the detection unit 502 detects them within the moving foreground target; and

a recording unit, configured to record, from the target matching and trajectory tracking results, the current position of a head-shoulder feature box in the current frame image, and the starting position where the box first appeared in the image detection area.

The judgment unit 503 is specifically configured to: judge whether the head-shoulder feature box is moving away from the counting trigger line along its direction of motion, the direction of motion running from the starting position to the current position; judge whether the inter-frame displacement of the box, i.e. its movement between the current frame image and the previous frame image, is greater than or equal to a preset inter-frame displacement threshold; determine that the box satisfies the people-counting trigger condition when it is moving away from the counting trigger line along its direction of motion and its inter-frame displacement is greater than or equal to the preset threshold; and determine that it does not satisfy the condition otherwise.
For details of how the functions and effects of the individual units of the apparatus are realized, see the implementation of the corresponding steps in the method above; they are not repeated here.

Since the apparatus embodiments essentially correspond to the method embodiments, the relevant parts of the method description apply. The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may reside in one place or be distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of this application's solution, which a person of ordinary skill in the art can understand and implement without creative effort.

The above are merely preferred embodiments of this application and are not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall fall within its scope of protection.
Claims (11)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510540599.2A CN105139425B (en) | 2015-08-28 | 2015-08-28 | A kind of demographic method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN105139425A CN105139425A (en) | 2015-12-09 |
| CN105139425B true CN105139425B (en) | 2018-12-07 |