
CN101398896B - Device and method for extracting color features with strong discriminative power for imaging device - Google Patents


Info

Publication number
CN101398896B
CN101398896B
Authority
CN
China
Prior art keywords
color
rectangle
histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200710151897A
Other languages
Chinese (zh)
Other versions
CN101398896A (en)
Inventor
陈茂林
郑文植
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd, Samsung Electronics Co Ltd filed Critical Beijing Samsung Telecommunications Technology Research Co Ltd
Priority to CN200710151897A priority Critical patent/CN101398896B/en
Priority to KR1020070136917A priority patent/KR101329138B1/en
Priority to US12/216,707 priority patent/US8331667B2/en
Publication of CN101398896A publication Critical patent/CN101398896A/en
Application granted granted Critical
Publication of CN101398896B publication Critical patent/CN101398896B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Provided are a method and a device for extracting color features with strong discriminative power. The method includes: dividing the object region and the non-object region of an input image into multiple rectangles; building color histograms of the object rectangles and the non-object rectangles, each histogram divided into multiple discrete bins; extracting the dominant colors of each rectangle from its color histogram; computing, for the extracted dominant colors, the minimum distance between the color histogram bins of an object rectangle and those of the non-object rectangles; computing the weight of each object-rectangle histogram bin from that minimum distance; computing the weight of each color component of an object rectangle from its bin weights; computing the weight of each object rectangle from its bin weights; reweighting the histogram bins of the object rectangles based on the rectangle weights and the color-component weights; and generating the final color model of the object from the reweighting result.

Description

Device and method for extracting color features with strong discriminative power for an imaging device

Technical Field

The present invention relates to pattern recognition, feature extraction, statistical learning, and object detection techniques (such as human detection). More specifically, it relates to extracting highly discriminative features of an object's color appearance, that is, revealing hidden color distribution patterns that strongly distinguish the object from other objects of the same or different kinds. The invention further relates to a device and a method for extracting such color features, and to an imaging device using the device and method.

Background Art

Representative color features of object appearance are widely used for object detection and tracking in video frames. Color features are usually represented by color histograms. To capture the spatial layout of the color distribution, histograms of independent sub-regions divided from the object region are suitable. Beyond this, additional cues (such as motion information, shape information, and geometric constraints) can complement the color model to obtain better detection results.

Various attempts have been made to track coherently moving objects using color models. Examples include: probabilistic tracking using color appearance (disclosed in P. Perez et al., "Color-Based Probabilistic Tracking", European Conference on Computer Vision 2002; hereinafter the Perez method); color histogram modeling and probability-map detection (disclosed in US Patent No. 5845009A1; hereinafter the 009 patent); a multi-search-window method that compares color-model similarity (disclosed in US Patent No. 2007127775A1; hereinafter the 775 patent); and skin-color extraction based on a statistical color model (disclosed in US Patent No. 2004017938A1; hereinafter the 938 patent).

Representative color features of object appearance are widely used for object detection in video frames. Color features are usually represented by color histograms (see the Perez method and the 009, 775, and 938 patents). Given the position and size of a moving object, its color histogram can be computed. In subsequent video frames, a similarity map against this histogram model is computed. Connected-component (blob) analysis can then cluster pixel regions with high similarity to the histogram model into blobs that indicate, with high probability, the locations of moving objects. Many methods focus on how to represent the color features of a moving object effectively. In practice, however, the important problem is how to distinguish the moving object from non-object regions (i.e., other objects of the same or different kinds, and the background), rather than how to remain perfectly faithful to the target's original color distribution. A moving object consists of multiple parts; for example, if the moving object is a person, it comprises the face/head, the clothed upper body, and the clothed lower body. A person's color appearance may look similar to other image regions, especially against a cluttered background. In that case, color regions with strong discriminative power should be selected from the human body as color features, rather than representative color features of the entire body.

Summary of the Invention

A color feature with strong discriminative power is one for which there is a large color difference between the moving object itself and other image regions. To extract such features, the color vectors in the color histograms of the object (divided into several blocks) are compared with the color vectors in the color histograms of other image regions: the importance of object color vectors close to those of other image regions is reduced, while the importance of the remaining color vectors is increased.

Accordingly, the present invention provides a method and a device that extract color features with strong discriminative power under varying imaging and scene conditions, and that provide a focus region for dynamic adjustment of an imaging unit.

According to one aspect of the present invention, there is provided a method of extracting color features with strong discriminative power, comprising the steps of: dividing the object region and the non-object region of an input image into multiple rectangles; building color histograms of the object rectangles and the non-object rectangles using three color channels, each histogram divided into multiple discrete bins; extracting the dominant colors of each rectangle from its color histogram; computing, for the extracted dominant colors, the minimum distance between the color histogram bins of an object rectangle and those of the non-object rectangles; computing the weight of each object-rectangle histogram bin from the computed minimum distance; computing the weight of each color component of an object rectangle from the computed bin weights; computing the weight of each object rectangle from the computed bin weights; reweighting the histogram bins of the object rectangles based on the computed rectangle weights and color-component weights, thereby extracting the object's strongly discriminative color features; and generating the final color model of the object from the reweighting result.

The method may further comprise: extracting, based on the object's final color model, the color blobs of the object from a new input image; and performing blob analysis on the extracted color blobs to compute the centroid and size of the object in the input image, thereby locating and tracking the object.

According to another aspect of the present invention, there is provided a device for extracting color features with strong discriminative power, comprising: a region division unit that divides the object region and the non-object region of an input image into multiple rectangles; a histogram computation unit that builds color histograms of the object rectangles and the non-object rectangles using the color channels, each histogram divided into multiple discrete bins; a dominant-color extraction unit that extracts the dominant colors of the object rectangles from the histograms built by the histogram computation unit; a minimum-bin-distance computation unit that computes, for the dominant colors extracted by the dominant-color extraction unit, the minimum distance between the color histogram bins of an object rectangle and those of the non-object rectangles; a bin-weight computation unit that computes the weight of each object-rectangle histogram bin from the minimum distance computed by the minimum-bin-distance computation unit; a color-component-weight computation unit that computes the weight of each color component of an object rectangle from the bin weights computed by the bin-weight computation unit; a rectangle-weight computation unit that computes the weight of each object rectangle from the bin weights computed by the bin-weight computation unit; a bin-reweighting unit that reweights the histogram bins of the object rectangles based on the rectangle weights computed by the rectangle-weight computation unit and the color-component weights computed by the color-component-weight computation unit, thereby extracting the object's strongly discriminative color features; and a final-color-model generation unit that generates the final color model of the object from the reweighting result of the object-rectangle histogram bins.

The device may further comprise: a color-blob extraction unit that extracts, based on the object's final color model, the color blobs of the object from a new input image; and an object locating unit that performs blob analysis on the color blobs extracted by the color-blob extraction unit and computes the centroid and size of the object in the input image based on that analysis, thereby locating and tracking the object.

According to another aspect of the present invention, there is provided an imaging device comprising: an imaging unit that captures an image of an object; the device for extracting color features with strong discriminative power according to the present invention, which receives the image from the imaging unit, extracts the object's strongly discriminative color features, generates the object's final color model, extracts the object's color blobs based on the final color model, performs blob analysis, and generates parameters for adjusting the pose of the imaging device based on that analysis so as to obtain better image quality; a control unit that receives the pose-adjustment parameters from the extraction device and adjusts the pose of the imaging device, where the pose may comprise the pan angle, tilt angle, zoom factor, and focus-region selection of a PTZ camera, or a subset thereof, e.g., zoom factor and focus-region selection for a digital camera; a storage unit that stores the captured images of the object; and a display unit that displays the captured images of the object.

The imaging device may further comprise a labeling unit that provides an object region manually labeled by a user on the image to the device for extracting color features with strong discriminative power.

The control unit may adjust at least one of the pan, tilt, zoom, and focus-region-selection operations of the imaging device according to the pose-adjustment parameters.

In the focus-region-selection operation, the control unit controls the imaging device to select the new region where the object is located as the basis for focusing, so that the device focuses on that region.

When the control unit controls the imaging device to select a focus region, the imaging device either selects the central image region as the default imaging focus region or dynamically selects the new image region where the object is located as the imaging focus region, and dynamically adjusts the zoom factor, focal length, pan, or tilt parameters of the imaging device according to the image data of the focus region.

Brief Description of the Drawings

These and/or other aspects and advantages of the present invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:

Fig. 1 is a flowchart of the method of extracting color features with strong discriminative power according to the present invention;

Fig. 2 shows how the person rectangles are computed;

Fig. 3 shows the computation of the histogram of each rectangle;

Fig. 4 shows the computation of the bin weights and the effect of the discriminative color extraction of the present invention;

Fig. 5 shows the importance weighting function;

Fig. 6 shows color-blob detection using strongly discriminative color features;

Fig. 7 shows blob analysis for locating a person's position and size;

Fig. 8 is a block diagram of the device for extracting color features with strong discriminative power according to the present invention;

Fig. 9 is a block diagram of the imaging device according to the present invention.

Detailed Description

Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below with reference to the figures in order to explain the present invention.

In the following, the present invention is described using a person as an example of a moving object; however, the invention is not limited thereto and applies to other types of moving objects as well.

Fig. 1 is a flowchart of the method of extracting color features with strong discriminative power according to the present invention. In step 101, the system receives a video input together with the position and size of a person, given by motion extraction, manual labeling, or a human detector (as disclosed, e.g., in US 2006/0147108 A1). Various motion-extraction methods exist; for example, background subtraction can be used because the imaging device is stationary in its initial state. Manual labeling directly gives the person's position and size. A human detector scans the input image for candidate person positions and sizes. Although people appear at different sizes in an image, the aspect ratio lies in a fixed range, which can be used to safely determine the image regions other than the moving object (i.e., non-object regions, including other objects of the same or different kinds and the background). The subsequent steps extract strongly discriminative color regions from the known person rectangles and non-person rectangles. In step 102, the color image is quantized: color pixels with three color channels are quantized into N discrete bins, e.g., N = 64.
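The quantization of step 102 can be sketched as follows; this is a minimal Python sketch, and the 8-bit channel range and the integer bin mapping are assumptions, since the text only states that each of the three channels is quantized into N = 64 discrete bins.

```python
# Map one 8-bit channel value (0..255) to one of n_bins discrete histogram
# bins; with n_bins = 64 each bin covers 256 / 64 = 4 consecutive values.
def quantize_channel(value, n_bins=64, max_value=256):
    return value * n_bins // max_value

print(quantize_channel(0))    # bin 0
print(quantize_channel(255))  # bin 63
```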

In step 103, the rectangles of the head and the body are computed. Fig. 2 illustrates this computation. For example, given the head position and size, several rectangles of the same size are stacked downward until the lower boundary of the image is reached. As prior knowledge, the face and the hair have distinct color appearances, so the head rectangle is divided into several smaller rectangles. By the same principle, the image regions not occupied by the person can be safely estimated from statistical person height(head)-to-width ratios. The background rectangles need not be estimated precisely; a rough estimate suffices to ensure that no pixel of the moving object is wrongly assigned to a background rectangle.
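The downward stacking of equal-sized rectangles described above can be sketched as follows; the coordinate convention (vertical spans measured downward from the image top) is an illustrative assumption.

```python
# Starting below a given head box, stack rectangles of the head's height
# until the image's lower boundary would be crossed.
def tile_body_rectangles(head_top, head_height, image_height):
    """Return (top, bottom) vertical spans of the stacked rectangles."""
    rects = []
    top = head_top + head_height
    while top + head_height <= image_height:
        rects.append((top, top + head_height))
        top += head_height
    return rects

print(tile_body_rectangles(head_top=0, head_height=40, image_height=130))
# [(40, 80), (80, 120)]
```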

In step 104, the rectangle histograms are computed. Fig. 3 shows the computation of the color histogram of each rectangle, including the head sub-rectangles, the body rectangles, and the background rectangles (i.e., the rectangles other than the given person rectangles, forming the non-object region). The histogram of each rectangle is computed separately; each histogram is divided into multiple bins, and the number of pixels falling into each bin is accumulated.
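A minimal sketch of the per-rectangle histogram accumulation, assuming the rectangle's pixels are given as (r, g, b) tuples and using the N = 64 quantization of step 102:

```python
# Build one 64-bin histogram per color channel for a rectangle: count how
# many of its pixels fall into each quantized bin.
def rectangle_histograms(pixels, n_bins=64):
    hists = {c: [0] * n_bins for c in ("r", "g", "b")}
    for r, g, b in pixels:
        hists["r"][r * n_bins // 256] += 1
        hists["g"][g * n_bins // 256] += 1
        hists["b"][b * n_bins // 256] += 1
    return hists

h = rectangle_histograms([(0, 128, 255), (0, 128, 255), (4, 128, 255)])
print(h["r"][0], h["r"][1])  # 2 1
```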

In step 105, after the color histogram of each rectangle is obtained, the dominant colors are kept and the minor colors are discarded. Minor colors, carried by only a small number of pixels, are unstable and unreliable because they are easily affected by noise.
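The dominant-color selection can be sketched as keeping only bins that hold at least some fraction of the rectangle's pixels; the 5% threshold below is an assumed parameter, not specified in the text.

```python
# Keep the bin IDs whose pixel count reaches min_fraction of the rectangle's
# total; the remaining (minor, noise-prone) bins are discarded.
def dominant_bins(hist, min_fraction=0.05):
    total = sum(hist)
    return [i for i, count in enumerate(hist)
            if total and count / total >= min_fraction]

print(dominant_bins([90, 1, 0, 9]))  # [0, 3]
```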

In step 106, the minimum bin distance is computed.

The distance between the color histogram bins of the object rectangles and those of the non-object rectangles is computed as shown in equations (1) and (2). In the following, the background region is taken as an example of the non-object region.

J[H_r^c(i), BG^c(j)] = \arg\min_j |b_i - b'_j|    (1)

D_r^c(i) = |b_i - b'_J|    (2)

where J denotes the bin identifier (ID) of the background color distribution of color channel c that has the minimum distance to bin i of the dominant-color histogram of person rectangle r in channel c; b_i is the ID of bin i; H_r^c(i) is the color histogram value of bin i in channel c of person rectangle r; BG^c(j) is the background color histogram value of bin j in channel c; b'_j is the ID of bin j; and D_r^c(i) is the minimum distance between bin i of channel c of person rectangle r and the background histogram bins of the same channel.
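For one dominant bin, equations (1) and (2) reduce to a nearest-ID search over the background bins of the same channel; a minimal sketch, with integer bin IDs assumed per the quantization of step 102:

```python
# Find the background bin J whose ID is closest to bin_i, and the
# corresponding minimum distance D (equations (1) and (2)).
def min_bin_distance(bin_i, background_bins):
    """Return (J, D): the closest background bin ID and its distance to bin_i."""
    j_star = min(background_bins, key=lambda j: abs(bin_i - j))
    return j_star, abs(bin_i - j_star)

print(min_bin_distance(10, [2, 7, 40]))  # (7, 3)
```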

In step 107, the bin weights are computed.

The discriminative power (weight) of a bin in rectangle r and channel c is defined as shown in equation (3).

w_r^c(i) = T(D_r^c(i), s) \times |H_r^c(i) - BG^c(J)|    (3)

where s = \mathrm{sign}(H_r^c(i) - BG^c(J)) and T(x, y) = e^{x \times k \times y}, with k a constant.
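A minimal sketch of equation (3), with T(x, y) = e^{x×k×y}; the value of the constant k is an assumption, since the text leaves it unspecified.

```python
import math

# Weight of one bin (equation (3)): the absolute histogram difference
# |H - BG|, scaled exponentially by the minimum bin distance D and the sign
# of (H - BG). k = 0.5 is an assumed constant.
def bin_weight(h_i, bg_j, d_i, k=0.5):
    s = 1 if h_i - bg_j > 0 else (-1 if h_i - bg_j < 0 else 0)
    t = math.exp(d_i * k * s)          # T(x, y) = e^{x * k * y}
    return t * abs(h_i - bg_j)

# A bin far from any background color (d = 4) gains weight ...
far = bin_weight(0.3, 0.1, 4)
# ... while one overlapping the background (d = 0) keeps only |H - BG|.
near = bin_weight(0.3, 0.1, 0)
print(far > near)  # True
```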

In step 108, the color component weights are computed.

In each rectangle, a pixel consists of three color components: the red, green, and blue elements. Because each component is quantized into multiple histogram bins, the components have different discriminative power depending on the distances of the histogram bins they involve.

w_c = \sum_{m=1}^{M} w_r^c(m)    (4)

W_c = \frac{w_c}{\sum_{k=1}^{K} w_k}    (5)

In equation (4), the weights of all bins of color component c are accumulated. The weight of each color component can then be computed according to equation (5). In equations (4) and (5), M denotes the number of bins of the person-rectangle histogram and K the number of color channels.
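Equations (4) and (5) can be sketched as follows; the channel names and the input layout (a mapping from channel to its list of bin weights) are illustrative assumptions.

```python
# Sum the bin weights inside each channel (equation (4)), then normalize
# across channels to obtain the component weights W_c (equation (5)).
def component_weights(bin_weights_per_channel):
    """bin_weights_per_channel: {channel: [w_r^c(m) for each bin m]}"""
    totals = {c: sum(ws) for c, ws in bin_weights_per_channel.items()}
    grand = sum(totals.values())
    return {c: (t / grand if grand else 0.0) for c, t in totals.items()}

w = component_weights({"r": [1.0, 1.0], "g": [2.0], "b": [0.0]})
print(w["g"])  # 0.5
```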

In step 109, the rectangle weights are computed.

The importance of a person rectangle depends on the discriminative power of the color bins it contains. A rectangle that covers a highly discriminative color region receives a large weight relative to the other person rectangles.

w_r = \sum_{k=1}^{K} \sum_{m=1}^{M} w_r^k(m)    (6)

W_r = \frac{w_r}{\sum_{l=1}^{L} w_l}    (7)

In equation (6), the sum of the bin weights within person rectangle r is computed. The rectangle weight can then be computed according to equation (7), which assigns high importance to the rectangles containing highly discriminative color bins; L denotes the number of person rectangles.
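A minimal sketch of equations (6) and (7), with each rectangle's bin weights grouped by channel (an assumed layout):

```python
# Each rectangle's raw weight is the sum of its bin weights over all channels
# (equation (6)); the weights are then normalized over the L person
# rectangles (equation (7)).
def rectangle_weights(bin_weights_per_rect):
    """bin_weights_per_rect: list (one entry per rectangle) of
    {channel: [bin weights]} mappings."""
    raw = [sum(sum(ws) for ws in rect.values()) for rect in bin_weights_per_rect]
    total = sum(raw)
    return [w / total if total else 0.0 for w in raw]

print(rectangle_weights([{"r": [3.0]}, {"r": [1.0]}]))  # [0.75, 0.25]
```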

In step 110, the histogram bins are reweighted.

W_r^c(i) = w_r^c(i) \times W_r \times W_c    (8)

Each histogram bin has a different discriminative power between the target moving object and the cluttered background or other moving objects. The importance of a histogram bin depends on its initial weight w_r^c(i), the color-component weight W_c, and the rectangle weight W_r. The final importance of the histogram is therefore reweighted according to equation (8).
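Equation (8) is a per-bin rescaling; a minimal sketch, with the bins given as an assumed {bin ID: initial weight} mapping:

```python
# Final bin importance W_r^c(i) = w_r^c(i) * W_r * W_c (equation (8)).
def reweight_bins(initial_bin_weights, w_rect, w_channel):
    return {i: w * w_rect * w_channel for i, w in initial_bin_weights.items()}

print(reweight_bins({3: 1.0, 5: 2.0}, w_rect=0.5, w_channel=0.5))
# {3: 0.25, 5: 0.5}
```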

In step 111, the final color model is generated.

N_h = \{N_h^r, N_h^g, N_h^b\}

N_h^c(i) = \sum_{r=1}^{R_h} \sum_{m=1}^{M_h} W_r^c(b_m = i)    (9)

N_t = \{N_t^r, N_t^g, N_t^b\}

N_t^c(i) = \sum_{r=1}^{R_t} \sum_{m=1}^{M_t} W_r^c(b_m = i)    (10)

After the histogram bins of each person rectangle have been reweighted, the final color model (the color histogram distribution) can be generated according to equations (9) and (10), which give the final color models of the head and the torso, respectively. In equations (9) and (10), the weights of the histogram bins from the head rectangles and from the torso rectangles are accumulated separately. R denotes the number of person rectangles and M the number of bins of a person-rectangle histogram; more specifically, R_h and R_t denote the numbers of head and torso rectangles, and M_h and M_t denote the numbers of bins of the head-rectangle and torso-rectangle histograms.
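Equations (9) and (10) accumulate the reweighted bin values over the head (or torso) rectangles; a minimal sketch, with an assumed sparse per-rectangle layout of {channel: {bin ID: reweighted value}}:

```python
# Accumulate the reweighted bin values W_r^c over all rectangles of one body
# part to form the final per-channel color model (equations (9) and (10)).
def final_color_model(reweighted_rects, n_bins=64):
    model = {c: [0.0] * n_bins for c in ("r", "g", "b")}
    for rect in reweighted_rects:
        for c, bins in rect.items():
            for bin_id, w in bins.items():
                model[c][bin_id] += w
    return model

m = final_color_model([{"r": {3: 0.25}}, {"r": {3: 0.25, 5: 0.5}}])
print(m["r"][3])  # 0.5
```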

Fig. 4 shows an example illustrating the main idea of this section. Fig. 4(a) is a color histogram of one color component within a rectangle of the moving object; Fig. 4(b) is a color histogram of the same component over the image regions outside the moving object. Both histograms contain two bins, denoted BIN-1 and BIN-2. Because the object's BIN-1 is farther, under the distance transform of equations (1) and (2), from both background bins than the object's BIN-2 is, the weight of the object's BIN-1 increases while that of its BIN-2 decreases, as shown in Fig. 4(c). Although the initial weight of BIN-2 is larger than that of BIN-1, after the discriminative-feature transform the new weight of BIN-1 exceeds that of BIN-2. Fig. 5 shows the importance weighting function T(x, y), which increases the weights of bins with large bin distances and decreases those with small bin distances, the distances being computed according to equations (1) and (2).

Note that the method is presented here for the three color channels of a color image, with the final color model built independently from one-dimensional color histograms. In practice there are many variants, such as using two-dimensional or three-dimensional color histograms. For example, there are three kinds of two-dimensional color histograms: red-green, red-blue, and blue-green. The method of the present invention applies to these histograms in a similar manner to extract strongly discriminative color features.

Based on the final color model produced by the discriminative color feature extraction, the color connected components belonging to the moving object can be extracted from a new input image. For each pixel of the input image, its similarity to the target moving object is entered into a weight matrix of the same size as the input image. The elements of the weight matrix are the accumulated weights of the final color histogram model, looked up by quantizing the color values of the pixels.
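The pixel-to-weight lookup described above can be sketched as follows; the single-channel image, the 16-bin quantization, and the toy model values are illustrative assumptions, not details fixed by the invention:

```python
def weight_matrix(image, model, bins=16, levels=256):
    """Map each pixel of a single-channel image to the accumulated weight of
    the final-model bin it quantizes into (toy single-channel version)."""
    step = levels // bins
    return [[model[min(p // step, bins - 1)] for p in row] for row in image]

model = [0.0] * 16
model[12] = 0.9                  # assume the object's dominant color fell in bin 12
img = [[200, 10], [205, 15]]     # 200 // 16 == 205 // 16 == 12
wm = weight_matrix(img, model)
assert wm == [[0.9, 0.0], [0.9, 0.0]]
```

Pixels whose quantized color matches a heavily weighted model bin light up in the matrix, which is what makes the subsequent connected-component extraction possible.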

In FIG. 6 there are two persons, one of whom is selected as the tracking target in some manner; for example, the person on the right is selected. FIG. 6(a) is the new input image, FIG. 6(b) is the result of connected-component detection using the human torso model, and FIG. 6(c) is the result using the human head model. Because white is widely distributed over the background as well as over the other person, the white of the person on the right receives a very low weight, even though it accounts for most of the initial weight of the color histogram.

Referring to FIG. 7(a), there are two head candidates from a human head detector with a shape model, so the probability distribution of the head is a Gaussian centered at each given head position. FIG. 7(b) shows the result of color connected-component detection using the head color model, and FIG. 7(c) shows the result using the torso color model. In the head color connected-component detection, the probability distribution of the person is a Gaussian centered at the given head position; since an upright pose is assumed, the distribution is extended downward to indicate the presence of the torso. In the torso color connected-component detection, the probability distribution is likewise a Gaussian centered at the given torso position; since the head lies in the top region of the torso, which constrains the geometric relationship, the distribution is extended upward to indicate the presence of the head. Therefore, referring to FIG. 7(d), the centroid of the probability map can be computed as the final position of the tracked object.
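The Gaussian part-probability maps described above can be sketched like this; the map size, center, and sigma are illustrative choices, not values from the patent:

```python
import math

def gaussian_map(w, h, cx, cy, sigma=1.0):
    """Probability map that is a Gaussian centered at a detected part position."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
             for x in range(w)] for y in range(h)]

m = gaussian_map(5, 5, 2, 2)
assert m[2][2] == max(v for row in m for v in row)  # peak at the given center
```

In the patent's scheme such a map would additionally be stretched downward (head model) or upward (torso model) to encode the assumed upright pose; that extension is omitted here.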

FIG. 8 shows a block diagram of an apparatus for extracting discriminative color features according to the present invention.

Referring to FIG. 8, the apparatus for extracting discriminative color features according to the present invention includes a region dividing unit 801, a histogram calculation unit 802, a main color extraction unit 803, a bin minimum-distance calculation unit 804, a bin weight calculation unit 805, a color component weight calculation unit 806, a rectangle weight calculation unit 807, a bin reweighting unit 808, and a final color model generation unit 809.

The region dividing unit 801 divides the object region and the non-object region of the input image into a plurality of rectangles.

The histogram calculation unit 802 builds color histograms of the object rectangles and the non-object rectangles for each color channel, each color histogram being divided into a plurality of discrete bins.
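As a rough sketch of the per-channel quantization this unit performs (the bin count and normalization are illustrative, not fixed by the invention):

```python
def channel_histogram(values, bins=16, levels=256):
    """Normalized histogram of one color channel of one rectangle."""
    h = [0] * bins
    step = levels // bins
    for v in values:
        h[min(v // step, bins - 1)] += 1   # quantize value into its bin
    n = float(len(values))
    return [c / n for c in h]

h = channel_histogram([0, 255, 255, 16])
assert h[15] == 0.5 and h[0] == 0.25 and h[1] == 0.25
```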

The main color extraction unit 803 extracts the main colors of the object rectangles from the color histograms.

The bin minimum-distance calculation unit 804 calculates, for the main colors extracted by the main color extraction unit 803, the minimum distance between each color histogram bin of an object rectangle and the color histogram bins of the non-object rectangles. The unit may calculate the minimum distance according to equations (1) and (2).
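Equations (1) and (2) amount to a nearest-neighbor search over bin identifiers, which can be sketched as (the example bin identifiers are illustrative):

```python
def min_bin_distance(b_i, background_bins):
    """Equations (1)/(2): index J of the nearest non-object bin and the
    distance D_r^c(i) = |b_i - b'_J| between bin identifiers."""
    J = min(range(len(background_bins)),
            key=lambda j: abs(b_i - background_bins[j]))
    return J, abs(b_i - background_bins[J])

J, D = min_bin_distance(12, [0, 3, 9])
assert (J, D) == (2, 3)   # identifier 9 is the nearest background bin to 12
```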

The bin weight calculation unit 805 calculates the weight of each color histogram bin of an object rectangle based on the minimum distance computed by the bin minimum-distance calculation unit 804. It may calculate the bin weights according to equation (3).
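Equation (3) combines the importance weighting T(x, y) = e^(x*k*y) with the histogram difference; the constants k and s below are illustrative placeholders, since the patent only states that k is a constant:

```python
import math

def bin_weight(D, H_obj, BG, k=0.05, s=1.0):
    """Equation (3): w = T(D, s) * |H_obj - BG| with T(x, y) = exp(x*k*y)."""
    return math.exp(D * k * s) * abs(H_obj - BG)

# A bin far from every background bin is boosted relative to a nearby one
# with the same histogram difference.
assert bin_weight(40, 0.3, 0.1) > bin_weight(2, 0.3, 0.1)
assert bin_weight(10, 0.5, 0.5) == 0.0  # identical histograms carry no weight
```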

The color component weight calculation unit 806 calculates the weights of the color components of an object rectangle based on the bin weights computed by the bin weight calculation unit 805. It may calculate the color component weights according to equations (4) and (5).
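Equations (4) and (5) sum the bin weights within each channel and then normalize across channels; a minimal sketch with illustrative weights:

```python
def component_weights(bin_weights_per_channel):
    """Equations (4)/(5): per-channel sums of bin weights, normalized across
    channels so the component weights W^c sum to 1."""
    sums = [sum(ws) for ws in bin_weights_per_channel]
    total = sum(sums)
    return [s / total for s in sums]

W = component_weights([[1.0, 2.0], [0.5, 0.5], [3.0, 3.0]])
assert abs(sum(W) - 1.0) < 1e-9
assert W[2] > W[0] > W[1]   # the channel with heavier bin weights dominates
```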

The rectangle weight calculation unit 807 calculates the weight of each object rectangle based on the bin weights of the object rectangle computed by the bin weight calculation unit 805. It may calculate the rectangle weights according to equations (6) and (7).
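Equations (6) and (7) total a rectangle's bin weights over all channels and normalize over the object rectangles; a minimal sketch with illustrative weights:

```python
def rectangle_weights(bin_weights):
    """Equations (6)/(7): bin_weights[r][c][m] is the weight of bin m of
    channel c in rectangle r; each rectangle's raw weight is its total bin
    weight, normalized over all object rectangles."""
    raw = [sum(sum(ch) for ch in rect) for rect in bin_weights]
    total = sum(raw)
    return [w / total for w in raw]

Wr = rectangle_weights([[[1.0], [1.0]], [[3.0], [3.0]]])
assert Wr == [0.25, 0.75]
```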

The bin reweighting unit 808 reweights the color histogram bins of each object rectangle based on the rectangle weights computed by the rectangle weight calculation unit 807 and the color component weights computed by the color component weight calculation unit 806, thereby extracting the discriminative color features of the object. It may reweight the bins according to equation (8).
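Equation (8) is not reproduced in this text; since the reweighting is stated to depend on the rectangle weight and the color component weight, a multiplicative combination is assumed in this sketch:

```python
def reweight_bin(W_r, W_c, w):
    """Assumed multiplicative form of equation (8): scale the bin weight w
    by its rectangle importance W_r and channel importance W_c."""
    return W_r * W_c * w

assert abs(reweight_bin(0.5, 0.4, 2.0) - 0.4) < 1e-12
```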

The final color model generation unit 809 generates the final color model of the object based on the reweighted color histogram bins of the object rectangles. It may generate the final color model according to the following equations:

N = {N_r, N_g, N_b}

$$N_c(i) = \sum_{r=1}^{R} \sum_{m=1}^{M} W_r^c(b_m = i) \qquad (11)$$

where R is the number of object rectangles and M is the number of bins of an object rectangle histogram.
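Equation (11) accumulates the reweighted bin weights over all rectangles, indexed by bin identifier; a minimal sketch with illustrative integer weights:

```python
def final_color_model(reweighted, bin_ids, n_bins):
    """Equation (11): N_c(i) accumulates, over all rectangles r and their
    bins m, the reweighted weight of every bin whose identifier b_m is i."""
    N = [0] * n_bins
    for rect_w, rect_b in zip(reweighted, bin_ids):   # per object rectangle
        for w, b in zip(rect_w, rect_b):              # per histogram bin
            N[b] += w
    return N

N = final_color_model([[2, 3], [5]], [[1, 2], [1]], 4)
assert N == [0, 7, 3, 0]   # rectangles sharing bin 1 reinforce each other
```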

In addition, the apparatus for extracting discriminative color features according to the present invention may further include a color connected-component extraction unit 810 and an object localization unit 811.

The color connected-component extraction unit 810 extracts the color connected components of the object from a new input image based on the final color model of the object. The object localization unit 811 performs connected-component analysis on the extracted components and computes the centroid and size of the object in the input image, thereby localizing and tracking the object.
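The centroid-and-size computation of the localization step can be sketched as follows, assuming for simplicity that the thresholded weight matrix yields a single foreground component:

```python
def centroid_and_size(mask):
    """Connected-component analysis step: centroid and pixel count of the
    foreground region of a binary mask (a single component is assumed)."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    return (cx, cy), n

(cx, cy), size = centroid_and_size([[0, 1, 1], [0, 1, 1]])
assert (cx, cy, size) == (1.5, 0.5, 4)
```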

FIG. 9 shows a block diagram of an imaging device according to the present invention.

The imaging device includes an imaging unit 901, an apparatus 902 for extracting discriminative color features according to the present invention, a control unit 903, a storage unit 904, a display unit 905, and an annotation unit 906. The imaging device may be any of a PTZ (pan, tilt, zoom) camera, a still surveillance camera, a DC (digital camera), a DV (digital video camera), or a PVR (personal video recorder).

The imaging unit 901 is a hardware device, such as a CCD or CMOS sensor, that senses natural scenes and produces images, and its image processing chip can achieve good image quality. To track a moving object, there are two ways to provide the position and size of the object's motion region. The first is automatic: an embedded algorithm extracts the size and position of the region of the object of interest. The second is manual: the user or operator annotates the region of the object of interest on the displayed image (for example, on a touch screen). For the automatic method, objects can be detected automatically using embedded algorithms (for example, the methods disclosed in USP20060147108A1 and USP20050276446, or in the paper "Multi-view Human Head Detection in Still Images" presented at the 2005 International Conference on Machine Vision and Applications). The annotation unit 906 provides an annotation function so that the user or operator can manually mark the object region of interest on the image with a pen or a finger.

The apparatus 902 for extracting discriminative color features can receive image data from the imaging unit 901, and can also receive the size and position of the object region of interest annotated by the user, for example in the form of a rough mark. The apparatus 902 extracts the discriminative color features of the object, generates the final color model of the object, extracts the color connected components of the object based on the final color model, performs connected-component analysis, and produces parameters for adjusting the pose of the imaging device based on that analysis. Note that when the first method of providing the object position and size is used, the annotation unit 906 is optional. When there are multiple candidate tracking objects, for example several moving objects being tracked, the user can modify the tracking object automatically selected by the imaging device.

The control unit 903 can adjust the pose of the imaging device; the pose is controlled by the pan, tilt, and zoom operations of a PTZ camera and its selection of a focus region for autofocus, or by the zoom operation of a still surveillance camera, DC, DV, or PVR. The control unit 903 receives the pose-adjustment parameters from the apparatus 902, which can provide the object position and size for a new time point or a new frame. The control unit 903 adjusts the pose according to these parameters: pan/tilt operations center the object in the image, the focus-region selection operation selects the region of the object of interest, and zoom and autofocus operations focus on that region to capture the details of the moving object at high image quality. In the focus-region selection operation, the control unit can direct the imaging device to use the new region where the object is located as the basis for focusing, so that that region is brought into focus.
In addition, when the control unit directs the imaging device to select a focus region, the imaging device need not default to the central region of the image; it can dynamically select the new image region where the object is located as the imaging focus region and, according to the image data of that region, dynamically adjust its zoom factor, autofocus, focal length, pan, or tilt parameters to obtain better imaging results.

For an electronic product held in the user's hand, such as a DC, DV, or PVR, the user can manually adjust its pose so that the object of interest is centered in the image. The device itself can then automatically perform the remaining operations, such as changing the zoom factor and autofocusing.

The storage unit 904 can store the images or video to a storage device, and the display unit displays the live images or video to the user.

The present invention can be implemented as software for an embedded system connected to the imaging device and its control unit to adjust the pose parameters of the imaging device. Such an embedded imaging system receives video as input and sends commands to the control unit of the imaging device to adjust its pose, lens focus region, and so on.

Rather than extracting a main color histogram to represent the color appearance of the object, the present invention extracts a discriminative color model. A representative color model and a discriminative color model differ. When the color appearance of the moving object differs greatly from the non-object and background color models, the discriminative color model is similar to the representative one; when the color appearance differs only partially from the non-object and background models, the two can be quite dissimilar. The present invention can effectively use the discriminative color model to locate the target object in a cluttered background containing distracting color regions similar to the moving object. It can distinguish the tracked object from multiple moving objects with partially similar colors, and then locate the position and size of the tracked object, so that the imaging device can be adjusted for good imaging conditions. The discriminative color model follows the principle of an importance weighting strategy: it increases the importance/weight of color elements that differ greatly from the non-object regions or background, and decreases the importance/weight of color elements similar to them. Tracking an object with a still camera, or with a PTZ camera for coherent tracking of an interesting or suspicious moving target, can serve event analysis and image quality enhancement. In detection with the discriminative color model, the detector scans the input image and transforms it into a connected-component map based on the model. The detection can also be used to verify person candidates from a human body detector, rejecting false alarms.

In a PTZ camera, the present invention can be implemented as software in an embedded system that receives input video images and analyzes them to locate a person after building a discriminative color model from the input images. The system adjusts the pose of the PTZ camera (pan, tilt, zoom, focus-region selection) to track the person, keeping the person centered in the image with good resolution and image quality. As the camera pans and tilts, the moving person is tracked so that he or she does not leave the camera's field of view. When the camera zooms optically, the optical focal length is adjusted to obtain high-quality images of local details of the moving person or of the whole body. A video recording system can save the live data to a storage device, recording a person's behavior at high image quality when he or she enters a user-defined region, or interacting with the machine.

Note that the present invention can also be applied to portable imaging devices (for example, a DC, DV, or mobile camera). The initial position and size of the person can be annotated manually by the user or detected automatically by a detector. By locating the moving person and directly adjusting the focal length, the device can automatically focus on the moving person to obtain high image quality. To center the moving person in the image, the end user can manually adjust the lens direction of the imaging device he or she is holding.

With a still camera, the present invention can be implemented to produce, through coherent tracking of a moving person, the set of trajectories of that person; a trajectory includes the person's positions and sizes. During tracking, the system can analyze whether certain events have occurred, or interactively issue alerts or notifications. A user-defined event may be a dangerous situation or a general notification condition, for example the position or size of a moving person entering a preset forbidden region as an intruder.

The term "imaging device" as used in the present invention refers to a device having at least an imaging unit and a control unit; it may be a video camera, a still camera, or another portable device with an imaging function.

The present invention can also be used in robots that locate people using a robotic imaging device. The basic functions of a robot include avoiding obstacles, interacting with people, and tracking people. The invention can be used to discover whether a person is present in a nearby area, or to locate a person's position for tracking or for interacting with the person.

The present invention can also be used for discriminative color feature extraction and detection of other kinds of moving objects; a person was taken as the example above for ease of explanation, and the candidate target applications are not limited to human modeling and detection. For example, when imaging a moving object with a DC, the end user can draw on the touch screen with a finger or pen to manually annotate the object region of interest, indicating the object's position, size, or region. The invention can build a discriminative color model of the object and reveal its position and size in subsequent video frames, so that the DC can adjust its imaging parameters to obtain high image quality of the moving object.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, those of ordinary skill in the art will understand that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the claims.

Claims (21)

1. A method of extracting color features with strong discriminative power, comprising the steps of:
dividing an object region and a non-object region in an input image into a plurality of rectangles;
establishing color histograms of the object rectangles and the non-object rectangles using color channels, wherein each color histogram is divided into a plurality of discrete bins;
extracting the main colors of the object rectangles according to the color histograms;
calculating, for the extracted main colors, a minimum distance between a color histogram bin of an object rectangle and the color histogram bins of the non-object rectangles;
calculating a weight of the color histogram bin of the object rectangle based on the calculated minimum distance;
calculating weights of the color components of the object rectangle based on the calculated weights of its color histogram bins;
calculating a weight of the object rectangle based on the calculated weights of its color histogram bins;
re-weighting the color histogram bins of the object rectangle based on the calculated rectangle weight and color component weights, thereby extracting the strongly discriminative color features of the object; and
generating a final color model of the object based on the result of the re-weighting of the color histogram bins of the object rectangles.
2. The method of claim 1, wherein the minimum distance between a color histogram bin of the object rectangle and the color histogram bins of the non-object rectangles is calculated according to the following equations:

$$J\big[H_r^c(i),\, BG^c(j)\big] = \arg\min_j \left| b_i - b'_j \right|$$

$$D_r^c(i) = \left| b_i - b'_J \right|$$

where J denotes the bin of the non-object color histogram of color channel c that has the minimum distance to bin i of the main color histogram of object rectangle r for channel c; $b_i$ is the identifier of bin i; $H_r^c(i)$ is the color histogram value of bin i of color channel c of object rectangle r; $BG^c(j)$ is the non-object color histogram value of bin j of color channel c; $b'_j$ is the identifier of bin j; and $D_r^c(i)$ is the minimum distance between bin i of color channel c of object rectangle r and the non-object color histogram bins of the same channel.
3. The method of claim 2, wherein the weight $w_r^c(i)$ of a color histogram bin of the object rectangle is calculated according to the following equation:

$$w_r^c(i) = T\big(D_r^c(i),\, s\big) \times \left| H_r^c(i) - BG^c(J) \right|$$

where $T(x, y) = e^{x \times k \times y}$ and k is a constant.
4. The method of claim 3, wherein the weight $W^c$ of a color component of the object rectangle is calculated according to the following equations:

$$w^c = \sum_{m=1}^{M} w_r^c(m)$$

$$W^c = \frac{w^c}{\sum_{k=1}^{K} w^k}$$

where K is the number of color channels and M is the number of bins of an object rectangle histogram; the first equation computes the sum of the bin weights of color component c of object rectangle r, and the second computes the weight of color component c.
5. The method of claim 4, wherein the weight $W_r$ of an object rectangle is calculated according to the following equations:

$$w_r = \sum_{k=1}^{K} \sum_{m=1}^{M} w_r^k(m)$$

$$W_r = \frac{w_r}{\sum_{l=1}^{L} w_l}$$

where L is the number of object rectangles; the first equation computes the sum of the bin weights of the histograms of object rectangle r, and the second computes the weight of object rectangle r.
6. The method of claim 5, wherein the color histogram bins of the object rectangle are re-weighted according to the equation $W_r^c(i) = W_r \times W^c \times w_r^c(i)$ to obtain the re-weighted bin weight $W_r^c(i)$ of the color histogram bins of the object rectangle.
7. The method of claim 6, wherein the final color model of the object is generated according to the equation $N = \{N_r, N_g, N_b\}$, where, for color channel c,

$$N_c(i) = \sum_{r=1}^{R} \sum_{m=1}^{M} W_r^c(b_m = i)$$

R denotes the number of object rectangles, and M denotes the number of bins of an object rectangle histogram.
8. The method of claim 7, further comprising:
extracting a color connected component of the object from a new input image based on the final color model of the object; and
performing connected-component analysis on the extracted color connected component, and calculating the centroid and size of the object in the input image based on the analysis, so as to locate and track the object.
9. An apparatus for extracting color features with strong discriminative power, comprising:
a region dividing unit that divides an object region and a non-object region in an input image into a plurality of rectangles;
a histogram calculation unit that establishes color histograms of the object rectangles and the non-object rectangles using the color channels, wherein each color histogram is divided into a plurality of discrete bins;
a main color extraction unit that extracts the main colors of the object rectangles according to the color histograms established by the histogram calculation unit;
a bin minimum-distance calculation unit that calculates, for the main colors extracted by the main color extraction unit, a minimum distance between a color histogram bin of an object rectangle and the color histogram bins of the non-object rectangles;
a bin weight calculation unit that calculates a weight of the color histogram bin of the object rectangle based on the minimum distance calculated by the bin minimum-distance calculation unit;
a color component weight calculation unit that calculates the weights of the color components of the object rectangle based on the bin weights calculated by the bin weight calculation unit;
a rectangle weight calculation unit that calculates the weight of the object rectangle based on the bin weights calculated by the bin weight calculation unit;
a bin re-weighting unit that re-weights the color histogram bins of the object rectangle based on the rectangle weight calculated by the rectangle weight calculation unit and the color component weights calculated by the color component weight calculation unit, thereby extracting the strongly discriminative color features of the object; and
a final color model generation unit that generates a final color model of the object based on the result of the re-weighting of the color histogram bins of the object rectangles.
10. The apparatus according to claim 9, wherein the minimum distance calculation unit calculates the minimum distance between the color histogram bin of the object rectangle and the color histogram bin of the non-object rectangle according to the following equation:
$$J\left[H_r^c(i),\, BG^c(j)\right] = \arg\min_j \left| b_i - b'_j \right|$$

$$D_r^c(i) = \left| b_i - b'_J \right|$$
where J represents the interval of the color histogram of the non-object rectangle for color channel c that has the minimum distance to interval i of the main color histogram of the object rectangle r for the same channel, $b_i$ is the identifier of interval i, $H_r^c(i)$ is the color histogram value of interval i of color channel c of the object rectangle r, $BG^c(j)$ is the non-object color histogram value of interval j of color channel c, $b'_j$ is the identifier of interval j, and $D_r^c(i)$ is the minimum distance between interval i of color channel c of the object rectangle r and the non-object-rectangle color histogram intervals of the same channel.
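The nearest-interval search of claim 10 can be sketched in Python. This is an illustrative version only, assuming one-dimensional histograms whose interval identifiers $b_i$ are scalar values (e.g. bin centers); the function name and data layout are not from the patent:

```python
def min_bin_distances(target_bins, background_bins):
    """For each target-histogram interval identifier b_i, find the index J
    of the background-histogram interval b'_j minimizing |b_i - b'_j|,
    and the corresponding minimum distance D = |b_i - b'_J|."""
    J, D = [], []
    for b_i in target_bins:
        # arg min over j of |b_i - b'_j|
        j, b_j = min(enumerate(background_bins), key=lambda p: abs(b_i - p[1]))
        J.append(j)
        D.append(abs(b_i - b_j))
    return J, D

# Example: target intervals identified by 10, 50, 90; background by 12, 80.
J, D = min_bin_distances([10, 50, 90], [12, 80])
# J = [0, 1, 1], D = [2, 30, 10]
```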
11. The apparatus according to claim 10, wherein the interval weight calculation unit calculates the weight of the color histogram interval of the object rectangle according to the following equation:

$$w_r^c(i) = T\left(D_r^c(i),\, s\right) \times \left| H_r^c(i) - BG^c(J) \right|$$
wherein

[equation rendered only as an image in the source]

$T(x, y) = e^{x \times k \times y}$, and $k$ is a constant.
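The interval weight of claim 11 multiplies an exponential transfer function by the histogram difference. A minimal sketch follows; the value of $k$ and the parameter $s$ (whose defining equation survives only as an image in the source) are treated here as given illustrative inputs:

```python
import math

def transfer(x, y, k=0.1):
    # T(x, y) = e^(x * k * y); k is "a constant" per claim 11,
    # 0.1 is an illustrative choice, not from the patent.
    return math.exp(x * k * y)

def interval_weight(H_i, BG_J, D_i, s):
    # w_r^c(i) = T(D_r^c(i), s) * |H_r^c(i) - BG^c(J)|
    return transfer(D_i, s) * abs(H_i - BG_J)
```

With zero distance the transfer factor is 1, so the weight reduces to the plain histogram difference; larger distances (for positive s and k) amplify intervals that sit far from any background color.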
12. The apparatus according to claim 11, wherein the color component weight calculation unit calculates the weight $W^c$ of the color component of the object rectangle according to the following equations:
$$w^c = \sum_{m=1}^{M} w_r^c(m)$$

$$W^c = \frac{w^c}{\sum_{k=1}^{K} w^k}$$
where K represents the number of color channels and M represents the number of bins of the object rectangle histogram; the first equation computes $w^c$, the sum of the weights of the intervals of color component c of the object rectangle r, and the second equation computes the normalized weight $W^c$ of color component c.
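The channel normalization of claim 12 can be sketched as below; the dictionary layout (channel name to per-interval weights) is an illustrative choice, not specified by the patent:

```python
def channel_weights(per_channel_bin_weights):
    """per_channel_bin_weights: {channel c: [w_r^c(m) for m in 1..M]}.
    Returns {channel c: W^c}, the per-channel weights normalized so
    they sum to 1 across the K channels."""
    # w^c = sum over intervals m of w_r^c(m)
    w = {c: sum(ws) for c, ws in per_channel_bin_weights.items()}
    # W^c = w^c / sum_k w^k
    total = sum(w.values())
    return {c: wc / total for c, wc in w.items()}

# Example: red and green intervals carry equal total weight, blue none.
W = channel_weights({'r': [1, 1], 'g': [2, 0], 'b': [0, 0]})
# W == {'r': 0.5, 'g': 0.5, 'b': 0.0}
```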
13. The apparatus according to claim 12, wherein the object rectangle weight calculation unit calculates the weight $W_r$ of the object rectangle according to the following equations:
$$w_r = \sum_{k=1}^{K} \sum_{m=1}^{M} w_r^k(m)$$

$$W_r = \frac{w_r}{\sum_{l=1}^{L} w_l}$$
where L represents the number of object rectangles; the first equation computes $w_r$, the sum of the weights of all intervals of the object rectangle histogram over all channels, and the second equation computes the normalized weight $W_r$ of the object rectangle r.
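Claim 13 sums the interval weights over every channel of each rectangle and then normalizes across rectangles. A minimal illustrative sketch, using the same per-channel dictionary layout as assumed above:

```python
def rectangle_weights(bin_weights_per_rect):
    """bin_weights_per_rect[r][c] -> list of w_r^c(m) for rectangle r,
    channel c. Returns [W_r for each rectangle], normalized to sum to 1
    across the L rectangles."""
    # w_r = sum over channels k and intervals m of w_r^k(m)
    w = [sum(sum(ws) for ws in rect.values()) for rect in bin_weights_per_rect]
    # W_r = w_r / sum_l w_l
    total = sum(w)
    return [wr / total for wr in w]

# Example: two rectangles with equal total interval weight.
W = rectangle_weights([{'r': [1], 'g': [1]}, {'r': [2], 'g': [0]}])
# W == [0.5, 0.5]
```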
14. The apparatus of claim 13, wherein the interval re-weighting unit re-weights the color histogram intervals of the object rectangle according to the following equation, obtaining the weights of the re-weighted color histogram intervals of the object rectangle:

[equation rendered only as an image in the source]
15. The apparatus of claim 14, wherein the final color model generation unit generates the final color model of the object according to the equation $N = \{N^r, N^g, N^b\}$,

wherein, for each color channel c, $N^c$ is given by:

[equation rendered only as an image in the source]
where R denotes the number of object rectangles and M denotes the number of bins of the object rectangle histogram.
16. The apparatus of claim 15, further comprising:
a color connected component extracting unit which extracts color connected components of the object from a new input image based on the final color model of the object;
and an object positioning unit which performs connected component analysis on the color connected components extracted by the color connected component extracting unit and calculates the centroid and size of the object in the input image based on that analysis, thereby locating and tracking the object.
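The connected component analysis and centroid computation of claim 16 can be sketched as follows. This is a pure-Python illustration assuming a binary mask (1 where a pixel matches the final color model), 4-connectivity, and that the largest component is the object; none of these choices are specified by the patent:

```python
from collections import deque

def largest_component_centroid(mask):
    """mask: 2-D list of 0/1. Returns (centroid_row, centroid_col, size)
    of the largest 4-connected component of 1-pixels, or None if empty."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = None
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # breadth-first flood fill of one component
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if best is None or len(comp) > len(best):
                    best = comp
    if best is None:
        return None
    cy = sum(p[0] for p in best) / len(best)
    cx = sum(p[1] for p in best) / len(best)
    return cy, cx, len(best)
```

In practice a library routine (e.g. a connected-components function from an image-processing library) would replace the hand-rolled flood fill; the sketch only makes the centroid-and-size computation of the claim concrete.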
17. An image forming apparatus comprising:
an imaging unit that takes an image of a subject;
the apparatus for extracting color features with strong recognizability according to claim 16, which receives an image from the imaging unit, extracts color features of an object with strong recognizability, generates a final color model of the object, extracts color connected components of the object based on the final color model, performs connected component analysis, and generates parameters for adjusting the pose of the imaging apparatus based on that analysis;
a control unit that receives the parameters for adjusting the pose of the imaging device from the apparatus for extracting color features with strong recognizability and adjusts the pose of the imaging device;
a storage unit that stores an image of a photographed subject;
and a display unit displaying the photographed image of the subject.
18. The imaging apparatus of claim 17, further comprising:
and a labeling unit that provides the object region manually labeled on the image by a user to the apparatus for extracting color features with strong recognizability.
19. The imaging device of claim 18, wherein the control unit adjusts at least one of the pan, tilt, zoom, and focus-region selection of the imaging device according to the parameters for adjusting the pose of the imaging device.
20. The imaging apparatus according to claim 19, wherein, in the operation of selecting the focus area, the control unit controls the imaging apparatus to select the new area where the object is located as the basis for focusing, so as to focus on that area.
21. The imaging device according to claim 20, wherein, when the control unit controls the imaging device to select the focus area, the imaging device selects the image center area as the default imaging focus area or dynamically selects the new image area where the object is located as the imaging focus area, and dynamically adjusts the zoom factor, focal length, pan, or tilt parameters of the imaging device according to the image data information of the focus area.
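Claims 19-21 describe steering the camera so the focus area follows the object. A hypothetical proportional controller mapping the object's centroid offset from the image center to pan and tilt corrections might look like the sketch below; the gain, sign conventions, and units are assumptions for illustration, not from the patent:

```python
def pan_tilt_adjustment(centroid, frame_size, gain=0.1):
    """Map the object's centroid offset from the image center to pan and
    tilt corrections. Positive pan turns right, positive tilt turns up;
    the proportional gain of 0.1 is an illustrative value."""
    cy, cx = centroid          # centroid in (row, col) pixel coordinates
    h, w = frame_size          # frame height and width in pixels
    pan = gain * (cx - w / 2.0)    # object right of center -> pan right
    tilt = gain * (h / 2.0 - cy)   # object above center -> tilt up
    return pan, tilt

# Object exactly at the center: no correction needed.
assert pan_tilt_adjustment((120, 160), (240, 320)) == (0.0, 0.0)
```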
CN200710151897A 2007-09-28 2007-09-28 Device and method for extracting color features with strong discriminative power for imaging device Active CN101398896B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN200710151897A CN101398896B (en) 2007-09-28 2007-09-28 Device and method for extracting color features with strong discriminative power for imaging device
KR1020070136917A KR101329138B1 (en) 2007-09-28 2007-12-24 Imaging system, apparatus and method of discriminative color features extraction thereof
US12/216,707 US8331667B2 (en) 2007-09-28 2008-07-29 Image forming system, apparatus and method of discriminative color features extraction thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200710151897A CN101398896B (en) 2007-09-28 2007-09-28 Device and method for extracting color features with strong discriminative power for imaging device

Publications (2)

Publication Number Publication Date
CN101398896A CN101398896A (en) 2009-04-01
CN101398896B true CN101398896B (en) 2012-10-17

Family

ID=40517435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200710151897A Active CN101398896B (en) 2007-09-28 2007-09-28 Device and method for extracting color features with strong discriminative power for imaging device

Country Status (2)

Country Link
KR (1) KR101329138B1 (en)
CN (1) CN101398896B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11720745B2 (en) 2017-06-13 2023-08-08 Microsoft Technology Licensing, Llc Detecting occlusion of digital ink

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
RU2556417C2 (en) * 2009-06-25 2015-07-10 Конинклейке Филипс Электроникс Н.В. Detecting body movements using digital colour rear projection
CN102592272B (en) * 2011-01-12 2017-01-25 深圳市世纪光速信息技术有限公司 Extracting method and device of picture dominant tone
WO2012100819A1 (en) * 2011-01-25 2012-08-02 Telecom Italia S.P.A. Method and system for comparing images
CN103135754B (en) * 2011-12-02 2016-05-11 深圳泰山体育科技股份有限公司 Adopt interactive device to realize mutual method
CN102879101A (en) * 2012-08-22 2013-01-16 范迪 Chromatic aberration perception instrument
CN111476735B (en) * 2020-04-13 2023-04-28 厦门美图之家科技有限公司 Face image processing method and device, computer equipment and readable storage medium
CN111565300B (en) * 2020-05-22 2020-12-22 深圳市百川安防科技有限公司 Object-based video file processing method, device and system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN1384464A (en) * 2001-01-20 2002-12-11 三星电子株式会社 Feature matching and target extracting method and device based on sectional image regions
CN101004748A (en) * 2006-10-27 2007-07-25 北京航空航天大学 Method for searching 3D model based on 2D sketch

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP4725105B2 (en) * 2004-01-06 2011-07-13 ソニー株式会社 Image processing apparatus and method, program, and recording medium
KR100572768B1 (en) * 2004-06-02 2006-04-24 김상훈 Automatic Detection of Human Face Objects for Digital Video Security

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN1384464A (en) * 2001-01-20 2002-12-11 三星电子株式会社 Feature matching and target extracting method and device based on sectional image regions
CN101004748A (en) * 2006-10-27 2007-07-25 北京航空航天大学 Method for searching 3D model based on 2D sketch

Non-Patent Citations (2)

Title
JP特开2003-36438A 2003.02.07
JP特开平5-159050A 1993.06.25

Also Published As

Publication number Publication date
KR101329138B1 (en) 2013-11-14
CN101398896A (en) 2009-04-01
KR20090032908A (en) 2009-04-01

Similar Documents

Publication Publication Date Title
CN101406390B (en) Method and apparatus for detecting part of human body and human, and method and apparatus for detecting objects
CN101398896B (en) Device and method for extracting color features with strong discriminative power for imaging device
US8447100B2 (en) Detecting apparatus of human component and method thereof
EP1426898B1 (en) Human detection through face detection and motion detection
CN101894376B (en) Person tracking method and person tracking apparatus
CN101894375B (en) Person tracking method and person tracking apparatus
US8706663B2 (en) Detection of people in real world videos and images
JP5016541B2 (en) Image processing apparatus and method, and program
CN103473542B (en) Multi-clue fused target tracking method
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
US20080013837A1 (en) Image Comparison
US20090245575A1 (en) Method, apparatus, and program storage medium for detecting object
JP5438601B2 (en) Human motion determination device and program thereof
CN113805824B (en) Electronic device and method for displaying image on display apparatus
GB2414615A (en) Object detection, scanning and labelling
US20030052971A1 (en) Intelligent quad display through cooperative distributed vision
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
US8331667B2 (en) Image forming system, apparatus and method of discriminative color features extraction thereof
US20090245576A1 (en) Method, apparatus, and program storage medium for detecting object
GB2467643A (en) Improved detection of people in real world videos and images.
Bertozzi et al. Multi stereo-based pedestrian detection by daylight and far-infrared cameras
JP2011150594A (en) Image processor and image processing method, and program
KR101146417B1 (en) Apparatus and method for tracking salient human face in robot surveillance
Utasi et al. Recognizing human actions by using spatio-temporal motion descriptors
Nakashima et al. Inferring what the videographer wanted to capture

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant