JP2000023142A - Image monitoring device - Google Patents
Image monitoring device
- Publication number
- JP2000023142A (application numbers JP10183672A / JP18367298A)
- Authority
- JP
- Japan
- Prior art keywords
- image
- screen
- monitoring
- binarization
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Burglar Alarm Systems (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
(57) [Abstract]
[Problem] To provide an image monitoring device that performs high-precision binarization in a short time even when a plurality of monitoring areas with different sensitivity levels exist in an environment where illumination changes drastically.
[Solution] The image monitoring device of the present invention sets a plurality of monitoring areas on the monitoring screen and designates a sensitivity level for each area. A screen in which the sensitivity level value is written for each area is created; the per-pixel minimum/maximum luminance of the input image and the reference image is obtained, a conversion magnification corresponding to each luminance value is calculated, and the sensitivity levels are adjusted accordingly. A pixel-by-pixel difference between the input image and the reference image is computed, the difference image is compared with the luminance-adjusted binarization threshold screen and binarized to extract changed regions, and neighboring extracted regions are gathered into integrated regions. For each integrated region, grayscale pattern matching by normalized correlation is performed between the same region of the current input image and of the reference image, thereby detecting the object to be monitored.
Description
[0001]
[Technical Field of the Invention] The present invention relates to an image monitoring device that captures images of an object and monitors the object using the captured images, and more particularly to an image monitoring device that performs high-precision binarization in a short time even when a plurality of monitoring areas with different sensitivity levels exist in an environment where illumination changes drastically.
[0002]
[Prior Art] A first conventional technique for monitoring an object is to perform binarization separately for each of a plurality of regions having different sensitivity levels in an environment with severe illumination fluctuation, extracting changed regions and monitoring the object region by region.
[0003] A second conventional technique for monitoring an object is, when illumination fluctuation is severe, to compute the density (gray-level) frequency distribution of the difference image or the like and automatically calculate the binarization threshold from it.
[0004]
[Problems to be Solved by the Invention] In the first conventional technique described above, if binarization is performed region by region with different sensitivity levels over a complex background scene, the processing time grows in proportion to the number of regions, and online monitoring becomes impossible.
[0005] The second conventional method likewise calculates the binarization threshold automatically for each region because the sensitivity levels differ, so the processing time again grows in proportion to the number of regions and online monitoring becomes impossible. If, to shorten the processing time, the density frequency distribution is instead computed over the whole screen and a single threshold is calculated automatically, the threshold is influenced by one of the differing luminance changes and an appropriate binarization threshold cannot be obtained.
[0006] An object of the present invention is to overcome the above problems of the prior art and to provide an image monitoring device that performs high-precision binarization in a short time even when a plurality of monitoring areas with different sensitivity levels exist in an environment where illumination changes drastically.
[0007]
[Means for Solving the Problems] In the method of the present invention for achieving the above object, when different sensitivity levels are set for a plurality of monitoring areas, a binarization screen is first created by writing the sensitivity level value corresponding to each monitoring area into a screen. Next, the input image and the reference image are compared pixel by pixel to create an image of the maximum or minimum luminance (a maximum/minimum luminance screen). Using this maximum/minimum luminance screen, the sensitivity level value of each pixel of the binarization screen is adjusted, for example lowered below the set level value in dark parts and raised above the set level value in bright parts, thereby adjusting the thresholds of the binarization screen.
[0008] Meanwhile, a difference image is created by taking the pixel-by-pixel difference between the input image and the reference image. A pixel-by-pixel difference is then taken between this difference image and the threshold-adjusted binarization screen, and the result is binarized with a predetermined fixed threshold to extract changed regions. After noise removal, changed regions that are close to each other on the binary image are gathered into an integrated region, for example by a circumscribed rectangle. Then, between the reference image and the input image, the tracking-source region given by the integrated region in one image is used as a template pattern, a region at substantially the same position in the other image is used as the search region, and grayscale pattern matching by normalized correlation is performed with the template pattern; changed regions whose correlation-based similarity is higher than a predetermined value are discarded as disturbances, and changed regions with low similarity are extracted and monitored as objects to be monitored.
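As a concrete illustration of the change-region extraction just described, the following is a minimal NumPy sketch (not taken from the patent; the array names and the fixed threshold value are assumptions): the absolute difference of the two frames is compared pixel by pixel against the adjusted threshold screen, and the remainder is binarized with a small fixed value.

```python
import numpy as np

def extract_change_mask(input_img, ref_img, adjusted_screen, fixed_thresh=3):
    """Binary change mask: |input - reference| compared against the per-pixel
    adjusted threshold screen, then cut with a small fixed threshold."""
    diff = np.abs(input_img.astype(np.int16) - ref_img.astype(np.int16))
    residual = diff - adjusted_screen.astype(np.int16)   # difference with the threshold screen
    return residual > fixed_thresh                       # fixed-value binarization
```

Because the region-dependent sensitivity is already baked into adjusted_screen, a single pass over the whole frame handles all monitoring areas at once, which is the point of the scheme.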
[0009] An apparatus of the present invention that achieves the above object is an object monitoring apparatus comprising imaging means such as an ITV camera installed in a facility, a monitoring device main body that processes the image to detect objects in the monitoring areas and performs display control to store and report the detected object images, and a monitor such as a display device that displays the detected objects. The monitoring device main body is provided with: input means for capturing an input frame and A/D-converting it; sensitivity area setting means for setting the sensitivity levels of a plurality of areas; binarization screen creation means for creating a binarization screen according to the set sensitivity levels; binarization screen adjustment means for adjusting the thresholds of the binarization screen in accordance with the per-pixel maximum/minimum luminance between input images; difference image creation means for taking the difference between the reference image and the input image; change region extraction means for taking the difference between the difference image and the threshold-adjusted screen produced by the binarization screen adjustment means and binarizing it to extract changed regions; extraction region integration means for gathering neighboring extracted regions into one integrated region; object detection means for performing grayscale pattern matching by normalized correlation between the reference image and the input image for each integrated region and detecting, as an object, a changed region whose similarity is lower than a predetermined value; and display control means for storing the image when an object is detected and reporting it on request or at the time of detection. Another apparatus of the present invention that achieves the above object is an object monitoring apparatus that captures images with an ITV camera or the like installed in a facility, comprising a monitoring device main body that processes the images to detect objects in the monitoring areas and performs display control to store and report the detected object images, and a monitor such as a display device that displays the detected moving objects; this monitoring device main body is likewise provided with the input means, sensitivity area setting means, binarization screen creation means, binarization screen adjustment means, difference image creation means, change region extraction means, extraction region integration means, object detection means and display control means described above.
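The component list above maps naturally onto a pipeline of processing stages. The following structural sketch is not the patent's implementation; every name is a hypothetical stand-in chosen to mirror the numbered units (500, 1000/1500, 2000, 3000/4000, 5000, 6000, 7000), and the concrete callables must be supplied by the user.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np

Rect = Tuple[int, int, int, int]   # (y0, y1, x0, x1) circumscribed rectangle

@dataclass
class ImageMonitorPipeline:
    acquire: Callable[[], np.ndarray]                                            # 500: capture + A/D
    sensitivity_screen: np.ndarray                                               # 1000/1500: level values per area
    adjust_screen: Callable[[np.ndarray, np.ndarray, np.ndarray], np.ndarray]    # 2000
    extract_changes: Callable[[np.ndarray, np.ndarray, np.ndarray], np.ndarray]  # 3000/4000
    integrate: Callable[[np.ndarray], List[Rect]]                                # 5000
    detect: Callable[[np.ndarray, np.ndarray, List[Rect]], List[Rect]]           # 6000
    report: Callable[[np.ndarray, List[Rect]], None]                             # 7000

    def step(self, reference: np.ndarray) -> None:
        frame = self.acquire()
        screen = self.adjust_screen(self.sensitivity_screen, frame, reference)
        mask = self.extract_changes(frame, reference, screen)
        rects = self.integrate(mask)
        self.report(frame, self.detect(frame, reference, rects))
```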
[0010]
[Embodiments of the Invention] An embodiment of the present invention is described below with reference to the drawings.
[0011] FIG. 1 is a block diagram of an image monitoring device showing one embodiment of the present invention. The monitoring apparatus of this embodiment consists of a monitoring device main body 10 and a display device 8000. Alternatively, it may consist of a monitoring device main body 20, an ITV camera 100 and the display device 8000. The main body 20 may be realized by mounting an image processing board in a personal computer. The main body 10 may be realized as an intelligent camera integrating a camera, an image processing board and a personal computer.
[0012] In this embodiment, when the ITV camera 100 captures the monitoring target, the image input processing unit 500 first performs A/D conversion of the captured frame's image signal, CCD noise reduction and the like. The area sensitivity setting unit 1000 sets a plurality of monitoring areas on the monitor screen and further designates a sensitivity level for each monitoring area. The binarization screen creation unit 1500 creates a screen in which a sensitivity level value is written for each area, from the areas set by the area sensitivity setting unit 1000 and the designated sensitivity levels. The binarization screen adjustment unit 2000 obtains the per-pixel minimum or maximum luminance of the input image and the reference image, calculates a conversion magnification corresponding to each luminance value, and adjusts the sensitivity levels of the binarization screen.
[0013] The difference image creation unit 3000 performs pixel-by-pixel difference processing between the input image and the reference image. The change region extraction unit 4000 takes the difference between the difference image from the difference image creation unit 3000 and the binarization threshold screen whose luminance has been adjusted by the binarization screen adjustment unit 2000, binarizes it, and extracts changed regions. The extraction region integration unit 5000 generates integrated regions in which neighboring extracted regions are gathered. The object detection unit 6000 performs grayscale pattern matching by normalized correlation between the same integrated region of the current input image and of the reference image, judges the similarity of the normalized correlation, discards regions with high similarity as disturbances, and detects regions with low similarity as objects to be monitored.
[0014] The display control unit 7000 stores the image data of a detected object and displays information such as its detected position and the detection date and time on the display device 8000 in real time or whenever a request is made.
[0015] The processing outline of the present invention is described with reference to FIG. 2. When a monitored scene is captured and monitored with the ITV camera 100, a single scene contains a mixture of regions: important regions where the sensing level should be high, unimportant regions where the sensing level should be dulled, and so on. Moreover, when monitoring continues through day and night, the brightness varies greatly, and if there are lighting fixtures in the monitored range, their switching also causes local illumination fluctuations. Repeating the processing separately for each of a number of regions with different sensitivity levels takes too long, and online monitoring becomes impossible. Therefore, a different sensitivity level is set (for example manually) for each monitoring area. First, a binarization screen 2170 is created in which the value corresponding to the level is written for each area. Meanwhile, a screen Max/Min(B,F) 2140 is created holding the per-pixel minimum or maximum luminance of the input image 2110 and the reference image 2120. Next, a conversion magnification 2160 is defined as a function of the luminance value of Max/Min(B,F) 2140 (for example, magnification a = 0 to 2), and the binarization screen 2170 is multiplied by the conversion magnification corresponding to each pixel's luminance value in Max/Min(B,F) 2140. This produces a binarization adjustment screen 2180 in which the sensitivity level set for each monitoring area has been adjusted to follow the luminance changes of the image. Then, a pixel-by-pixel difference is taken between the difference image 2130, obtained as the per-pixel difference of the input image 2110 and the reference image 2120, and the binarization adjustment screen 2180, after which binarization 2150 is performed with a predetermined threshold (a fixed threshold just large enough not to extract noise) to extract the changed regions.
[0016] Further, the extraction region integration means integrates neighboring changed regions to create circumscribed rectangular regions, and the object detection means registers one of the current input image 2110 and the reference image 2120 as a template pattern, takes a region at substantially the same position as the template pattern region in the other image as the search region, performs grayscale pattern matching of the search region with the template pattern, discards changed regions whose correlation-based similarity is higher than a predetermined value as disturbances, and detects changed regions with low similarity as objects to be monitored.
[0017] In this way, the plurality of monitoring areas are handled together and the changed regions are extracted in a single pass, while the thresholds of the binarization screen are also adjusted to follow illumination fluctuations; high-speed processing that tracks illumination changes therefore becomes possible, enabling highly accurate online monitoring.
[0018] FIG. 3 is an explanatory diagram showing an example of the monitoring areas of the area sensitivity setting unit 1000 of the present invention and the setting of their sensitivity levels. On the monitor displaying the image captured by the ITV camera 100, an important area 1200 where sensing should be heightened is designated manually or automatically and its sensitivity level is set to high sensitivity (for example, a luminance value of 64). An area 1300 where no sensing is to be performed is likewise designated manually or automatically and its sensitivity level is set to no sensing (for example, a luminance value of 255). The remaining area 1100 may be sensed less sharply, so its sensitivity level is set to medium sensitivity (for example, a luminance value of 12). In the case of entry/exit monitoring, the important area where sensing should be heightened is the door area, and the rest may be designated as areas with lowered sensing.
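A minimal sketch of writing these level values into the binarization screen, using NumPy and assuming rectangular regions (the coordinates below are hypothetical; the values 64 / 255 / 12 are the ones quoted in the text):

```python
import numpy as np

def make_sensitivity_screen(shape, high_rect, nosense_rect,
                            high=64, none=255, medium=12):
    """Per-pixel sensitivity-level screen: medium everywhere, then the
    high-sensitivity and no-sensing rectangles written on top."""
    screen = np.full(shape, medium, dtype=np.uint8)
    y0, y1, x0, x1 = high_rect        # e.g. the door area in entry/exit monitoring
    screen[y0:y1, x0:x1] = high
    y0, y1, x0, x1 = nosense_rect     # area excluded from sensing
    screen[y0:y1, x0:x1] = none
    return screen

# Hypothetical 240x320 frame with a door region and a masked-out strip at the top.
screen = make_sensitivity_screen((240, 320), (60, 200, 120, 200), (0, 30, 0, 320))
```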
[0019] FIG. 4 is a block diagram showing an example of the inside of the image input processing unit 500 of the present invention. The A/D conversion unit 550 takes in the image captured by the ITV camera 100 and A/D-converts it to create an input image 560 (Gi); the noise removal unit 570 then applies smoothing, median filtering or the like appropriate to the noise, producing an image Fi in which the CCD noise and the like of the input image 560 have been removed.
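A short sketch of this noise-removal step (the 3x3 kernel size is an assumption; the text only names smoothing and median filtering):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_frame(gi: np.ndarray) -> np.ndarray:
    """Gi (digitized frame) -> Fi with CCD noise suppressed by a 3x3 median filter."""
    return median_filter(gi, size=3)
```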
[0020] FIG. 5 is a block diagram showing an example of the inside of the binarization screen adjustment unit 2000 of the present invention. The binarization screen setting unit 2510 writes the sensitivity level value set for each monitoring area designated by the area sensitivity setting unit 1000 into a screen, creating the binarization threshold screen. The maximum/minimum luminance screen creation unit 2520 creates a maximum/minimum luminance screen holding the per-pixel maximum or minimum luminance of the current input image and the reference image (which may be a background image, the immediately preceding input image, or the like). The binarization screen conversion unit 2530 converts the pixel values of the binarization threshold screen created by the binarization screen setting unit 2510 (the threshold screen values) in accordance with the luminance of the maximum/minimum luminance screen obtained by the maximum/minimum luminance screen creation unit 2520. The conversion magnification is about 0 to 2; for the no-sensing areas the maximum luminance serves as the threshold, so they may be left at the maximum luminance as they are.
[0021] Thus, if the conversion magnification is specified as a nonlinear (or linear) function of luminance, the thresholds of the binarization threshold screen can be changed arbitrarily for each pixel.
[0022] FIG. 6 is an explanatory diagram showing an example of the conversion magnification of the binarization screen conversion unit 2530 of the present invention. For the luminance value 2230 of the maximum/minimum luminance screen created by the maximum/minimum luminance screen creation unit 2520, the magnification of the conversion magnification 2240 is given by the function 2160. For example, when the input image is dark, the luminance difference between the reference image and the input image is usually small, so a smaller binarization threshold is better suited to extracting a changed object; when the input image is bright, the luminance difference is usually large, so a larger binarization threshold is better suited. The function 2160 is therefore made a nonlinear function such that when the luminance value 2230 of the maximum/minimum luminance screen (Max/Min(B,F)) is small, the conversion magnification of the corresponding threshold pixel is below 1.0, and when it is large, the conversion magnification is 1.0 or more.
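The exact curve of function 2160 is not given in the text, so the sketch below assumes a simple monotone nonlinear magnification in [0, 2] that crosses 1.0 at mid luminance; the no-sensing value 255 is passed through unchanged, as described for FIG. 5.

```python
import numpy as np

def conversion_magnification(maxmin_luma: np.ndarray) -> np.ndarray:
    """Per-pixel magnification a in [0, 2]: below 1.0 for dark pixels,
    1.0 or more for bright pixels (assumed curve, not the patent's)."""
    x = maxmin_luma.astype(np.float32) / 255.0
    return 2.0 * x ** 1.5            # crosses 1.0 around x = 0.63

def adjust_threshold_screen(sensitivity_screen, input_img, ref_img, use_max=True):
    """Scale the sensitivity-level screen by the luminance-dependent magnification;
    pixels set to 255 (no sensing) keep the maximum threshold as they are."""
    maxmin = np.maximum(input_img, ref_img) if use_max else np.minimum(input_img, ref_img)
    a = conversion_magnification(maxmin)
    adjusted = np.clip(sensitivity_screen.astype(np.float32) * a, 0, 255)
    adjusted[sensitivity_screen == 255] = 255
    return adjusted.astype(np.uint8)
```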
[0023] FIG. 7 is an explanatory diagram showing an example of the binarization threshold screen converted by the binarization screen conversion unit 2530 of the present invention. With the monitoring areas and sensitivity level values of the area sensitivity setting unit 1000 shown in FIG. 3, the value 64 of the high-sensitivity area 2630 varies over the range 64×0.0 to 64×2.0 according to the luminance value 2230 of the maximum/minimum luminance screen created by the maximum/minimum luminance screen creation unit 2520, because the magnification differs pixel by pixel. Similarly, the value 12 of the medium-sensitivity area 2610 varies over the range 12×0.0 to 12×2.0. The no-sensing area 2620, however, may keep its sensing level of 255, the maximum luminance. In other words, a binarization threshold screen is created in which, when the input image is dark, the threshold falls below the designated sensitivity so that changes are easy to extract even with a small luminance difference, and when the input image is bright, the threshold rises above the designated sensitivity so that noise is hard to extract. This makes it possible to adjust the thresholds of the binarization screen pixel by pixel even in an environment with multiple areas of different sensing levels and severe illumination fluctuation, achieving high speed and improved detection accuracy.
[0024] FIG. 8 is a block diagram showing an example of the inside of the change region extraction unit 4000 of the present invention. The binary image creation unit 4100 takes the difference between the difference image calculated by the difference image creation unit 3000 and the binarization threshold screen converted by the binarization screen conversion unit 2530, and binarizes it to create a binary image. This binarization uses a fixed threshold, which may be set to the maximum noise level plus about two or three gray levels. The small-area exclusion unit 4200 removes isolated regions of noise-level small area.
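A minimal sketch of the small-area exclusion unit 4200, assuming connected-component labeling and a hypothetical minimum blob area (the binarization itself is as sketched earlier, with a fixed threshold of roughly the maximum noise plus 2-3 gray levels):

```python
import numpy as np
from scipy.ndimage import label

def remove_small_regions(change_mask: np.ndarray, min_area: int = 10) -> np.ndarray:
    """Drop isolated noise-level blobs from the binary change mask."""
    labels, n = label(change_mask)            # connected components (default 4-connectivity)
    if n == 0:
        return change_mask
    areas = np.bincount(labels.ravel())       # areas[0] counts background pixels
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = areas[1:] >= min_area
    return keep[labels]                       # boolean mask with small blobs dropped
```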
[0025] FIG. 9 is a block diagram showing an example of the inside of the extraction region integration unit 5000 of the present invention. The label image creation unit 5010 labels the binary image created by the change region extraction unit 4000 to create a label image. The inter-label distance calculation unit 5020 obtains the centroid (center) of each label and calculates the distances between the centroids. The label integration judgment unit 5030 judges whether the distance between centroids is within a predetermined distance, and treats multiple labels within that distance as one object to be integrated. The integrated size calculation unit 5040 calculates the size of the circumscribed rectangle of the labels to be integrated. The integrated size judgment unit 5050 judges whether the size of that circumscribed rectangle exceeds a predetermined size. The label integration unit 5060 assigns the same label number, as one group, to the labels that the integrated size judgment unit 5050 has judged to be within the predetermined size.
[0026] FIG. 10 is an explanatory diagram showing one procedure by which the extraction region integration unit 5000 integrates extracted regions. Step 5110 labels the binary image; when labels 1 to Ln are assigned, Ln is the total number of labels. To iterate over the Ln labels, step 5120 initializes the label number (i). Step 5130 then advances the label number by one (increments i), and step 5140 checks whether all labels have been processed. If not, the processing from step 5150 onward is performed for the i-th label. Step 5150 calculates the centroid (center) coordinates of the i-th label. Step 5160 checks whether the X-direction and Y-direction distances between the center of the i-th label and the centers of the (i+1)-th to Ln-th labels are within the allowable range. If not, the procedure returns to step 5130. If they are, step 5170 checks whether the size of the circumscribed rectangle of the labels within the allowable range is within a predetermined size range. If the circumscribed rectangle is not within the allowable range, the procedure returns to step 5130. If it is, step 5180 adds all labels within the allowable range to the i-th label and deletes the added labels. Step 5190 sorts the i-th to Ln-th labels in ascending order, the procedure returns to step 5130, and processing starts again from the (new) i-th label. In this way, labels whose distances and circumscribed rectangle sizes are within the allowable ranges are integrated one after another.
[0027] FIG. 11 is an explanatory diagram showing an example of the check in step 5160 of whether the distance from the center is within the allowable range. For example, with two labels, whether label 5220 lies within the allowable range of label 5210 is determined by obtaining the center coordinates (o2x, o2y) 5250 of label 5220 and the center coordinates (o1x, o1y) 5240 of label 5210. If the X-direction distance 5260 and the Y-direction distance 5270 between the coordinates (o1x, o1y) and (o2x, o2y) are within the allowable ranges, the two are integrated as the same label. The allowable ranges of the X-direction distance 5260 and the Y-direction distance 5270 may be, for example, about 5 to 10 in the X direction and about 5 to 15 in the Y direction when the moving object is a vertically elongated person, or about 5 to 15 in both the X and Y directions; in any case they should be set appropriately according to how wide a range is to be integrated.
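A minimal sketch of this label integration, assuming SciPy's labeling utilities; the tolerances and the maximum merged-rectangle size are illustrative values in the range suggested above, and, unlike the procedure of FIG. 10, the sketch makes a single pass rather than re-sorting and repeating until no further merges occur.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass, find_objects

def integrate_labels(mask, tol_x=10, tol_y=15, max_w=80, max_h=120):
    labels, n = label(mask)
    if n < 2:
        return labels
    cents = center_of_mass(mask, labels, range(1, n + 1))   # (y, x) centroid per label
    boxes = find_objects(labels)                            # bounding slices per label
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if not np.any(labels == j):
                continue                                    # j already merged away
            dy = abs(cents[i - 1][0] - cents[j - 1][0])
            dx = abs(cents[i - 1][1] - cents[j - 1][1])
            if dx > tol_x or dy > tol_y:
                continue
            # circumscribed rectangle of the pair must stay within the allowed size
            y0 = min(boxes[i - 1][0].start, boxes[j - 1][0].start)
            y1 = max(boxes[i - 1][0].stop,  boxes[j - 1][0].stop)
            x0 = min(boxes[i - 1][1].start, boxes[j - 1][1].start)
            x1 = max(boxes[i - 1][1].stop,  boxes[j - 1][1].stop)
            if (x1 - x0) <= max_w and (y1 - y0) <= max_h:
                labels[labels == j] = i                     # same object: reuse label i
    return labels
```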
[0028] FIG. 12 is a block diagram showing an example of the inside of the object detection unit 6000 of the present invention. The integrated-region circumscribed rectangle calculation unit 6010 calculates the circumscribed rectangle of an integrated object. The correlation processing unit 6030 for the input image and the reference image (which may be a background image, the immediately preceding input image, or the like) uses the current input image and the reference image, takes the circumscribed rectangular region of one of them as a template pattern, and performs normalized correlation on the other image at substantially the same position as the circumscribed rectangular region (with an extension of about ±1 pixel). The similarity calculation unit 6050 for the input image and the reference image calculates the similarity from the normalized correlation.
[0029] The similarity is calculated by
[0030]
[Equation 1]
[0031] the normalized correlation processing of (Equation 1). That is, the brightness of the registered template pattern and of the target image is normalized and their brightness difference is obtained (an application of the grayscale pattern matching processing of the 3F-8 car number recognition system, 49th National Convention of the Information Processing Society of Japan, late 1994), and the computation of (Equation 1) is carried out over the entire matching region to calculate the similarity.
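Equation 1 itself is an image in the original publication and is not reproduced in this text. The brightness-normalized similarity it refers to is conventionally the zero-mean normalized cross-correlation, which for a template T and a matched region I can be written as:

```latex
r \;=\; \frac{\sum_{x,y}\bigl(T(x,y)-\bar{T}\bigr)\,\bigl(I(x,y)-\bar{I}\bigr)}
             {\sqrt{\sum_{x,y}\bigl(T(x,y)-\bar{T}\bigr)^{2}}\;
              \sqrt{\sum_{x,y}\bigl(I(x,y)-\bar{I}\bigr)^{2}}}
```

Here \bar{T} and \bar{I} are the mean brightness of the template and of the matched region, and r is the similarity compared against the threshold in the next paragraph. This standard form is an assumption standing in for the patent's own Equation 1.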
[0032] When the similarity calculated by (Equation 1) is equal to or less than a predetermined value (for example, about 0.5 to 0.8), the object judgment unit 6060 detects the region as an object and also calculates the position information of its circumscribed rectangle and the like; otherwise the region is excluded as a disturbance.
[0033] FIG. 13 is an explanatory diagram showing an example of detecting an object using the normalized correlation processing. In the circumscribed rectangle 6210 of the integrated region of the object 6220 extracted from the difference between the input image 6300 and the reference image 6200, and in the circumscribed rectangle 6230 of the integrated region of the object 6240, the extracted object is contained in only one of the input image 6300 and the reference image 6200 and not in the other. When normalized correlation is performed between the regions of the same circumscribed rectangle in the reference image and the input image for the detected object 6220, the similarity becomes low because the object 6220 exists in the input image but not in the reference image. Likewise, when normalized correlation is performed between the regions of the same circumscribed rectangle for the detected object 6240, the similarity becomes low because, even if the object 6240 exists in the reference image as well as in the input image, the changed portion is small.
[0034] Accordingly, a region with low similarity is an object to be detected, while an object with high similarity resembles the background and has changed little, so it is judged to be a disturbance. Disturbances such as flickering of lighting, swaying houseplants, and flickering edges of equipment can thus be excluded.
[0035] FIG. 14 is an explanatory diagram showing one procedure of the processing of the object detection unit 6000 of the present invention. Step 6410 registers, as a template pattern, the grayscale image of the circumscribed rectangular region calculated by the integrated-region circumscribed rectangle calculation unit in the current input image. Step 6420 sets, as the pattern matching region, the circumscribed rectangular region at substantially the same position in the reference image as in step 6410. Step 6430 performs grayscale pattern matching with the pattern registered in step 6410 and calculates the similarity. Step 6440 judges whether the calculated similarity is equal to or greater than a predetermined value (about 0.5 to 0.8). If it is, step 6540 judges the region to be a disturbance because it resembles the background; if it is below the predetermined value, step 6450 judges it to be an object because it does not resemble the background. Step 6550 judges whether all generated circumscribed rectangular regions have been processed, and if not, the procedure returns to step 6410. FIG. 15 is an explanatory diagram showing an example in which a detection result is displayed on the display device 8000 of the present invention. When the object 6220 is detected, the display control unit 7000 displays the circumscribed rectangular frame 6210 on the display device 8000 using the stored detection position of the detected object. When the display control unit 7000 displays the detected object on the display device 8000, any display method, whether color or monochrome, may be used as long as the detection is visually clear to the observer and the detected object is clearly indicated. By displaying on the display device 8000 in this way, in the case of person monitoring for example, the observer can grasp the detected person and the person's state online as an image on the display device 8000. If the display device 8000 is at a remote location, the detection of a person may be reported to a videophone or the like through standard communication means such as RS-232C and displayed on the remote display device.
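A minimal sketch of the loop of FIG. 14, assuming hypothetical helper and variable names and a similarity threshold of 0.7 (the text gives roughly 0.5 to 0.8); the ±1-pixel search around the same position is omitted for brevity.

```python
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Brightness-normalized similarity (zero-mean normalized cross-correlation)."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 1.0

def detect_objects(input_img, ref_img, rects, sim_thresh=0.7):
    """rects: (y0, y1, x0, x1) circumscribed rectangles of the integrated regions.
    Returns the rectangles judged to be monitored objects (low similarity)."""
    detected = []
    for (y0, y1, x0, x1) in rects:
        template = input_img[y0:y1, x0:x1]            # step 6410: register template
        candidate = ref_img[y0:y1, x0:x1]             # step 6420: same position in reference
        if zncc(template, candidate) < sim_thresh:    # steps 6430 / 6440
            detected.append((y0, y1, x0, x1))         # step 6450: object
        # otherwise: similar to the background, treated as a disturbance (step 6540)
    return detected
```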
[0036] According to the present invention, when a monitored scene is captured and monitored with an ITV camera, a single scene contains a mixture of important regions where the sensing level should be high, unimportant regions where the sensing level should be dulled, and so on; brightness varies greatly in day-and-night monitoring, and local illumination fluctuations also occur from the switching of lighting fixtures and the like. Repeating the processing separately for each of a number of regions with different sensitivity levels would take too long and make online monitoring impossible. In the present invention, when a different sensitivity level is set for each monitoring area, a binarization screen is first created in which the value corresponding to the level is written for each area. Meanwhile, a screen Max/Min(B,F) holding the per-pixel minimum or maximum luminance of the input image and the reference image is created. Next, a conversion magnification is defined as a function of the luminance value of Max/Min(B,F) (for example, magnification a = 0 to 2), and the binarization screen is multiplied by the conversion magnification corresponding to each pixel's luminance value in Max/Min(B,F). A binarization screen whose thresholds are adjusted pixel by pixel, following the brightness of the input image, is thereby created even in an environment with multiple regions of different sensing levels and severe illumination fluctuation; object extraction therefore becomes possible by processing the whole screen regardless of how many monitoring areas with different sensing levels exist, so that processing speed and detection accuracy are both improved.
[0037]
[Effects of the Invention] According to the present invention, it is possible to provide an image monitoring device capable of performing high-precision binarization in a short time even when a plurality of monitoring areas with different sensitivity levels exist in an environment where illumination changes drastically.
[FIG. 1] A block diagram showing one embodiment of the object monitoring device of the present invention.
[FIG. 2] An explanatory diagram showing the processing outline of the present invention.
[FIG. 3] An explanatory diagram showing an example of the monitoring areas of the sensitivity area setting unit and the setting of their sensitivity levels in the present invention.
[FIG. 4] A block diagram showing an example of the inside of the image input processing unit in the present invention.
[FIG. 5] A block diagram showing an example of the inside of the binarization screen adjustment unit in the present invention.
[FIG. 6] An explanatory diagram showing an example of the conversion magnification of the binarization screen conversion unit in the present invention.
[FIG. 7] An explanatory diagram showing an example of the binarization threshold screen converted by the binarization screen conversion unit in the present invention.
[FIG. 8] A block diagram showing an example of the inside of the change region extraction unit in the present invention.
[FIG. 9] A block diagram showing an example of the inside of the extraction region integration unit in the present invention.
[FIG. 10] An explanatory diagram showing one procedure by which the extraction region integration unit integrates extracted regions in the present invention.
[FIG. 11] An explanatory diagram showing an example of checking whether the distance from the center is within the allowable range in the present invention.
[FIG. 12] A block diagram showing an example of the inside of the object detection unit in the present invention.
[FIG. 13] An explanatory diagram showing an example of detecting an object using the normalized correlation processing in the present invention.
[FIG. 14] An explanatory diagram showing one procedure of the processing of the object detection unit in the present invention.
[FIG. 15] An explanatory diagram showing an example in which a detection result is displayed on the display device in the present invention.
100: ITV camera; 500: image input processing unit; 1000: area sensitivity setting unit; 1500: binarization screen creation unit; 2000: binarization screen adjustment unit; 3000: difference image creation unit; 4000: change region extraction unit; 5000: extraction region integration unit; 6000: object detection unit; 7000: display control unit; 8000: display device.
Continuation of front page:
(72) Inventor: Yoshiki Kobayashi, 7-1-1 Omika-cho, Hitachi-shi, Ibaraki; Hitachi Research Laboratory, Hitachi, Ltd.
(72) Inventor: Shuji Toyoshima, 1410 Inada, Hitachinaka-shi, Ibaraki; Video Information Media Division, Hitachi, Ltd.
(72) Inventor: Yasuyuki Kakuguchi, 1410 Inada, Hitachinaka-shi, Ibaraki; Video Information Media Division, Hitachi, Ltd.
F-terms (reference): 5C054 ED03 FC01 FC04 FC05 FC12 FC13 FE16 GB01 HA18; 5C084 AA01 AA06 BB06 BB31 CC19 DD11 GG56 GG57 GG78
Claims (3)
1. An image monitoring device comprising: an imaging unit that images a monitoring target; and a monitoring device main body that sets a plurality of monitoring areas in the image captured by the imaging unit, creates a binarization screen in which a different sensitivity level is designated for each of the areas, creates an adaptive binarization screen by adjusting the sensitivity levels of the binarization screen according to the luminance of the captured image signal, takes the difference between an inter-image difference image and the adaptive binarization screen, and binarizes the difference image obtained with the adaptive binarization screen.
2. The image monitoring device according to claim 1, wherein the luminance adjustment of the adaptive binarization screen depends on the per-pixel maximum luminance value or minimum luminance value of the two frames between which the inter-image difference is taken.
3. An image monitoring device comprising: an imaging unit that images a monitoring target; a sensitivity area setting unit that sets a plurality of monitoring areas on the image captured by the imaging unit and designates a sensitivity level for each monitoring area; a binarization screen creation unit that creates a screen in which a sensitivity level value is written for each monitoring area from the areas set by the sensitivity area setting unit and the designated sensitivity levels; a binarization screen adjustment unit that obtains the per-pixel minimum or maximum luminance of the image captured by the imaging unit and a reference image prepared in advance, calculates a conversion magnification corresponding to each luminance value, and adjusts the sensitivity levels of the binarization screen; a difference image creation unit that performs pixel-by-pixel difference processing between the image captured by the imaging unit and the reference image; a change region extraction unit that takes the difference between the difference image from the difference image creation unit and the binarization screen whose luminance has been adjusted by the binarization screen adjustment unit, binarizes it, and extracts changed regions; an extraction region integration unit that generates integrated regions in which neighboring extracted regions are gathered; an object detection unit that performs grayscale pattern matching by normalized correlation between the same integrated regions with respect to the reference image, judges the similarity of the normalized correlation, excludes regions with high similarity as disturbances, and detects regions with low similarity as objects to be monitored; and a display control unit that stores the image data of an object detected by the object detection unit and controls display of information such as its detected position and the detection date and time in real time or whenever a request is made.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP18367298A JP3532769B2 (en) | 1998-06-30 | 1998-06-30 | Image monitoring device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP18367298A JP3532769B2 (en) | 1998-06-30 | 1998-06-30 | Image monitoring device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| JP2000023142A true JP2000023142A (en) | 2000-01-21 |
| JP3532769B2 JP3532769B2 (en) | 2004-05-31 |
Family
ID=16139917
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| JP18367298A Expired - Fee Related JP3532769B2 (en) | 1998-06-30 | 1998-06-30 | Image monitoring device |
Country Status (1)
| Country | Link |
|---|---|
| JP (1) | JP3532769B2 (en) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2005024746A1 (en) * | 2003-09-08 | 2005-03-17 | Optex Co., Ltd. | Sensor-camera-ganged intrusion detecting apparatus |
| JP2007180932A (en) * | 2005-12-28 | 2007-07-12 | Secom Co Ltd | Image sensor |
| JP2010097265A (en) * | 2008-10-14 | 2010-04-30 | Nohmi Bosai Ltd | Smoke detecting apparatus |
| JP2010178235A (en) * | 2009-01-31 | 2010-08-12 | Keyence Corp | Safety photoelectric switch and safety control method using the same |
| JP2011215804A (en) * | 2010-03-31 | 2011-10-27 | Nohmi Bosai Ltd | Smoke detection device |
| JP2011215806A (en) * | 2010-03-31 | 2011-10-27 | Nohmi Bosai Ltd | Smoke detection device |
| CN115239611A (en) * | 2021-04-22 | 2022-10-25 | 北京君正集成电路股份有限公司 | Method for solving hot area detection false detection |
- 1998-06-30: JP application JP18367298A, patent JP3532769B2 (status: not active, Expired - Fee Related)
Also Published As
| Publication number | Publication date |
|---|---|
| JP3532769B2 (en) | 2004-05-31 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP3423861B2 (en) | Method and apparatus for monitoring a moving object | |
| RU2484531C2 (en) | Apparatus for processing video information of security alarm system | |
| JP3123587B2 (en) | Moving object region extraction method using background subtraction | |
| US20060221181A1 (en) | Video ghost detection by outline | |
| CN109413411B (en) | Black screen identification method and device of monitoring line and server | |
| CN111539980B (en) | Multi-target tracking method based on visible light | |
| EP2124194B1 (en) | Method of detecting objects | |
| CN111259763A (en) | Target detection method, device, electronic device and readable storage medium | |
| US20060056702A1 (en) | Image processing apparatus and image processing method | |
| CN112801963B (en) | Video image occlusion detection method and system | |
| JP7092616B2 (en) | Object detection device, object detection method, and object detection program | |
| CN113989732A (en) | Real-time monitoring method, system, equipment and readable medium based on deep learning | |
| Avery et al. | Investigation into shadow removal from traffic images | |
| JP3532769B2 (en) | Image monitoring device | |
| CN112770090A (en) | Monitoring method based on transaction detection and target tracking | |
| JP3423886B2 (en) | Moving object monitoring device | |
| JPH1093957A (en) | Moving object detection method, apparatus, system, and storage medium | |
| CN118485582B (en) | Image defogging method and system | |
| KR102171384B1 (en) | Object recognition system and method using image correction filter | |
| CN117456371B (en) | A method, device, equipment and medium for detecting hot spots in strings | |
| JP3294468B2 (en) | Object detection method in video monitoring device | |
| JP3232502B2 (en) | Fog monitoring system | |
| CN112364884A (en) | Method for detecting moving object | |
| JP3736836B2 (en) | Object detection method, object detection apparatus, and program | |
| KR20170034607A (en) | System, Method for Extracting Color of Foreground and Computer Readable Record Medium Thereof |
Legal Events
| Code | Title | Description |
|---|---|---|
| TRDD | Decision of grant or rejection written | |
| A01 | Written decision to grant a patent or to grant a registration (utility model) | Free format text: JAPANESE INTERMEDIATE CODE: A01; Effective date: 20040302 |
| A61 | First payment of annual fees (during grant procedure) | Free format text: JAPANESE INTERMEDIATE CODE: A61; Effective date: 20040304 |
| R150 | Certificate of patent or registration of utility model | Free format text: JAPANESE INTERMEDIATE CODE: R150 |
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20090312; Year of fee payment: 5 |
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20100312; Year of fee payment: 6 |
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20110312; Year of fee payment: 7 |
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20120312; Year of fee payment: 8 |
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20130312; Year of fee payment: 9 |
| FPAY | Renewal fee payment (event date is renewal date of database) | Free format text: PAYMENT UNTIL: 20140312; Year of fee payment: 10 |
| S531 | Written request for registration of change of domicile | Free format text: JAPANESE INTERMEDIATE CODE: R313531 |
| S111 | Request for change of ownership or part of ownership | Free format text: JAPANESE INTERMEDIATE CODE: R313115 |
| R350 | Written notification of registration of transfer | Free format text: JAPANESE INTERMEDIATE CODE: R350 |
| R350 | Written notification of registration of transfer | Free format text: JAPANESE INTERMEDIATE CODE: R350 |
| LAPS | Cancellation because of no payment of annual fees | |