WO1996002043A1 - Separation d'objet dans le traitement d'images multispectrales - Google Patents
- Publication number
- WO1996002043A1 WO1996002043A1 PCT/AU1995/000412 AU9500412W WO9602043A1 WO 1996002043 A1 WO1996002043 A1 WO 1996002043A1 AU 9500412 W AU9500412 W AU 9500412W WO 9602043 A1 WO9602043 A1 WO 9602043A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- class
- objects
- transformed
- natural
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20101—Interactive definition of point of interest, landmark or seed
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
Definitions
- This invention relates to the general field of image processing and in particular to a method of processing images to separate artificial objects from natural objects.
- The method can be applied in real time or near-real time and can operate semi-automatically.
- Processing of remotely obtained images has received considerable attention in recent years, particularly since the general availability of satellite images such as those known as Landsat.
- One aspect of the processing is the separation of artificial (man-made) objects from natural objects (surroundings).
- In many cases the image processing can be done off-line.
- For applications such as surveillance or reconnaissance it may be necessary to perform processing and analysis in real time or near real time.
- A pixel-by-pixel method of image analysis for image segmentation, target recognition and image interpretation has been described in British Patent Number 2264205, assigned to Thomson CSF.
- That method relies on characterisation of texture by determining the modulus and orientation of the gradient of the luminance at each element.
- The method is not suitable for colour object separation.
- The inventor is aware of other image processing methods based on models in which objects are represented by chains of straight lines and arcs. These methods all require extensive knowledge of the area being investigated and are computationally expensive. They are useful for post-mission analyses but are not suitable for real time or near-real time image processing.
- One object of the present invention is to provide an image processing method for separating artificial objects from natural objects.
- The method is applicable to multispectral data consisting of optical and/or infrared bands and/or synthetic aperture radar.
- A further object of the invention is to provide an object separation method which can separate objects in real time or near-real time.
- A still further object of the invention is to provide an image processing method which is useful for detecting small objects in images.
- A yet further object of the invention is to provide a method of image processing which is suitable for fusing dissimilar images.
- Yet another object is to provide the public with a useful alternative to existing image processing techniques.
- The invention resides in a method of processing multispectral images to separate artificial objects from natural objects by background discriminant transformation, including the steps of: applying a linear transformation to an image containing objects in a natural class, being the background class, and objects in an artificial class to produce a transformed image in which artificial class variance is maximised relative to natural class variance; and displaying the transformed image on a colour display means.
- The step of applying a transformation to the image preferably further includes the steps of: selecting a first area of the image that contains primarily objects in the natural class; selecting a second area of the image containing objects of both classes in which artificial objects are to be enhanced relative to natural objects; calculating a mean vector and covariance matrix of the first area, which is the mean vector and covariance matrix of the natural class; calculating a mean vector and covariance matrix of the second area, which is the mean vector and covariance matrix of the area of the image to be enhanced; calculating a mean vector and covariance matrix of the artificial class from the mean vector and covariance matrix of the natural class and the mean vector and covariance matrix of the area of the image to be enhanced; calculating transformation vectors for maximising the variance of the artificial class relative to the natural class; and applying the transformation vectors to the image to produce a transformed image.
- The step of selecting the second area of the image may be simplified by including the step of selecting the whole image.
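The computation described in the preceding steps can be sketched in a few lines of NumPy. The sketch below is illustrative only: the function names, the use of scipy.linalg.eigh to solve the resulting generalised eigenvalue problem, and the treatment of the background fraction as a user-supplied parameter are assumptions of this example, not details taken from the patent.

```python
import numpy as np
from scipy.linalg import eigh  # generalised symmetric eigenvalue solver


def class_stats(pixels):
    """Mean vector and covariance matrix of an (N, bands) pixel array."""
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)


def background_discriminant_transform(image, background_mask, background_fraction=0.9):
    """Transform a multispectral image so that artificial (non-background) class
    variance is maximised relative to natural (background) class variance.

    image:               (rows, cols, bands) multispectral array
    background_mask:     boolean mask over the user-selected natural training area
    background_fraction: assumed proportion of background pixels in the image
                         (the published procedure estimates this automatically)
    """
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)

    mean_bg, cov_bg = class_stats(pixels[background_mask.ravel()])  # natural class
    mean_all, cov_all = class_stats(pixels)                         # whole image

    # Derive artificial (non-background) class statistics from the mixture of
    # the background class and the whole image (see the relations given below).
    p = background_fraction
    mean_art = (mean_all - p * mean_bg) / (1.0 - p)
    between = np.outer(mean_bg - mean_art, mean_bg - mean_art)
    cov_art = (cov_all - p * cov_bg - p * (1.0 - p) * between) / (1.0 - p)

    # Transformation vectors: generalised eigenvectors of cov_art w.r.t. cov_bg,
    # ordered so the first new band has the largest artificial/natural variance ratio.
    ratios, vectors = eigh(cov_art, cov_bg)
    order = np.argsort(ratios)[::-1]
    transform = vectors[:, order]                 # one column per new band

    transformed = (pixels @ transform).reshape(rows, cols, bands)
    return transformed, ratios[order]
```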
- The method is preferably implemented on a system with known image display means which incorporate means for the selection of areas of an image. Typically, selection of an area is done by drawing a region on a screen under control of a cursor control device such as a mouse or trackball.
- The basis of the invention is the idea that an image of an area can be modelled as having two classes, namely a background class and a non-background class.
- The bulk of the image is background class and the objects to be enhanced are non-background class.
- The preferred method requires the user to identify the background class that needs to be suppressed in order to enhance the useful information.
- The background class is visually chosen by identifying a few training areas.
- The method requires mean vectors and covariance matrices of the background class and of the whole image.
- The procedure automatically computes the percentage coverage of the background class in the image and also computes the mean vector and covariance matrix for the non-background class.
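One way these quantities relate, stated here in standard mixture notation as a gloss on the cited paper rather than as text from the patent: if a fraction $p$ of the image belongs to the background class with mean $\mu_b$ and covariance $\Sigma_b$, and the whole image has mean $\mu_w$ and covariance $\Sigma_w$, then

$$\mu_w = p\,\mu_b + (1-p)\,\mu_n, \qquad
\Sigma_w = p\,\Sigma_b + (1-p)\,\Sigma_n + p(1-p)\,(\mu_b-\mu_n)(\mu_b-\mu_n)^{\mathsf T},$$

so the non-background mean $\mu_n$ and covariance $\Sigma_n$ follow by rearranging these relations once $p$ has been estimated.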
- The mathematical theory behind the invention is described in "Image enhancement using background discriminant transformation", published by the author in International Journal of Remote Sensing, 1991, Vol. 12, No. 10, pages 2153-2167, which is incorporated herein by reference.
- The natural objects are chosen to comprise the background class and the artificial objects are non-background.
- The coordinate axes are rotated so as to reduce the natural class variability and to increase the artificial class variability. In practical terms this means maximising the variance of the artificial class relative to that of the natural class.
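In conventional discriminant-analysis notation (an interpretive sketch, not the patent's own equations), this amounts to choosing transformation vectors $w$ that maximise the variance ratio

$$ J(w) = \frac{w^{\mathsf T}\,\Sigma_a\,w}{w^{\mathsf T}\,\Sigma_n\,w}, $$

where $\Sigma_a$ and $\Sigma_n$ are the artificial-class and natural-class covariance matrices. The stationary points satisfy the generalised eigenvalue problem $\Sigma_a w = \lambda\,\Sigma_n w$, and the eigenvectors taken in order of decreasing $\lambda$ define the new bands, from the most artificial-dominated to the most background-dominated.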
- The images are recorded from a multispectral sensor in the form of multiple data bands.
- The step of applying the transformation vectors to the image preferably involves applying the transformation vectors to the multiple data bands comprising the image to produce transformed data bands.
- The procedure preferably computes the same number of new bands (axes) as there are original bands in the multispectral image.
- The information content of artificial objects decreases, relative to the natural (background) class, as the sequence number of the axes increases. That is, in the first and second new bands the artificial objects dominate and in the last new band natural objects dominate. This applies where the original image has three or more bands.
- The step of displaying the transformed image on a monitor involves displaying transformed bands and includes the steps of: selecting the first transformed band, containing maximum artificial class information; selecting the last transformed band, containing maximum natural class information; selecting a second transformed band that has the second highest ratio of variances of artificial class information to natural class information; and applying the three selected transformed bands to the colour guns of the colour display means.
- The colour display means is preferably an RGB (Red, Green, Blue) monitor and the data of the three selected bands are applied to the three colour guns.
- The step of displaying the transformed image on a colour monitor involves displaying transformed bands and includes the steps of: selecting the first transformed band, containing maximum artificial class information, for the red colour gun of the colour display means; selecting the last transformed band, containing maximum natural class information, for the green colour gun of the colour display means; and selecting a second transformed band, having the second highest ratio of variances of artificial class information to natural class information, for the blue colour gun of the colour display means.
- Displaying the transformed data sets may include further processing such as stretching and inverting so as to achieve optimal display of the transformed image.
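A minimal sketch of this display step, assuming the transformed bands and their variance ratios come from a routine like the one shown earlier; the percentile stretch and the function names are illustrative choices of this example, not requirements of the method.

```python
import numpy as np


def stretch(band, low=2, high=98):
    """Linear percentile stretch of one band to the 0-255 display range."""
    lo, hi = np.percentile(band, [low, high])
    scaled = np.clip((band - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)


def compose_display(transformed, ratios):
    """Map transformed bands to colour guns: first band (maximum artificial
    information) to red, last band (maximum natural information) to green,
    and the second-highest-ratio band to blue."""
    order = np.argsort(ratios)[::-1]              # descending variance ratio
    red = stretch(transformed[..., order[0]])
    green = stretch(transformed[..., order[-1]])
    blue = stretch(transformed[..., order[1]])
    return np.dstack([red, green, blue])          # (rows, cols, 3) RGB image
```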
- One advantage of the preferred form of the present invention is that the background is chosen by the user depending on the application. This provides control over the type of enhancement the user requires.
- The method is scale invariant.
- The gain and offset of the sensor system do not affect the quality of the transformed image obtained from the method. This means that images of the same area taken by different instruments, or images taken of the same area at different times, can be merged without affecting the analysis significantly. Furthermore, data from dissimilar sensors can be fused.
- The invention also resides in an apparatus for the separation of artificial objects from natural objects by processing multispectral images in the form of multiple data bands containing objects in a natural class and objects in an artificial class, the apparatus including:
- image display means adapted to display colour images of a selected area;
- background selection means associated with the image display means and adapted to delineate selected sub-areas of the image comprising primarily the natural class;
- matrix calculation means adapted to calculate mean vectors and covariance matrices for selected images or sub-images and to calculate a mean vector and covariance matrix for the artificial class;
- transformation vector calculation means adapted to calculate transformation vectors by maximising the variance of the artificial class relative to the variance of the natural class; and
- transformed data band generating means adapted to generate transformed data bands from the transformation vectors and the original data bands, said transformed data bands being displayed on the image display means.
- The apparatus may consist of a number of purpose-built modules designed for fast matrix processing, such as array processors.
- Alternatively, the apparatus may be a general purpose computer programmed so as to perform each of the tasks of the individual means.
- The preferred method involves the application of three pre-computed filters (the transformation vectors discussed earlier) to an incoming stream of image data.
- The three filters feed data to the three colour guns of a colour display means.
- Each filter has a number of stored floating point coefficients and the number of coefficients is the same as the number of input bands of the image.
- Each filter multiplies each input band by its coefficient and then sums the products to produce a new synthetic band or image.
- In this way a new image is created for each filter. The images created by the first filter, the last filter and the second filter are displayed on a colour monitor in red, green and blue respectively. If the images are displayed in this way, artificial objects can be made to always appear in a reddish-pinkish colour and the natural objects will appear greenish or bluish. Because this process is not computationally intensive it can be implemented in hardware and operated in real time.
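As a rough sketch of why this is cheap enough for real-time hardware: applying the three filters is just three dot products per pixel, or one matrix product per frame. The function names and the (rows, cols, bands) layout are assumptions of this example.

```python
import numpy as np


def filter_pixel(pixel, red_filter, green_filter, blue_filter):
    """Apply three pre-computed filters to one multiband pixel: each input band
    is multiplied by its filter coefficient and the products are summed, giving
    one synthetic value per filter (routed to the red, green and blue guns)."""
    return (float(np.dot(red_filter, pixel)),
            float(np.dot(green_filter, pixel)),
            float(np.dot(blue_filter, pixel)))


def filter_image(image, filters):
    """Same operation over a whole (rows, cols, bands) frame; filters is a
    (bands, 3) array whose columns are the red, green and blue filters."""
    return image @ filters
```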
- The pixels of artificial objects in images enhanced by the present invention can be labelled by providing seed pixels for reddish-pinkish objects.
- The natural objects can be labelled by providing seed pixels in greenish-bluish areas.
- Well-known procedures such as ISODATA clustering programs can be used for this purpose.
- The boundaries of the objects can be drawn by any one of a number of image processing programs, a process called vectorising the image. The resulting vectors define the objects, which are then ready to be put into a GIS (Geographic Information System).
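A sketch of this seeded labelling, using k-means as a stand-in for an ISODATA clustering program (ISODATA itself is not available in the common Python libraries); the seed-handling logic and the names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans


def label_by_seeds(display_rgb, artificial_seeds, natural_seeds, n_clusters=8):
    """Cluster the enhanced RGB image and label whole clusters as artificial or
    natural according to the seed pixels that fall inside them.

    artificial_seeds: (row, col) seed pixels placed in reddish-pinkish objects
    natural_seeds:    (row, col) seed pixels placed in greenish-bluish areas
    """
    rows, cols, _ = display_rgb.shape
    pixels = display_rgb.reshape(-1, 3).astype(float)
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    clusters = clusters.reshape(rows, cols)

    artificial_ids = {clusters[r, c] for r, c in artificial_seeds}
    natural_ids = {clusters[r, c] for r, c in natural_seeds} - artificial_ids

    labels = np.zeros((rows, cols), dtype=np.uint8)   # 0 = unlabelled
    for cid in artificial_ids:
        labels[clusters == cid] = 1                   # 1 = artificial
    for cid in natural_ids:
        labels[clusters == cid] = 2                   # 2 = natural
    return labels
```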
- FIG 1 shows the typical spectral profiles of vegetation and artificial objects
- FIG 2 is a plot of mean spectral differences of natural and artificial objects
- FIG 3 shows a scenario for real time object separation and detection
- FIG 4 shows a comparison of a satellite image before and after BDT
- FIG 5 shows the result of extracting artificial objects from an aerial photographic image; it also shows the extraction of artificial objects as vectors for data entry into a GIS.
- The invention is based on the spectral characteristics of objects and the discovery that artificial and natural objects have different spectral characteristics.
- These different spectral characteristics can be exploited as a basis for object separation during image processing.
- Vegetation generally comprises the bulk of the natural component of images.
- A characteristic spectral profile of vegetation is shown in FIG 1 together with a characteristic spectral profile of artificial objects.
- The vegetation profile shows a slight decline in reflectance from the green to the red followed by a sharp rise from the red to the near infrared.
- The spectral profile of an artificial object displays a steady decline in reflectance as the wavelength increases. From FIG 1 it is evident that natural objects are good reflectors in the infrared and artificial objects are good absorbers.
- FIG 2 shows a plot of a variety of natural and artificial objects which are classified by differencing the infrared band with the red band and the green band.
- The equations used for this purpose are band differences of the infrared band with the red and green bands.
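Written out explicitly, and only as an illustrative assumption about the exact form used, such differences would be

$$ D_1 = \rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}, \qquad D_2 = \rho_{\mathrm{NIR}} - \rho_{\mathrm{green}}, $$

where $\rho$ denotes reflectance in the named band: vegetation, with its sharp near-infrared rise, gives large positive values of both differences, while artificial objects, whose reflectance declines with wavelength, give small or negative values.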
- The invention employs background discriminant transformation to suppress the dominance of the background in multispectral image space, thereby enhancing non-background objects.
- Multispectral images are obtained from sensors such as the Système Probatoire d'Observation de la Terre (SPOT) satellite and Landsat. The method assumes that images obtained from these sources consist of two main classes: background and non-background.
- The multispectral images are linearly transformed, with the linear transformation coefficients being computed to maximise the variance (information content) of the non-background objects relative to the background objects.
- FIG 3 depicts the implementation of the invention in a surveillance scenario.
- Images from one or more scanners are received at a ground station.
- The received images are in bands (channels) of data (a Daedalus scanner provides 11 bands of data).
- An operator selects three of the 11 bands for display on a colour monitor. The number of possible ordered assignments of three of the 11 bands to the three colour guns is 11 × 10 × 9 = 990, and it is impractical to scan every combination for the best object separation.
- Using a background discriminant transformation, three optimal bands for object separation are computed and displayed as linear combinations of all of the original 11 bands. Because the image transformation is not computationally intensive it can be done in real time.
- FIG 4a shows the original SPOT image and FIG 4b shows the enhanced image.
- The original image includes features such as the main harbour, a series of islands and some ships in the ocean outside the harbour. The presence of small ships is difficult to detect in the original image.
- The ocean was selected as the natural class for application of the invention, with everything else being considered as the artificial class. As can be seen in FIG 4b, a large number of ships are visible that were not detectable in the unenhanced image. A number of pontoons and piers that were not visible in the unenhanced image are also revealed.
- The invention can also be used on colour aerial photographs, as shown in FIG 5.
- In FIG 5a the red band of a colour photograph of an urban area is shown in black and white.
- The colour aerial photograph was transformed using this invention. All artificial objects appeared in a pinkish colour in the transformed image.
- Using vectorising software, boundary lines are drawn around the artificial (white) objects, as shown in black in FIG 5b. These lines are sent to a GIS as a line drawing. This exercise may be considered semiautomatic digitising of artificial objects.
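A minimal sketch of the vectorising step using OpenCV contour tracing; this is a generic stand-in for whichever vectorising software is used, and the thresholds and names are assumptions of the example.

```python
import cv2
import numpy as np


def vectorise_artificial_objects(labels, min_area=10):
    """Trace boundary lines around labelled artificial objects.

    labels: (rows, cols) array in which artificial pixels are marked 1
            (for example the output of the seeded labelling sketch above).
    Returns a list of polygons, each an (N, 2) array of (x, y) vertices,
    ready to be written out as line work for a GIS.
    """
    mask = (labels == 1).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c.reshape(-1, 2) for c in contours if cv2.contourArea(c) >= min_area]
```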
- Fragments of a FORTRAN implementation of the above computations:

```fortran
C     RMATRX is a 3D matrix containing covariance matrices of NCLASS classes.
C     VECTOR is a 2D matrix containing mean vectors.
C     SAMP is a vector containing sample sizes of each class.
      VECTO(I1)  = VECTOR(I1,2) - VECTOR(I1,1)
      TWEEN1     = PERC5*VECTO(I1)*VECTO(I2)
      TWEEN      = PERC0*VECTO(I1)*VECTO(I2) - TWEEN1
C     Store the non-background covariance matrix in AAA.
      CALL TRPOSE(NF,NF,SC1,1,AAA,XXX,IER)
      PERC1      = PERC1 - 1.0
      VECTM(IK)  = VECTM(IK) + SC2(IL,IK)*SM1(IL)
      SM(IK)     = SM(IK) + SC1(IL,IK)*SM1(IL)
      AMBDA(IJ)  = AMBDA(IJ) + VECTM(IL)*SM1(IL)
      AMBDAT(IJ) = AMBDAT(IJ) + SM(IL)*SM1(IL)
```
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
Abstract
A multispectral image processing system for separating artificial objects from natural objects by means of a background discriminant transformation. The processing consists of identifying in the image an object of the natural class (the background class), identifying an object of the artificial class, calculating a mean vector and the covariance matrix of the two areas, calculating the transformation vectors so as to maximise the covariance of the artificial class relative to the natural class, and applying the transformation vectors to the image to produce a transformed image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU28755/95A AU2875595A (en) | 1994-07-07 | 1995-07-07 | Object separation in multispectral image processing |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AUPM667594 | 1994-07-07 | ||
| AUPM6675 | 1994-07-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO1996002043A1 true WO1996002043A1 (fr) | 1996-01-25 |
Family
ID=3781243
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/AU1995/000412 WO1996002043A1 (fr) | Separation d'objet dans le traitement d'images multispectrales | 1994-07-07 | 1995-07-07 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO1996002043A1 (fr) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2002080097A1 (fr) * | 2001-03-28 | 2002-10-10 | Koninklijke Philips Electronics N.V. | Detection de zone herbacee basee sur une segmentation automatique pour video en temps reel |
| CN101320475B (zh) * | 2008-06-10 | 2010-12-29 | 北京航空航天大学 | 复杂背景条件下红外成像系统作用距离估算方法 |
| JP2016142633A (ja) * | 2015-02-02 | 2016-08-08 | 富士通株式会社 | 植物判別装置、植物判別方法及び植物判別用プログラム |
| CN113640445A (zh) * | 2021-08-11 | 2021-11-12 | 贵州中烟工业有限责任公司 | 基于图像处理的特征峰识别方法及计算设备、存储介质 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH01200357A (ja) * | 1988-02-05 | 1989-08-11 | Dainippon Printing Co Ltd | 自動切抜きシステム |
| JPH02224466A (ja) * | 1989-02-25 | 1990-09-06 | Minolta Camera Co Ltd | 画像処理装置 |
| JPH05135172A (ja) * | 1991-11-08 | 1993-06-01 | Olympus Optical Co Ltd | 画像処理装置 |
| JPH05342348A (ja) * | 1992-06-08 | 1993-12-24 | Tsubakimoto Chain Co | 色識別方法 |
- 1995-07-07 WO PCT/AU1995/000412 patent/WO1996002043A1/fr active Application Filing
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH01200357A (ja) * | 1988-02-05 | 1989-08-11 | Dainippon Printing Co Ltd | 自動切抜きシステム |
| JPH02224466A (ja) * | 1989-02-25 | 1990-09-06 | Minolta Camera Co Ltd | 画像処理装置 |
| JPH05135172A (ja) * | 1991-11-08 | 1993-06-01 | Olympus Optical Co Ltd | 画像処理装置 |
| JPH05342348A (ja) * | 1992-06-08 | 1993-12-24 | Tsubakimoto Chain Co | 色識別方法 |
Non-Patent Citations (6)
| Title |
|---|
| INTERNATIONAL JOURNAL OF REMOTE SENSING, 1991, Vol. 12, No. 10, pages 2153-2167, K.V. SHETTIGARA, "Image Enhancement Using Background Discriminant Transformation". * |
| JAPIO, JPAT ONLINE ABSTRACT, Accession No. 89-200357; & JP,A,01 200 357 (DAINIPPON PRINTING CO. LTD.), 11 August 1989. * |
| JAPIO, JPAT ONLINE ABSTRACT, Accession No. 90-224466; & JP,A,02 224 466 (MINOLTA CAMERA CO. LTD.) 6 September 1990. * |
| JAPIO, JPAT ONLINE ABSTRACT, Accession No. 93-135172; & JP,A,05 135 172 (OLYMPUS OPTICAL CO. LTD.), 1 June 1993. * |
| JAPIO, JPAT ONLINE ABSTRACT, Accession No. 93-342348; & JP,A,05 342 348 (TSUBAKIMOTO CHAIN CO.) 24 December 1993. * |
| PROCEEDING OF THE 1994 INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, Vol. 4, IEEE, PISCATAWAY, NJ, USA, pages 2372-2374, SMITH M. et al., "A New Approach to Quantitative Abundance of Materials in Multispectral Images". * |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2002080097A1 (fr) * | 2001-03-28 | 2002-10-10 | Koninklijke Philips Electronics N.V. | Detection de zone herbacee basee sur une segmentation automatique pour video en temps reel |
| CN101320475B (zh) * | 2008-06-10 | 2010-12-29 | 北京航空航天大学 | 复杂背景条件下红外成像系统作用距离估算方法 |
| JP2016142633A (ja) * | 2015-02-02 | 2016-08-08 | 富士通株式会社 | 植物判別装置、植物判別方法及び植物判別用プログラム |
| CN113640445A (zh) * | 2021-08-11 | 2021-11-12 | 贵州中烟工业有限责任公司 | 基于图像处理的特征峰识别方法及计算设备、存储介质 |
| CN113640445B (zh) * | 2021-08-11 | 2024-06-11 | 贵州中烟工业有限责任公司 | 基于图像处理的特征峰识别方法及计算设备、存储介质 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP1842153B1 (fr) | Differentiation des limites d'eclairement et de reflexion | |
| US7505608B2 (en) | Methods and apparatus for adaptive foreground background analysis | |
| JP4194025B2 (ja) | 照明不変の客体追跡方法及びこれを用いた映像編集装置 | |
| US8457437B2 (en) | System and method for enhancing registered images using edge overlays | |
| CN109034184B (zh) | 一种基于深度学习的均压环检测识别方法 | |
| US8879849B2 (en) | System and method for digital image signal compression using intrinsic images | |
| CN112686172A (zh) | 机场跑道异物检测方法、装置及存储介质 | |
| CN110555877B (zh) | 一种图像处理方法、装置及设备、可读介质 | |
| Bowles et al. | Real-time analysis of hyperspectral data sets using NRL's ORASIS algorithm | |
| King et al. | Development of a multispectral video system and its application in forestry | |
| Congalton | Remote sensing: an overview | |
| CN117350925A (zh) | 一种巡检图像红外可见光图像融合方法、装置及设备 | |
| Stow et al. | Potential of colour-infrared digital camera imagery for inventory and mapping of alien plant invasions in South African shrublands | |
| CN117456371B (zh) | 一种组串热斑检测方法、装置、设备及介质 | |
| WO1996002043A1 (fr) | Separation d'objet dans le traitement d'images multispectrales | |
| Gerylo et al. | Hierarchical image classification and extraction of forest species composition and crown closure from airborne multispectral images | |
| KR102570081B1 (ko) | 딥러닝 알고리즘을 이용하여 지문중첩영상에서 지문을 분리하는 방법 및 그 장치 | |
| Tien et al. | Swimming pool identification from digital sensor imagery using SVM | |
| JP3037495B2 (ja) | 物体像の抽出処理方法 | |
| JPH08272967A (ja) | 画像処理方法および装置 | |
| Toet | Colorizing grayscale intensified nightvision images | |
| Pan et al. | Polarization-enhanced GFNet for glint-free water surface object segmentation | |
| Dorise et al. | Explaining raw data complexity to improve satellite onboard processing | |
| Wibom | Camera-Based Sensor Model: Bridging the Domain Gap between Simulated and Real Images | |
| Hirose et al. | Acquisition of Color Reproduction Technique based on Deep Learning Using a Database of Color-converted Images in the Printing Industry |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AK | Designated states | Kind code of ref document: A1. Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LT LU LV MD MG MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TT UA UG US UZ VN |
| | AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): KE MW SD SZ UG AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG |
| | DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | REG | Reference to national code | Ref country code: DE. Ref legal event code: 8642 |
| | 122 | Ep: pct application non-entry in european phase | |