HK1085292B - Image fusion system and method - Google Patents

Image fusion system and method

Publication number: HK1085292B (application HK06105384.7A); other version: HK1085292A1
Authority: HK (Hong Kong)
Other languages: Chinese (zh)
Prior art keywords: image, sensor, contrast, sensors, regions
Inventor: Carlo Tiana
Original assignee: BAE Systems Aircraft Controls Inc.
Priority claimed from US 10/229,574 (US 6898331 B2)


Description

Image fusion system and method
Technical Field
The present invention relates to imaging systems and methods, and more particularly to an imaging system and method that selectively fuses or combines image regions from two or more sensors to form a single processed image.
Background
Image fusion generally refers to combining or merging two or more images, or portions of images, into a single processed image. Image fusion is typically used when a scene is imaged by two or more sensors or detectors, to combine the information provided by each sensor into a single image that is displayed to a user or provided to an automated processing system.
In one known method of combining images from different sensors, the pixel values of the two images are simply added together, pixel by pixel. Thus, for example, to obtain a two-dimensional (2-D) processed image having pixels arranged in an n x m matrix, where the position of each pixel in the processed image is identified by coordinates (x, y), the value or data of pixel (1, 1) of the first image is added to the value or data of pixel (1, 1) of the second image, the value or data of pixel (1, 2) of the first image is added to the value or data of pixel (1, 2) of the second image, and so on, until the values of pixel (n, m) in both images have been added. Another known system modifies this technique by calculating the average of each pair of pixel values instead of their sum, so that the final image contains averaged pixel values.
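As a minimal illustration only (not part of the claimed invention), the following sketch performs this prior-art pixel-by-pixel addition or averaging; the function name and the use of NumPy arrays are assumptions made for this example.

```python
import numpy as np

def naive_fuse(img_a: np.ndarray, img_b: np.ndarray, mode: str = "average") -> np.ndarray:
    """Combine two equally sized grayscale images pixel by pixel.

    mode="add" sums the pixel values (bright regions may saturate);
    mode="average" averages them (sharp regions may be washed out).
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    fused = a + b if mode == "add" else (a + b) / 2.0
    # Clipping back to the 8-bit display range; values driven to 255 here
    # correspond to the "over-exposed" saturation described below.
    return np.clip(fused, 0, 255).astype(np.uint8)
```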
However, these systems have a number of drawbacks. First, known image fusion techniques often introduce undesirable and unnecessary distortions. For example, if a portion of one image is sharp and intelligible to the user while the corresponding portion of the second image is blurred, adding or averaging the pixel values may degrade the sharp portion into a less sharp one. This undesirable effect results from mixing the blurred pixel values into the clear pixel values by addition or averaging. As another example, adding unwanted background regions to a bright image region may reduce the contrast and quality of the bright region. If both image regions have high luminance, or are bright, adding the two bright regions together may leave that part of the final image "over-exposed", or too bright, resulting in a saturated image. Finally, averaging two dim image regions may produce a dim result, further decreasing the brightness of an already dim image region.
Other known systems attempt to overcome these deficiencies using techniques that identify patterns in the images and form a fused image based on those patterns. Each source or original image is decomposed into multiple low-resolution images using filters with different bandwidths (e.g., based on a filter roll-off method or the Laplacian "pyramid" method). The pyramid approach uses different resolutions for different image features: coarse features are analyzed at low resolution and fine features at high resolution. However, these systems have the disadvantage of requiring a complete image from each sensor before construction of the pyramid can begin. This requirement typically introduces a time lag set by the slowest sensor. Such a time lag is unacceptable for sensors placed on a rapidly moving platform, such as an aircraft or other vehicle, or in systems intended to operate in real time.
Other known systems use a modified Laplacian method to decompose a source image into component images with saliency values or weights. A pattern is "salient" if it carries information needed to understand the image. The final image is then formed according to a "weighting" pattern. A disadvantage of these techniques, however, is that they generally involve analyzing and assigning saliency weights to every pixel or every image region, processing the entire image, and only then selecting a saliency pattern. As a result, excessive time is spent analyzing all of the image regions and their corresponding saliency values.
These disadvantages are even more problematic when known imaging systems are used in connection with time-sensitive activities, such as landing an aircraft or driving a tank. In these cases, a clear image must be produced quickly. Known techniques, however, generally cannot produce clear images within these limited times, or can do so only after the entire image is available for processing.
Therefore, there is a need for a method and system that can quickly and efficiently select the relevant, appropriate, and useful information from each source image and form a more informative or useful processed image in a time-efficient manner. It is further desirable that the image fusion technique be applicable to a variety of different detectors or image generators, to allow flexible use in different applications.
Disclosure of Invention
The present invention provides a method and system for selectively combining image regions produced by different sensors (referred to herein as sensor or source images) to form a processed or fused image using information about the sensor images. The method and system operate by dividing each sensor image into image regions and generating a map of contrast values for each image, for example by convolution. The contrast map of one sensor image is then compared with the corresponding contrast maps of the other sensor images. One of the compared values is selected according to a selection criterion, e.g., the larger or largest of the two or more compared values. The processed image is then formed using the image regions corresponding to the selected contrast values. According to the present invention, the images may be divided on a pixel-by-pixel basis, a pixel-group basis, or into arbitrarily shaped regions.
According to an aspect of the present invention, there is provided a method of processing images to form a scene in real time using a plurality of images, the method comprising: configuring at least two sensors, wherein the sensors have different detection capabilities for objects in a scene and generate corresponding images of the scene according to the different detection capabilities; dividing each image into a plurality of image regions; generating a contrast map for each image, each contrast map comprising contrast values for each image region; applying a selection process to the contrast values to select image regions for use in the processed image; and forming the processed image of the scene using the selected image regions.
According to another aspect of the present invention, there is provided a system for combining a plurality of images to form a final image of a scene in real time, comprising: a plurality of sensors, each sensor having a different detection capability for objects in a scene and generating a corresponding image of the scene according to that detection capability; and a processor configured to divide each image into a plurality of image regions, generate a contrast map for each image, each contrast map comprising contrast values for each image region, apply a selection criterion to the contrast values to select image regions, and form a processed image of the scene using the selected image regions.
Also in accordance with the present invention, each sensor detects a different wavelength.
Also in accordance with the present invention, images from different types, numbers, and combinations of sensors may be processed. Sensors that may be used include infrared (IR) sensors and radio frequency (RF) sensors (e.g., active sensors such as radar, or passive sensors such as radiometers).
Also in accordance with the present invention, image areas from multiple sensors are combined to form a processed image.
Further in accordance with the invention, the contrast maps from the images of the first and second sensors are combined to form an intermediate contrast map, which is then compared to the contrast map of the third image to form the processed image.
Further in accordance with the present invention, the image fusion method and system may be used in conjunction with the operation of a moving vehicle, such as an aircraft, ship, automobile, or train.
Further in accordance with the invention, the intensity of one or more image portions is adjusted across the processed image. One sensor is selected as a reference sensor and the average intensity of the regions of the reference sensor image is determined. The intensity of the same or corresponding or adjacent region in the processed image is adjusted by combining the determined average luminance value of the reference image and the intensity value of the processed image.
Also in accordance with the present invention, the method and system may filter portions of the sensor images before the comparison is performed.
Drawings
FIG. 1 is a schematic diagram of an embodiment of the system of the present invention, including a processor or computer, two sensors, and a display located in a moving vehicle such as an aircraft;
FIG. 2 is a flow chart illustrating the processing of an image generated by a sensor to form a processed or fused image;
FIG. 3 is a flow chart illustrating a method of comparing contrast values;
FIG. 4 is a flow chart illustrating a method of adjusting the intensity of a processed image;
FIGS. 5A-C are a set of black and white photographs showing corresponding images of a radar sensor and an Infrared (IR) sensor, respectively, and a processed image containing a selected region from the radar and IR images according to a selected process or criteria;
FIGS. 6A-F illustrate the division of an image into different image regions, including regions defined on a pixel-by-pixel basis, a pixel-group basis, or arbitrarily;
FIGS. 7A-B are black and white photographs illustrating the contrast map generated for each image;
FIGS. 8A-B are black and white photographs illustrating contrast values selected from the contrast map of FIGS. 7A-B according to a selection criterion;
FIG. 9 is a flow chart illustrating processing of multiple images by comparing contrast values of all images to form a processed or fused image;
FIG. 10 is a flow chart illustrating processing of multiple images by performing multiple comparisons of contrast values to form a processed or fused image;
FIGS. 11A-B are black and white photographs illustrating a processed or fused image before and after brightness correction;
FIGS. 12A-B are black and white photographs showing a spatial filter;
FIGS. 13A-B are graphs showing filter weighting profiles for the radar and IR sensors, respectively;
FIGS. 14A-F are black and white photographs illustrating radar and IR images, filter functions, and the filter functions applied to the radar and IR images; and
FIGS. 15A-E are black and white photographs illustrating weighting functions with and without the effect of aircraft roll.
Detailed Description
In the following description of the embodiments of the present invention, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
Referring to fig. 1, which is a view of a cockpit in an aircraft, there is shown a system S of the present invention having sensors 100, 102, as well as a processor 110 and a display 120. The sensors 100, 102 provide corresponding image data or image streams 104, 106 (i.e., sensor or source images) to a processor 110, such as a computer, microcontroller, or other control element or system. The sensors may detect the same, overlapping, or different wavelengths. In addition, the sensors can also detect the same field of view, or overlapping fields of view.
The processor 110 is programmed to selectively combine the regions from each image into a processed or fused image 115. More specifically, the processor 110 compares regions of the images 104, 106 and selects image regions according to a selection criterion, e.g., according to a contrast value representing a significant difference in brightness between light and dark of the sensor image. The processor may be programmed to consider different selection criteria including, but not limited to, a larger or maximum contrast value for each comparison. Thus, the processing system is essentially capable of extracting a desired region, or a region selected according to selection criteria, from one, more or all images. The selected regions are stitched together to form a fused image 115 (much like a puzzle formed with multiple puzzle pieces, except that each puzzle piece may be selected from multiple sources). A "puzzle piece" or image region can be from a single image, some images, or all images. The fused image 115 is then provided to the driver or user via the visual display 120. The fused image may also be provided to an image processor or computer for further processing.
Although fig. 1 illustrates the use of the system S in an aircraft, it will be appreciated by those skilled in the art that the system may be applied to many other vehicles and used in a variety of applications as will be described.
The technique of fusing or selectively combining the various portions of the images 104, 106 into one processed image 115 is illustrated in the flow diagrams of FIGS. 2-4. As shown in FIG. 2, at step 200, each sensor generates an image and provides the image data to the processor. At step 202, image regions may be filtered, if desired, to exclude them from processing or from the processed image, or to reduce their contribution to the processed image. At step 204, the contrast values of corresponding regions of each sensor image are compared. At step 206, a selection criterion is used to select or identify a particular contrast value. In one embodiment of system S, the selection criterion may be to select or identify the larger or largest contrast value; however, the selection criterion, specification, or process may differ in other embodiments of the system S, depending on how the system is applied. At step 208, the image region corresponding to the selected or identified contrast value is identified or selected. At step 210, the selected image regions are combined, i.e., effectively "stitched together", to form a fused or processed image. Then, at step 212, the brightness of the processed or fused image is adjusted or corrected, if necessary, to produce a clearer image.
FIG. 3 further illustrates step 204, the process of comparing contrast values. In step 300, each sensor image is divided into a plurality of image regions. Then, at step 302, a contrast map is generated for each sensor image; each contrast map includes a contrast value for each defined image region. At step 304, the contrast value of an image region of one sensor image is compared with the contrast value of the corresponding image region of the other sensor image. Corresponding image regions, as used herein, are regions of sensor images that at least partially overlap. For example, if the field of view of one sensor image includes an airport runway and the other sensor image includes the same runway, the two sensor images overlap. If the fields of view of the two sensor images are the same (or nearly the same), the two images are considered to have 100% overlap (as is assumed in the examples that follow).
Turning now to FIG. 4, step 212, adjusting the brightness of the fused image, is illustrated in further detail. In step 400, one sensor is selected as the reference sensor, i.e., the sensor whose brightness distribution is to be matched. Then, at step 402, the average brightness of an image region of the reference sensor image (e.g., a horizontal line or strip across the image) is determined. Next, at step 404, the brightness of one or more regions of the fused or processed image is adjusted by combining the determined average brightness value with the brightness values of the fused image to form a brightness-corrected fused image. The brightness adjustment may be applied to the same region, to adjacent regions, or to subsequent regions. For example, the adjustment may be applied to the same line for which the average was determined, to an adjacent or subsequent line 406, or to an adjacent or subsequent region or line 408 in the fused image.
Those skilled in the art will appreciate that the image fusion method and system may be used in many different environments and applications for processing multiple images. For example, in addition to aircraft (e.g., airplanes, jets, helicopters, etc.), the method and system may also be implemented in other moving vehicles such as boats, cars, or trains. In addition, the image fusion method and system can be used to display images from medical instruments (using sensors such as ultrasound, infrared, laser imaging, or tomography) and monitoring systems. Indeed, many applications may benefit from selective fusion of image regions to form a processed or fused image that includes pertinent information or selective information from each sensed image.
However, for purposes of illustration, this description will be primarily directed to images relating to aircraft. Such images may relate to landing, taxiing, takeoff, or cruising of an aircraft, and to applications intended to prevent controlled flight into terrain (CFIT). As one specific example of how the system can be used in aircraft applications, the present description is directed to processing images produced by radar and IR sensors. However, as will be explained below, many different types, numbers, and combinations of sensors and sensor images may be processed. Thus, the example systems and methods described in this specification can be used in many different application domains.
Image and sensor
Turning now to fig. 5A-C, the sensors 100, 102 generate corresponding images 104, 106, such as the images 500, 510 shown in fig. 5A-B. Selected regions of one or both images are used, that is, effectively joined or stitched together, to form a fused or processed image 115, such as the fused image 520 shown in fig. 5C. Depending on the content of the source images, it may be desirable to further process the fused image, for example, as will be described later in connection with FIGS. 11A-B.
More specifically, FIG. 5A shows an image 500 of a runway produced by an infrared (IR) sensor. The IR sensor may operate in a variety of different IR wavelength ranges, for example, 0.8-2 μm, 3-5 μm, 8-12 μm, or combinations and extensions thereof. One example of an IR sensor that may be used is manufactured by the Infrared Imaging Systems division of BAE SYSTEMS of Lexington, Massachusetts. FIG. 5B shows an image 510 of the same, or approximately the same, runway scene generated by a radar sensor. The radar sensor may be an X-band, K-band, Ka-band, or other radar sensor. One radar sensor suitable for use with the present invention is manufactured by the Aircraft Controls division of BAE SYSTEMS of Santa Monica, California.
In this case, the IR sensor and the radar sensor generally provide the same or overlapping fields of view, although one sensor may detect certain objects or conditions in the scene better than the other. It will be appreciated by those skilled in the art that the system and method can be applied to images having different degrees of overlap or different fields of view, as will be described later. Further, while the illustrated embodiment provides one specific example of a system including radar and IR sensors and images, different types, numbers, and combinations of sensors and images may be used. For example, the system may also use an ultraviolet (UV) sensor, one example of which is manufactured by Pulnix America of Sunnyvale, California. Further, one of the sensors may be based on an active or passive radio frequency (RF) system, such as an imaging radar or a radiometer, which can operate in a variety of RF bands, including but not limited to 10, 35, 76, 94, and 220 GHz. One example of such a sensor is manufactured by TRW of Redondo Beach, California. As yet another example, the sensor may be an ultrasonic sensor, such as those used in medical imaging applications manufactured by the General Electric Medical Systems division of Waukesha, Wisconsin. The sensor may also be a visible-band sensor, such as a low-light visible-band sensor manufactured by Panasonic of Secaucus, New Jersey, a charge coupled device (CCD), or a color or grayscale camera that may use natural or artificial illumination.
Further, the image fusion system may be configured to process images from multiple sensors, e.g., three, four, or other numbers of sensors. One possible combination of sensors includes two IR sensors and one radar sensor. The images from all sensors can be combined for processing and optionally combined into one processed image. For example, images A, B, and C may be selectively combined into a processed or fused image D. Alternatively, two sensor images may be processed and the results processed with a third sensor image to form a processed or fused image, or a contrast map represented by the image. For example, images A and B may be combined into image C or an intermediate contrast map C, and then C is selectively combined with image D or contrast map D to form a fused image E or a further intermediate contrast map, and so on, until all images have been processed to form a fused image. Indeed, different combinations of different numbers of sensor images may be processed through different repeated comparisons, as desired or needed.
The choice of sensor type depends on the conditions and circumstances in which the sensor is used. As discussed previously, one type of sensor may be better suited for one environment, while another type of sensor may be better suited for a different environment. More specifically, some types of sensors may provide clearer images depending on whether the environment is daytime, nighttime, foggy, rainy, etc., and whether the image is far or near. For example, radar sensors may generally provide better images than IR sensors in foggy conditions, but may lack the photo-like quality of IR sensors.
Comparing contrast values of image regions
The contrast values of the image regions are compared by dividing the image into regions, generating a contrast map based on the defined regions, and comparing the corresponding contrast map values using a selection criterion or criteria (step 204). The comparison is based on images that are aligned or pre-aligned, or images that are arranged to allow comparison of the relevant image regions. Thus, if non-overlapping images are processed, they are pre-aligned or aligned so that the regions of interest can be compared as described in further detail below. The comparison value is then selected (step 206), for example, based on a selection criterion that selects a larger or maximum comparison value. Other selection criteria may also be used, such as duration, brightness, color, and so forth.
Dividing an image into regions
Initially, each sensor image is divided into image regions as shown in FIGS. 6A-F. The image may be divided on a pixel-by-pixel basis 600a-b, 601a-b (FIGS. 6A-B), or according to pixel groups 602a-b, 604a-b (FIGS. 6C-D). A pixel or group of pixels may be "black or white" to represent a monochrome image, with different shades of gray representing different intensity levels. A pixel or group of pixels may also have red, green, and blue components that together form part of a color image. Further, the image regions may be defined as arbitrarily shaped regions or boundaries 606a-b, 608a-b, 610a-b, 612a-b (FIGS. 6E-F). As a result, each image region in one sensor image can be compared with the corresponding image region in the other sensor image. For example, referring to FIGS. 6A-B, region 600a (x1 = 1, y1 = 12) may be compared with region 600b (x2 = 1, y2 = 12), and region 601a (x1 = 17, y1 = 10) may be compared with region 601b (x2 = 17, y2 = 10).
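For illustration only, the following sketch divides an image into square pixel-group regions; the 8 x 8 tile size, the function name, and the use of NumPy are assumptions for this example rather than values taken from the description.

```python
import numpy as np

def split_into_tiles(image: np.ndarray, tile: int = 8) -> np.ndarray:
    """Divide a 2-D grayscale image into non-overlapping tile x tile regions.

    Returns an array of shape (rows, cols, tile, tile), so that
    tiles[r, c] is the image region at grid position (r, c).
    """
    h, w = image.shape
    h_crop, w_crop = h - h % tile, w - w % tile   # drop partial border tiles
    cropped = image[:h_crop, :w_crop]
    return (cropped
            .reshape(h_crop // tile, tile, w_crop // tile, tile)
            .swapaxes(1, 2))
```

Pixel-by-pixel division corresponds to tile = 1; arbitrarily shaped regions would instead be described by a label mask.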
For ease of illustration, the relevant example image regions shown in fig. 5A-B and 6A-F include the same or substantially the same image with generally aligned or pre-aligned image regions, e.g., aligned or pre-aligned pixels, groups of pixels, or arbitrarily shaped regions. That is, fig. 5A-B show images that are overlapping (100% overlap), or images with a high degree of overlap (nearly identical sensor images). As a result, the image areas in FIGS. 6A-F are aligned with one another in a series of corresponding images. Thus, regardless of how the sensor image is divided into image regions, objects (e.g., trees 607) are always present in the same image region of both sensor images at nearly the same relative position within the sensor image.
However, one skilled in the art will appreciate that the present system and method may be used with different numbers, types, and combinations of sensor images having different degrees of overlap depending on the location, position, field of view, and detection capabilities of the sensors. Where different degrees of overlap are involved, the image regions may be aligned or pre-aligned so that the comparison can be performed.
For example, the sensors may be located in close proximity together (e.g., near the front or bottom of the aircraft) to detect substantially identical images, such as the runway scene shown in fig. 5A-B. As a result, as shown in FIGS. 6A-F, image regions in the same or similar images are aligned with one another in a corresponding manner. In those cases where the images are generally considered to be the same with the same boundaries and field of view, the image areas to which the selection process or criteria is applied (or image areas that "compete" for selection and use in forming the processed image) may be considered all of the aligned image areas in fig. 6A-F.
As a further example, one sensor may detect the first image, while a different sensor may detect an image that is nearly the same as the first image, in addition to detecting additional scene factors. This may occur, for example, when the sensors are at locations remote from each other, or at locations with different fields of view. In this case, the selection process may be applied to some or all of the overlapping regions. The image area is processed by using a selection process or criteria such as a comparison. The competing regions are compared and the image regions are selected to form a processed or fused image. Non-overlapping or non-competing image regions may be processed in different ways, for example, depending on the source and quality of the fused or processed image, the sensor type, and the user and system requirements. For example, a non-overlapping image may be added to the processed image as a fill scene or background. Alternatively, non-overlapping images may be discarded or excluded from processing or fusing the images. In some cases, depending on the particular system and application, the overlap region may not be processed.
Thus, the present method and system can be used for images with different degrees of overlap, as well as for image regions with different degrees of alignment. Overlay and alignment variations may come from sensors with different detection capabilities and locations. However, for ease of illustration, the present specification and drawings relate to and show images with a high degree of overlap of corresponding image regions in alignment. As a result, most or all of the image areas are competing image areas and are processed with selection criteria. However, the present method and system can be configured to handle other image region configurations having different degrees of overlap, alignment, and correspondence.
Generating contrast maps
As shown in FIGS. 7A-B, contrast maps 700, 710 are generated for the corresponding radar and IR images. Each contrast map includes a contrast value for each defined image region within the map. Continuing with the example of radar and IR sensors, FIG. 7A shows a contrast map 700 of the radar image, including a contrast value for each image region into which the radar image was divided. Similarly, FIG. 7B shows a contrast map 710 of the IR image, including a contrast value for each image region into which the IR image was divided. According to the present invention, the contrast maps 700 and 710 can have any number of image regions; preferably the two maps have the same number of regions, with corresponding regions aligned, since the radar and IR sensors in this example provide 100% overlapping images.
For this radar map example, the contrast values in the top and bottom portions 702, 706 of the image/map are relatively low values, and the contrast value in the middle portion 704 is a relatively high value. For the IR mapping example, the contrast value in the middle portion 714 is a relatively low value, while the contrast values in the top and bottom portions 712, 716 are relatively high values.
According to the present invention, the contrast map, including the contrast value for each image region, is generated, for example, by convolution with an appropriate kernel. An example of a convolution that can be used employs a two-dimensional (3 x 3) normalized convolution kernel Kc:
Kc*S1(x,y), Kc*S2(x,y)
wherein
* represents convolution;
x, y are the spatial coordinates within the image, ranging from 0 to the image width (w) and height (h), respectively;
S1 is the first sensor image, e.g., a stream of mmW radar images; and
S2 is the second sensor image, e.g., a stream of IR images, assumed to be spatially pre-aligned or aligned with the first (radar) image.
An example kernel Kc contains values that reflect each element's distance from the center of the kernel. As a result of the convolution, a contrast map containing the image region contrast values is generated for each image.
The processor may perform the convolution in software written in the C language or another programming language, or in dedicated hardware. Real-time implementation of the convolution can be achieved using digital signal processors (DSPs), field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or other hardware-based devices.
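As an illustrative sketch of this step only: the code below computes a contrast map by convolving a sensor image with a small kernel. The specific 3 x 3 high-pass kernel and the use of the absolute response as the contrast value are assumptions chosen for the example; the description states only that the kernel Kc reflects distance from its center and does not give its values.

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed 3x3 normalized kernel whose weights depend on distance from the
# center; the actual kernel Kc used in the description is not specified.
KC = np.array([[-1, -1, -1],
               [-1,  8, -1],
               [-1, -1, -1]], dtype=np.float32) / 8.0

def contrast_map(sensor_image: np.ndarray) -> np.ndarray:
    """Return a per-pixel contrast map, Kc * S(x, y), for one sensor image."""
    response = convolve(sensor_image.astype(np.float32), KC, mode="nearest")
    return np.abs(response)  # magnitude of local brightness variation
```

For pixel-group or arbitrarily shaped regions, the per-pixel values would be aggregated (for example, averaged) over each region before comparison.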
Selection of contrast values
FIGS. 8A-B illustrate the pixel values used in forming the processed image, selected by applying the selection criterion to the contrast values of the contrast maps of FIGS. 7A-B. In this example, the selection criterion selects the larger of the two contrast values for each image region of the radar image and the corresponding image region of the IR image. FIG. 8A shows the pixel values whose (radar) contrast values were selected from FIG. 7A; as noted above, these generally lie in the middle portion 800 of the radar image. Likewise, operating system S under the same selection criterion, FIG. 8B shows the pixel values whose (IR) contrast values were selected from FIG. 7B; as described above, these generally lie in the top 810 and bottom 820 portions of the IR image.
The image region associated with the selected contrast value is selected from each image and then combined (or "stitched") with other such selected image regions to form a processed or fused image, such as the fused image shown in fig. 5C. Thus, in this example, the criteria for selecting an image region based on the maximum contrast value may be expressed as follows:
Fmax-con(x,y)=max{Kc*S1(x,y),Kc*S2(x,y)}
where the "maximum criterion" operation is performed on a per-sector basis, for example, on a per-pixel or per-arbitrarily shaped sector basis. Thus, the selection of the image region according to the greatest contrast essentially serves as the pixel value for generating a fused image comprising a combination or superset of image regions from different images. As a result of the selection process, the image regions may all be selected from a single image, or from multiple images, depending on the content and contrast value of the images. Some sensor images may not provide any image area to the fused or processed image. For example, if all contrast values of the first image are determined or selected from themselves, then the processed or fused image is the same as the first image. As another example, if the contrast value is selected from the second and third images, but not from the first image, then the fused image includes regions from the second and third images, but no region of the first image. Thus, in the processed image having image areas A, B and C, image area A is from sensor image 1, image area B is from sensor image 2, and image area C is from sensor image 3.
The above example, involving two applications of the convolution operation, results in two contrast maps. Other numbers and combinations of convolutions may be performed to produce multiple contrast maps for use with multiple comparisons or multiple sensor images. For example, referring to FIG. 9, images 900-902 are generated by corresponding sensors. A convolution with an appropriate kernel 910 is applied to each image to generate the corresponding contrast maps:
Kc*S1(x,y),Kc*S2(x,y),Kc*S3(x,y)
wherein the third sensor image S3 is also produced by an IR sensor. One skilled in the art will appreciate that different kernels may be used for the same or different sensors. Thus, for example, a process involving three convolutions may use three different convolution kernels.
The corresponding contrast values for the three images are then compared 930 and the contrast value is selected 940 according to a selection criterion. The selected image region 945 corresponds to the selected contrast value. The selection of an image region according to the maximum contrast selection criteria may be expressed as follows:
Fmax-con(x,y)=max{Kc*S1(x,y),Kc*S2(x,y),Kc*S3(x,y)}
Selected regions from one or more of the sensor images are then stitched into a processed or fused image 950. Thus, in this example, all corresponding contrast values are compared together (the three contrast values are compared simultaneously) to select the image region with the largest contrast value.
In an alternative embodiment, the convolution may be performed repeatedly to produce corresponding contrast maps, whose values are compared iteratively to ultimately form the processed image. For example, referring to FIG. 10, contrast maps 920-922 are generated for the corresponding sensor images as described above. However, rather than comparing all of the corresponding values of the contrast maps together, iterative comparisons of contrast maps may be performed using different contrast-selection kernels.
Thus, for example, performing a comparison 1000 of the contrast values in the contrast maps 920 and 921 results in selecting a set of contrast values 1010 based on, for example, a larger or maximum contrast value. The selected contrast values are selectively combined to form an intermediate image or contrast map 1030.
The contrast values in the intermediate contrast map 1030 may then be compared 1040 with the contrast values of the contrast map 922 of the third image 902. Contrast values 1050 are selected or determined, and the image regions 1055 corresponding to the selected contrast values are selected. The selected regions form a processed or fused image 1060. Those skilled in the art will appreciate that different numbers of iterations or comparisons may be performed for different numbers of contrast maps, using the same or different convolution kernels. Thus, the present image processing system and method provide flexible, convolution-based image fusion suitable for different applications.
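A minimal sketch of this iterative variant, under the same assumptions as the earlier sketches (the helper contrast_map and the use of a single kernel for every sensor are illustrative choices, not requirements of the described method):

```python
import numpy as np

def fuse_iterative(images: list[np.ndarray]) -> np.ndarray:
    """Fuse any number of pre-aligned sensor images by repeated pairwise
    maximum-contrast comparisons, carrying an intermediate result forward."""
    fused = images[0]
    fused_contrast = contrast_map(fused)          # from the earlier sketch
    for img in images[1:]:
        c = contrast_map(img)
        take_new = c > fused_contrast             # selection criterion
        fused = np.where(take_new, img, fused)    # stitch the winning regions
        fused_contrast = np.where(take_new, c, fused_contrast)
    return fused
```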
Correcting luminance of fused image
The brightness of the fused image may be corrected or adjusted, if desired, depending on the type of sensor used and the quality of the sensor image and the fused image produced. Luminance correction is particularly useful when the fused image is not clear enough for the pilot.
In the example involving radar and IR images, significant artifacts are present in the fused image, as shown in FIG. 5C. These artifacts result from inconsistent intensities among the selected regions, which produce brightness discontinuities in the fused image. In this example, the high-contrast regions selected from the radar image (the central horizontal band) are generally darker than the high-contrast regions selected from the IR image, so the luminance distribution of the resulting processed or fused image alternates between the luminance characteristics of the two input sensors. For example, a dark band is typically selected from the radar image across the center of the image, where the radar image has higher contrast than the IR image but lower brightness. This reduces the overall clarity of the fused image.
The intensity distribution within the fused image may be adjusted to produce a clearer fused image. The brightness adjustment is performed by determining an average brightness value within an image region of the reference sensor image and adjusting the intensity of the corresponding region of the fused image using the determined value. In the example images of FIGS. 5A and 5B, the brightness adjustment technique relies on the observation that brightness varies with vertical position in the sensor image (e.g., from the sky, through the horizon, to the near field), but does not vary in a predictable way along any horizontal section (e.g., across the image at any particular elevation angle).
Reference sensor
Luminance correction can be performed by selecting one sensor as the reference sensor and adjusting the luminance of the fused image to match or approximate the luminance distribution of the reference sensor. The reference sensor may be selected arbitrarily or according to the intended application of the sensor in a particular situation. For example, radar sensors are generally capable of providing more image detail than IR sensors in low visibility conditions, while IR sensors can provide more natural or photo-like images, at least at closer range.
For ease of illustration, the present description uses the IR sensor image I(x, y) as the reference for the luminance distribution, in order to preserve the natural appearance of that sensor's image.
Determining average brightness
Adjusting the brightness involves determining the average brightness of the reference sensor image within a particular image area, e.g., a strip of each image parallel to the scene horizon. The scene horizon refers to the "actual" real-world horizon. During aircraft roll, bank, or other motions, the scene horizon may be at an angle relative to the horizontal rows of the image.
The average luminance of each such strip of the reference sensor image is determined. The obtained luminance value is then added to each corresponding strip of the fused image to adjust the brightness of the fused image. In addition, the brightness contribution may be weighted, if necessary, to obtain a particular brightness adjustment effect. The weight λ may be used to reduce the effect of the luminance compensation, although it has been found that λ = 1 provides a sufficiently clear adjustment of the fused image in most cases.
Therefore, the brightness adjustment of the fused image can be expressed as follows:
FLC(x,y) = F(x,y) + (λ/w)·Σx′=0..w I(x′,y)
wherein F(x, y) is the luminance value of the fused image;
I(x′, y) is the luminance value of the reference sensor image;
λ is the weighting factor allowing different degrees of brightness adjustment;
w is the width of the image, from x = 0 to x = w; and
FLC(x, y) is the luminance-compensated fused image.
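A sketch of this row-wise luminance compensation, assuming the fused and reference images are aligned 8-bit grayscale arrays, λ = 1 by default, and the illustrative function name below (none of these are specified in the text):

```python
import numpy as np

def brightness_correct(fused: np.ndarray, reference: np.ndarray,
                       lam: float = 1.0) -> np.ndarray:
    """Add the average luminance of each reference-image row to the
    corresponding row of the fused image: FLC = F + lam * (row mean of I)."""
    row_means = reference.astype(np.float32).mean(axis=1, keepdims=True)
    corrected = fused.astype(np.float32) + lam * row_means
    # For efficiency, an implementation could instead apply each row's mean
    # to the next row of the fused image, since the average changes little
    # between adjacent lines (see the discussion below).
    return np.clip(corrected, 0, 255).astype(np.uint8)
```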
Those skilled in the art will appreciate that the reference sensor image may be sampled along cross-sections other than horizontal ones, and using sampling segments other than full-width strips across the image. The cross-sections and sampling segments may be selected based on a variety of factors, including the type of sensor, the sensor image, the orientation of the image, and the system or method used. For ease of explanation, however, this description refers to sampling strip sections of the reference sensor image and correcting the corresponding strips of the processed image.
One example of applying the brightness adjustment is shown in FIGS. 11A-B. The runway scene depicted in the fused image 1100, prior to brightness correction, includes a number of artifacts that distort the processed or fused image. As a result, the runway scene is somewhat less clear, particularly in the middle portion of the image. Image 1110 shows the same image after brightness correction, with the IR sensor selected as the reference sensor.
As can be seen by comparing images 1100 (before brightness correction) and 1110 (after brightness correction), the brightness-compensated image exhibits smaller abrupt brightness changes with elevation, changes which would otherwise produce a noisy-looking image. The result is a clearer processed or fused image.
The brightness correction of the fused image may be performed by correcting different bands or regions of the fused image. For example, the average luminance of the reference sensor is determined for one image line or strip in the reference sensor image. The average luminance value determined from the reference sensor image is processed, such as by the luminance adjustment method described previously, and added to each pixel in the corresponding fused image line or strip.
In an alternative embodiment, processing efficiency may be improved by using the average or determined luminance value from a line in the reference sensor image, and applying it as a correction to a line in the processed or fused image that is adjacent to the line in the fused image that corresponds to the determined line in the reference sensor image (e.g., the line immediately above or below the corresponding determined line). It is generally acceptable to apply the luminance values to subsequent lines, since the average value between successive image lines generally does not vary substantially. This technique can also be used to adjust lines immediately above or below the subject line, or to adjust lines that are separated from the reference line according to brightness variations.
Luminance correction is also applicable when the scene horizon is not parallel to the image horizontal, for example, when the aircraft rolls or banks to one side. In this case, the scene horizon angle and the aircraft altitude are generally known from the aircraft's onboard sensors. The luminance correction may be calculated from the reference sensor and stored as a two-dimensional look-up table, whose values are applied to the fused image on a pixel-by-pixel basis. To minimize latency and processing time, if sufficient memory is available for a full-image look-up table, the table values calculated from the previous frame may be applied to the current frame. The memory requirement is approximately equal to the size of an image frame, or another size depending on the details of the images produced by each sensor, e.g., 320 x 240 bytes for a 320 x 240 image at 8 bits per pixel.
Spatial pre-filtering of sensor images
Regions or portions of the sensor images may also be filtered to simplify the comparison of contrast values and the application of the selection criteria. Filtered regions can be weighted by a value less than 1 to reduce their contribution to the fused image, or by zero to exclude their contribution entirely, thereby simplifying processing and reducing processing time.
The image regions that can be filtered include image portions that contribute little to the fused image, e.g., regions above the radar horizon in the case of a radar sensor. If a radar sensor is used, there is generally no useful information above the radar horizon (i.e., beyond the detection limit of the radar sensor) and little information in the near field (at least at higher altitudes). IR sensors are generally most effective at shorter range (the near field), especially in weather conditions in which the sensor cannot detect the far field because it cannot penetrate obstacles such as rain or fog. Thus, with radar and IR sensors, the radar image regions above the radar horizon and in the near field can be removed by pre-filtering, and the IR image regions in the far field can likewise be removed by pre-filtering. Other fields and regions may be removed by filtering, depending on the sensors, the images produced, and the needs of the user or system.
FIGS. 12A-B show general spatial filters. FIG. 12A shows a filter for an image produced by a radar sensor. More specifically, the filter removes the information for which the radar sensor is least effective, i.e., the information above the radar horizon 1200 and in the near field 1204, while allowing the remaining radar sensor information 1202 to pass through and be included in the contrast map. The filtered-out data is represented by the dark regions 1200, 1204. Likewise, in FIG. 12B, the filter removes the information for which the IR sensor is least effective, i.e., the far field 1212, while allowing the remaining information 1210 and 1214 to pass through and be included in the contrast map. Although FIGS. 12A-B illustrate complementary filters, those skilled in the art will appreciate that this is not always the case for other sensor/image combinations; different sensors may require different filter functions.
One technique for filtering image regions is to select spatially dependent alpha and beta weighting functions. Continuing with the example of radar and IR images, the weighting functions may be selected to emphasize the contribution of the radar image in areas where the radar signal is strongest, and to emphasize the IR signal elsewhere.
The weighting functions may be implemented by spatial filtering or other smooth functions that do not introduce unnecessary artifacts, such as a one-dimensional Gaussian weighting function of the general form a·exp(−(y − y0)²/b²), wherein:
aM and a1 determine the maximum amplitude of the Gaussian function (usually 1, but other values may be used to emphasize one sensor, or to compensate for the blanking level values PM and P1);
bM and b1 determine the width of the Gaussian function, i.e., the region where the corresponding sensor is most useful or where its information is concentrated; and
y0 shifts the center of the Gaussian function vertically up or down in the image as needed.
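A sketch of such a vertical Gaussian pre-filter is shown below. The exact parameterization, the function names, and the 240-row image height are assumptions for illustration; the amplitude and width values echo the example profiles discussed with FIGS. 13A-B (100% and 25%-100% transparency, a standard deviation of about 50 pixels).

```python
import numpy as np

def gaussian_column_weight(height: int, amplitude: float, width: float,
                           center: float, floor: float = 0.0) -> np.ndarray:
    """Build a 1-D vertical weighting profile (one weight per image row)."""
    y = np.arange(height, dtype=np.float32)
    return floor + amplitude * np.exp(-((y - center) ** 2) / (width ** 2))

def prefilter(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weight each row of an image; rows weighted 0 are excluded from the
    later contrast comparison, rows weighted < 1 contribute less to it."""
    return image.astype(np.float32) * weights[:, np.newaxis]

# Illustrative profiles for a 240-row image: emphasize the mid-field band of
# the radar image and the complementary top/bottom bands of the IR image.
radar_w = gaussian_column_weight(height=240, amplitude=1.0, width=50.0, center=120.0)
ir_w = 1.0 - 0.75 * gaussian_column_weight(height=240, amplitude=1.0,
                                           width=50.0, center=120.0)
```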
FIGS. 13A-B show a more detailed example of such weighting functions: graphs 1300, 1310 of example filter transparency profiles for the radar and IR sensors, respectively. In the graphs 1300, 1310, the horizontal or "x" axis represents position along a section of the corresponding image, and the vertical or "y" axis represents filter transparency or transmission.
Referring to FIG. 13A, the filter curve 1300 shows the filter weighting as a function of vertical position in the image. The curve shows transmission values, as percentages or ratios, from 0.0 (no data transmitted) through 0.2, 0.4, and so on, up to 1.0 (all data transmitted). This example filter is designed to reduce the importance of the least useful portions of the radar image, i.e., the portions above the radar horizon 1320 and in the near field 1324. As a result, a filter with high transmissivity (i.e., 1.0) is applied to the most useful portion of the radar image, i.e., the far-field or middle portion 1322 of the image.
More specifically, the example radar filter is configured with full contrast range: maximum transparency of 100% at the center of the image and 0% transparency at the upper and lower edges. The example filter 1300 was constructed with a standard deviation of 50 pixels. Different filter configurations and functions may be used, depending on the sensors used and the desired filtering effect.
FIG. 13B shows the filter weighting for the IR image as a function of vertical position. This filter 1310 is designed to reduce the importance of the least useful portion of the IR image, i.e., the central or far-field band 1332, and to emphasize the stronger regions 1330, 1334. The example IR filter has a maximum contrast range of 75%: it varies from approximately 25% transparency at the center of the image to 100% transparency at the top and bottom edges, and has the same 50-pixel standard deviation as the filter function 1300.
Weighting the sensor images in this manner essentially preselects the image regions that contain useful and relevant information and are therefore candidates for inclusion in the fused image. Furthermore, by filtering out regions containing less useful information, processing time may be reduced.
With continued reference to the example of radar and IR images, preselection or filtering of image regions is further illustrated in FIGS. 14A-F.
Fig. 14A shows a raw radar image 1400 produced by a radar sensor. As can be seen in the image 1400, the middle zone 1404 or far field contains the most information compared to zones 1402 (above the radar horizon) and 1406 (near field). Fig. 14B shows a filter 1410. The filter includes a high transmission portion 1414 corresponding to radar image area 1404, and low transmission portions 1412 and 1416 corresponding to radar image areas 1402 and 1406. Thus, the filter reduces the importance of the region 1402, 1406 where the radar is least effective. Fig. 14C shows a filtered radar image 1420 with far-field or intermediate regions 1404 highlighted to provide the most relevant information.
Likewise, FIG. 14D shows a raw IR image 1430 produced by the IR sensor. From image 1430, it can be seen that top and bottom areas 1432 (above the radar horizon) and 1436 (near field) contain the most information compared to area 1434 (far field). Fig. 14E shows the filter 1440. The filter includes high transmission portions 1442 and 1446 corresponding to the regions 1432 and 1436 of the IR image, and a low transmission portion 1444 corresponding to the region 1434 of the IR image. Thus, the filter reduces the importance of the region 1434 where IR is least effective. Fig. 14F shows a filtered IR image 1450 in which the above-described radar horizon area 1432 and near field area 1436 are emphasized to provide the most relevant information.
For optimal filtering, the weighting functions should account for vehicle state or operating parameters, depending on the needs and design of the specific system. For example, as shown in FIGS. 15A-E, in an aircraft the filtering may be a function of aircraft roll or other movement or orientation that causes the scene horizon to rotate, and the orientation of the weighting function may be rotated to match. In addition, the filtering may be a function of aircraft pitch and altitude, both of which affect the effective field of view of the radar and typically affect the standard deviation and vertical position of the weighting function.
Thus, for example, FIG. 15A shows an original radar image 1500. FIG. 15B shows the weighting or filtering function 1510 for the normal condition, i.e., no aircraft roll. FIG. 15C shows the corresponding filtered radar image 1520. Both the filter 1510 and the filtered radar image 1520 are aligned with the scene horizon and show no angular adjustment.
FIG. 15D shows a weighting or filtering function 1530 reflecting about 5 degrees of aircraft roll. More specifically, the transmissive portion of the filter is rotated by about 5 degrees. FIG. 15E shows the resulting filtered radar image 1540, in which the filter function has been rotated by approximately 5 degrees to match the approximately 5 degrees of aircraft roll.
Combination of pre-filtering, contrast-based image fusion, and brightness correction
Depending on the sensors used and on the quality of the sensor images and the fused image, spatial pre-filtering and/or brightness correction may be applied to the images as part of the image fusion process.
If only contrast-based image fusion and brightness correction are performed, they are usually done in the order described. If all three processes are performed, spatial pre-filtering is typically performed first, followed by contrast-based sensor fusion, and finally luminance correction. This sequence generally results in a more effective fused image while reducing processing time. The brightness correction should normally follow both pre-filtering and contrast-based fusion, so that the result most closely matches the desired brightness distribution and is not altered by subsequent processing. Using the techniques in this order also improves system performance by minimizing pipeline delays and data latency. These benefits are particularly useful where image processing time is at a premium, such as in aircraft operations, in-the-loop guidance applications, or other applications that use real-time image processing.
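Tying the earlier sketches together under the same assumptions (the helpers contrast_map, brightness_correct, and the row-weight profiles are the illustrative ones defined above, not names from the description; applying the pre-filter weights to the contrast maps is one possible interpretation of the pre-filtering step):

```python
import numpy as np

def process_frame(radar: np.ndarray, ir: np.ndarray,
                  radar_w: np.ndarray, ir_w: np.ndarray) -> np.ndarray:
    """Fuse one pair of pre-aligned frames in the order described above."""
    # 1. Spatial pre-filtering: weight each sensor's contrast in the
    #    vertical bands where that sensor is most useful.
    c_radar = contrast_map(radar) * radar_w[:, np.newaxis]
    c_ir = contrast_map(ir) * ir_w[:, np.newaxis]
    # 2. Contrast-based fusion: take each pixel from the sensor image whose
    #    weighted contrast value is larger.
    fused = np.where(c_radar >= c_ir, radar, ir)
    # 3. Luminance correction, using the IR image as the reference sensor.
    return brightness_correct(fused, ir)
```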
Although reference has been made in the foregoing description to preferred embodiments, it will be apparent to those skilled in the art of designing image processing systems that insubstantial modifications, alterations, and substitutions can be made to the preferred embodiments without departing from the invention as set forth in the appended claims.
Thus, although the description of the preferred embodiment deals with two images from radar and IR sensors in combination with an aircraft, one skilled in the art will recognize that images from other types, combinations, and numbers of sensors may be used. For example, the system may be implemented with three, four, five, or another number of sensors rather than two. Furthermore, instead of radar and IR sensors, the system can process multiple images from the same type of sensor operating at different wavelengths, from ultraviolet (UV) sensors, from sensors based on active or passive radio frequency (RF) systems, from ultrasonic sensors, or from visible-band sensors, e.g., low-light visible-band sensors, charge coupled devices (CCDs), or color or grayscale cameras. Further, it will be appreciated by those skilled in the art that the present image fusion system and method may be used in applications other than processing aircraft images. For example, the system and method may be used in conjunction with other moving vehicles, medical procedures, surveillance, and other monitoring and image processing applications involving multiple images or sensors. Finally, those skilled in the art will appreciate that the fused or processed image may be formed according to a variety of different selection criteria or processes, a larger or maximum contrast value being only one example.

Claims (60)

1. A method of processing images to form a scene in real time using a plurality of images, the method comprising:
configuring at least two sensors, wherein the sensors have different detection capabilities on objects in a scene and generate corresponding images of the scene according to the different detection capabilities;
dividing each image into a plurality of image areas;
generating a contrast map for each image, each contrast map comprising contrast values for each image region;
applying a selection process to the contrast values to select an image region for use in processing the image; and
forming the processed image of the scene using the selected image region.
2. The method of claim 1, wherein dividing the image into a plurality of image regions further comprises: dividing each image on a pixel-by-pixel basis, into blocks of pixels, or into regions of arbitrary shape.
3. The method of claim 1, wherein generating a contrast map further comprises: performing a convolution operation to determine the contrast values of the contrast map.
4. The method of claim 3, wherein performing a convolution operation further comprises: performing the convolution operation using a kernel KC, wherein
{KC*S1(x,y), KC*S2(x,y)} represents the convolutions;
S1 represents an image region of the first image;
S2 represents an image region of the second image; and
(x, y) represents the spatial coordinates of the images.
5. The method of claim 1, wherein each sensor detects a different wavelength.
6. The method of claim 1, wherein the plurality of sensors comprises infrared sensors and radar sensors.
7. The method of claim 1, wherein the plurality of sensors comprises an infrared sensor and an ultraviolet sensor.
8. The method of claim 1, wherein the plurality of sensors comprises a radar sensor and an ultraviolet sensor.
9. The method of claim 1, wherein the plurality of images are generated by two or more infrared sensors, each IR sensor detecting a different wavelength.
10. The method of claim 1, wherein applying a selection process comprises: comparing competing contrast values of corresponding image regions from the two images.
11. The method of claim 10, wherein the selection process is performed to select the larger competing contrast value.
12. The method of claim 10, wherein comparing competing contrast values further comprises comparing corresponding contrast values of overlapping image regions.
13. The method of claim 1, wherein contrast values of the contrast maps of the first, second and third sensors are compared together to form the processed image.
14. The method of claim 13, further comprising:
identifying contrast values from the first and second sensor images to form an intermediate contrast map;
applying a selection process to the contrast value of the intermediate contrast map and the contrast value of the contrast map of the third sensor image; and
forming the processed image using the selected image regions.
15. The method of claim 14, wherein the first and second sensors are infrared sensors and the third sensor is a radar sensor.
16. The method of claim 1, wherein the sensor image displays a view from a moving vehicle.
17. The method of claim 16, wherein the moving vehicle is an aircraft, a watercraft, an automobile, or a train.
18. The method of claim 1, further comprising adjusting an intensity of one or more regions of the processed image.
19. The method of claim 18, further comprising weighting the degree of intensity adjustment.
20. The method of claim 18, wherein adjusting intensity further comprises adjusting brightness across the processed image.
21. The method of claim 18, wherein adjusting an intensity across the processed image further comprises:
selecting one sensor as a reference sensor;
determining an average intensity for each region of the reference sensor image; and
adjusting the intensity of one or more regions in the processed image by combining the determined average intensity value and the intensity value of the processed image.
22. The method of claim 21, wherein the sensors comprise radar sensors and infrared sensors, and wherein the reference sensor is comprised of a radar sensor.
23. The method of claim 21, wherein the sensors comprise radar sensors and infrared sensors, and wherein the reference sensor is comprised of an infrared sensor.
24. The method of claim 21, wherein adjusting the intensity of one or more regions in the processed image further comprises adjusting the intensity of a line in the processed image that corresponds to a line in the reference sensor image for which an average intensity is determined.
25. The method of claim 24, wherein adjusting the intensity of one or more regions in the processed image further comprises adjusting the intensity of a line in the processed image that is adjacent to a line in the processed image that corresponds to the same line in the reference sensor image for which the average intensity is determined.
26. The method of claim 21, wherein the scene horizon is repositioned at an angle relative to the image horizon, the method further comprising:
determining an average intensity of the reference sensor image on a pixel-by-pixel basis; and
adjusting the intensity of the processed image on a pixel-by-pixel basis.
27. The method of claim 26, wherein the scene horizon is repositioned due to roll, bank, yaw, or pitch.
28. The method of claim 1, further comprising filtering regions of one or more images prior to generating the contrast map for each image.
29. The method of claim 28, wherein filtering further comprises spatially filtering regions of each image by weighting selected image regions.
30. The method of claim 29, wherein the sensor comprises a radar sensor, and wherein the spatial filtering is performed by filtering regions of the image above a radar horizon.
31. A system for combining a plurality of images to form a final image of a scene in real time, comprising:
a plurality of sensors, each sensor having a different detection capability for an object in a scene and generating a corresponding image of the scene according to the different detection capabilities;
a processor configured to divide each image into a plurality of image regions; to generate a contrast map for each image, each contrast map comprising contrast values for each image region; to apply a selection criterion to the contrast values to select image regions; and to form a processed image of the scene using the selected image regions.
32. The system of claim 31, wherein the processor is configured to divide each image into individual pixels, blocks of pixels, or arbitrarily shaped regions.
33. The system of claim 31, wherein the processor is configured to generate the contrast map by performing a convolution operation to determine a contrast value for the contrast map.
34. The system of claim 33, wherein the processor is configured to perform the convolution operation using a kernel KC, wherein
{KC*S1(x,y), KC*S2(x,y)} represents the convolutions;
S1 represents an image region of the first image;
S2 represents an image region of the second image; and
(x, y) represents the spatial coordinates of the images.
35. The system of claim 31, wherein each sensor detects a different wavelength.
36. The system of claim 31, wherein the plurality of sensors includes an infrared sensor and a radar sensor.
37. The system of claim 31, wherein the plurality of sensors includes an infrared sensor and an ultraviolet sensor.
38. The system of claim 31, wherein the plurality of sensors comprises a radar sensor and an ultraviolet sensor.
39. The system of claim 31, wherein the plurality of sensors includes two or more infrared sensors, each IR sensor detecting a different wavelength.
40. The system of claim 31, wherein the processor is further configured to compare competing contrast values of corresponding image regions from the two images.
41. The system of claim 40, wherein the processor is further configured to select the larger competing contrast value.
42. A system according to claim 40, wherein the processor is configured to compare corresponding contrast values of overlapping image regions.
43. The system of claim 31, wherein the contrast values of the contrast maps of the first, second, and third sensors are compared together to form a final image.
44. The system of claim 43, wherein the processor is further configured to:
identify contrast values from the first and second sensor images to form an intermediate contrast map,
apply a selection process to the contrast values of the intermediate contrast map and the contrast values of the contrast map of the third sensor image, and
form the processed image using the selected image regions.
45. The system of claim 44, wherein the first and second sensors are infrared sensors and the third sensor is a radar sensor.
46. The system of claim 31, wherein a sensor image displays a view from a moving vehicle.
47. The system of claim 46, wherein the moving vehicle comprises an aircraft, a watercraft, an automobile, or a train.
48. The system of claim 31, wherein the processor is further configured to adjust an intensity of one or more regions of the processed image.
49. The system of claim 48, wherein the processor is further configured to adjust an intensity across the processed image.
50. The system of claim 49, wherein the processor is configured to weight a degree of intensity adjustment.
51. The system of claim 49, wherein the processor is further configured to:
select one of the sensors as the reference sensor,
determine an average intensity for each region of the reference sensor image, and
adjust the intensity of one or more regions in the processed image by combining the determined average intensity value and the intensity value of the processed image.
52. The system of claim 51, wherein the sensor comprises a radar sensor and an infrared sensor, and wherein the reference sensor is comprised of a radar sensor.
53. The system of claim 51, wherein the sensor comprises a radar sensor and an infrared sensor, and wherein the reference sensor is comprised of an infrared sensor.
54. The system of claim 51, wherein the processor is configured to adjust an intensity of a line in the processed image corresponding to a line in the reference sensor image for which an average intensity is determined.
55. The system of claim 51, wherein the processor is configured to adjust the intensity of a line in the processed image that is adjacent to a line in the processed image that corresponds to the same line in the reference sensor image for which the average intensity is determined.
56. The system of claim 51, wherein the scene horizon is repositioned at an angle relative to the image horizon, the processor being further configured to:
determine an average intensity of the reference sensor image on a pixel-by-pixel basis, and
adjust the intensity of the processed image on a pixel-by-pixel basis.
57. The system of claim 56, wherein the scene horizon is repositioned relative to the image horizon as a result of roll, bank, pitch, or yaw motion.
58. The system of claim 31, wherein the processor is configured to filter one or more image regions.
59. A system according to claim 58, wherein the processor is configured to filter one or more image regions by weighting selected image regions.
60. The system of claim 58, wherein the sensor comprises a radar sensor, and wherein the processor is further configured to spatially filter image regions above a radar horizon.
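As a concrete illustration of the brightness-correction steps recited in claims 21-27 above (and their system counterparts, claims 51-57), the sketch below adjusts each line of the processed image toward the average intensity of the corresponding line of a reference sensor image, with a pixel-by-pixel variant for a rolled or banked horizon. The blending weight `w`, the neighborhood size `window`, and the particular way the averages are combined are illustrative assumptions; the claims leave these details open.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def brightness_correct(fused, reference, w=0.5):
    """Line-by-line brightness adjustment toward a reference sensor image.

    For each row (line) of the reference image, its average intensity is
    combined with the corresponding row of the fused image; `w` weights
    the degree of adjustment (cf. claims 19 and 50).
    """
    corrected = fused.astype(float)
    for row in range(reference.shape[0]):
        ref_mean = reference[row].mean()      # average intensity of this line
        fused_mean = corrected[row].mean()
        corrected[row] += w * (ref_mean - fused_mean)
    return corrected

def brightness_correct_rolled(fused, reference, w=0.5, window=5):
    """Pixel-by-pixel variant for a scene horizon repositioned by roll or
    bank (claims 26 and 56): local averages replace line averages."""
    local_ref = uniform_filter(reference.astype(float), size=window)
    local_fused = uniform_filter(fused.astype(float), size=window)
    return fused.astype(float) + w * (local_ref - local_fused)
```

Claims 24-25 and 54-55 also allow the adjustment to extend to lines adjacent to the corrected line; the sketch omits that refinement.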
HK06105384.7A 2002-08-28 2003-08-27 Image fusion system and method HK1085292B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US10/229,574 US6898331B2 (en) 2002-08-28 2002-08-28 Image fusion system and method
US10/229,574 2002-08-28
PCT/US2003/027046 WO2004021264A1 (en) 2002-08-28 2003-08-27 Image fusion system and method

Publications (2)

Publication Number Publication Date
HK1085292A1 HK1085292A1 (en) 2006-08-18
HK1085292B true HK1085292B (en) 2009-05-29
