
WO2018137247A1 - Relative illuminance compensation method and device by multiple exposure - Google Patents

Relative illuminance compensation method and device by multiple exposure

Info

Publication number
WO2018137247A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
temporally
frame
frames
sequential frames
Prior art date
Application number
PCT/CN2017/072768
Other languages
French (fr)
Inventor
Atsushi Kobayashi
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to CN201780005068.0A (patent CN108496355B)
Priority to PCT/CN2017/072768 (publication WO2018137247A1)
Publication of WO2018137247A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/743 Bracketing, i.e. taking a series of images with varying exposure conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20012 Locally adaptive
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20216 Image averaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

A method of compensating for relative illuminance by multiple exposure is disclosed. The method includes: externally receiving two or more temporally-sequential frames for an image and sending one main frame determined from the received two or more temporally-sequential frames; storing at least one of the two or more temporally-sequential frames for the image; applying a mixture gain to one or a plurality of mixed frames of the two or more temporally-sequential frames; and mixing the applied results, wherein the mixture gain varies in accordance with the position in the image. A device, an image sensor, an image signal processor, a camera module, a smart phone, a mobile device, and an integrated circuit are also disclosed.

Description

RELATIVE ILLUMINANCE COMPENSATION METHOD AND DEVICE BY MULTIPLE EXPOSURE
TECHNICAL FIELD
Embodiments of the present invention relate to relative illuminance (RI) compensation methods and devices by multiple exposure.
BACKGROUND
Almost all cameras have a lens portion and an image capturing portion. The lens portion includes a number of lenses. The configuration (structure, shape, material, etc.) of the lenses determines much of the camera performance. The image capturing portion is either an image sensor, which has become common recently and is mainly a CMOS (complementary metal oxide semiconductor) image sensor, or old-fashioned silver halide film. In the present patent application, the image capturing portion means any type of image sensor that can capture temporally-sequential images, such as a CMOS image sensor, a CCD (charge coupled device) image sensor, an organic photoconductive film image sensor, or a quantum thin film image sensor.
The relative illuminance (RI) is the ratio of the off-axis illuminance to the on-axis illuminance of the image region of the image plane. Because of the relative illuminance (RI) of the lens and the non-uniformity of the sensitivity of the image sensor, an output image from the image sensor may have so-called shading. As for camera modules for mobile devices, severe restrictions on the height of the module may cause the RI to be much smaller than 50%. The small RI may cause the off-axis area of an image to be darker than the center area of the image (see FIG. 1 for an image captured with a low-RI lens). Such a phenomenon is called shading or vignetting. Patent Citation 1 discloses lens shading, which is caused by incident light rays being received at more extreme angles than in less compact systems, due to the compacted height of integrated camera modules. Additionally, there are some other reasons which may reduce the signal level of the off-axis area of the image sensor (see Non Patent Citation 1). Non Patent Citation 1 discloses optical vignetting, which is caused by the physical dimensions of a multiple-element lens: rear elements are shaded by elements in front of them, which reduces the effective lens opening for off-axis incident light. Non Patent Citation 2 also discloses optical vignetting, in which less light reaches the edges of the sensor due to physical obstruction in the lens.
There is a related-art technique to compensate for the above-described shading. The related-art technique is referred to as lens shading compensation (LSC) or shading compensation. The LSC, which is a function to compensate for and recover a low-RI image to a generally flat signal level, multiplies a gain which depends on the position in the image, to increase the signal level of the off-axis area of the image and keep the signal level flatter over the whole area of the image. FIG. 2 shows an exemplary schematic block diagram of the above-described lens shading compensation (LSC) function. These blocks may be located in an image sensor. Alternatively, they may be located in an ISP (image signal processor). The ISP, which stands for the image signal processor or image signal processing, refers to the processing hardware, the processing software, or a processing algorithm that processes an image from the image sensor to reproduce the image in some type of image format. In the above-described LSC function, an input image signal ("Image signal" in FIG. 2) is multiplied by a certain gain ("Gain" in FIG. 2) in a multiplier ("Multiplier" in FIG. 2). The gain is calculated so that, as long as the object is the same as at the center area, the signal level of the off-axis area of the image is compensated to be the same as that of the center (on-axis) area of the image. Therefore, the LSC function calculates the position on the image sensor and then amplifies the signal level at that position using a certain pre-calculated table ("Gain table" in FIG. 2).
FIG. 3 is a graph illustrating an example of relative illuminance versus image height. The horizontal axis shows the image height. The image height is shown as a percentage (in %) of the maximum image height or half the image circle, which is the same as the diagonal length of the image sensor. The vertical axis is the relative illuminance to be compensated for. The graph is shown on the basis that the brightness (luminance) is completely flat (uniform). To simplify the explanation of the behavior of the LSC, it is assumed that the LSC compensates only for shading caused by the relative illuminance. On this assumption, the gain to be applied depends only on the image height (the distance from the focal point) of the area to be compensated for. FIG. 4 is a graph illustrating an example of the gain of the normal LSC function used to fix the problem in FIG. 3. The horizontal axis shows the image height. The image height is shown as a percentage (in %) of the maximum image height or half the image circle, which is the same as the diagonal length of the image sensor. The vertical axis is the gain of the normal LSC function used to fix the problem in FIG. 3. The graph is shown on the basis that the brightness (luminance) is completely flat (uniform). As shown in FIG. 4, the gain of the LSC function is the reciprocal of, or inversely correlated with, the relative illuminance (FIG. 4). After this process, as long as the object is flat, the output signal is correctly compensated for, so that it maintains the same signal level. (See, for example, Patent Citation 2 for the disclosure of compensating for lens shading by changing an amplification factor according to the position on a sensor; Patent Citation 3 for the disclosure of a spherical intensity correction, as an example of LSC, which corrects the data of each image pixel by an amount that is a function of the radius of the pixel from the optical center of the image; Patent Citation 4 for the disclosure of compensating for the lens shading phenomenon after photographing an image of a white area; Non Patent Citation 2 for the disclosure of LSC or vignetting compensation, which corrects for the above-described vignetting; and Non Patent Citation 3 for the disclosure of LSC.)
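As an illustration of this related-art LSC (not part of the patent; the radial gain model, the helper names, and the example RI curve are assumptions), a minimal Python sketch of multiplying each pixel by the reciprocal of the relative illuminance at its image height might look like this:

    import numpy as np

    def normalized_image_height(h, w):
        """Distance of each pixel from the image center as a fraction of the maximum (0..1)."""
        y, x = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
        return r / r.max()

    def apply_lsc(image, relative_illuminance):
        """Classic LSC: amplify off-axis pixels by 1/RI(image height) to flatten the signal level."""
        r = normalized_image_height(*image.shape[:2])
        gain = 1.0 / np.clip(relative_illuminance(r), 1e-6, 1.0)   # reciprocal-of-RI gain (FIG. 4)
        return image * (gain[..., None] if image.ndim == 3 else gain)

    # Example with a hypothetical cos^4-style RI curve standing in for FIG. 3:
    # corrected = apply_lsc(raw_image, lambda r: np.cos(np.deg2rad(35.0) * r) ** 4)

Because the same gain also multiplies the noise, this operation leaves the SNR at each position unchanged, which is the problem discussed below.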
[Patent Citations]
[Patent Citation 1] US20144184813A1
[Patent Citation 2] US2011304752A1
[Patent Citation 3] US7,408,576B2
[Patent Citation 4] US8,049,795B2
[Non Patent Citations]
[Non Patent Citation 1] https://en.wikipedia.org/wiki/Vignetting#Optical_vignetting
[Non Patent Citation 2] Kayvon Fatahalian, CMU 15-869: Graphics and Imaging Architectures (Fall 2011)
[Non Patent Citation 3] "Computational Photography: Understanding and Expanding the Capabilities of Standard Cameras," a presentation from NVIDIA
To show the technical problem of the related-art technique, the SNR (Signal-to-noise ratio) of a picture is considered. The SNR of the picture may be calculated in accordance with the following equations:
In one type of expression, i.e., the linear expression, the equation used is:
SNR = Signal / Noise.
In another type of expression, i.e., the dB expression, the equation used is:
SNR = 20 * log10 (Signal / Noise).
The latter type of expression is more common. In both expressions, Signal is the output signal level from an image sensor and Noise is the noise level of that Signal. Noise may be estimated in accordance with the following equation:
Noise = SQRT (k * Signal + base noise),
where SQRT is the square-root function, k is a coefficient related to the design of the image sensor, and the "base noise" is noise unrelated to the Signal. However, the base noise usually changes in relation to image sensor settings such as the sensor-internal analog gain. With the LSC technique, only a gain is applied to the image signal. The gain multiplies the noise as well, so the SNR of the signal after the LSC is the same as the SNR of the signal before the LSC. The reason the SNR obtained in accordance with the above calculations is higher at the center of the picture lies in the lens design for small-height cameras. One of the most likely reasons is the "cosine fourth law" for the falloff of illuminance across a camera image, which causes relative darkening at the edges. FIG. 5 shows a curve for the roughly-estimated SNR based on the relative illuminance in FIG. 3 and the optical shot noise of 900 e- at the center area (900 e- is 18% of 5000 e-, which is the typical full-well capacity of a 1.12 um CMOS image sensor). FIG. 5 shows the SNR of an image captured without utilizing the present invention. The horizontal axis shows the image height. The image height is shown as a percentage (in %) of the maximum image height or half the image circle, which is the same as the diagonal length of the image sensor. The vertical axis is the SNR of an image captured without utilizing the present invention.
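The noise model above can be sketched numerically as follows (a rough estimate only; the coefficient k, the base noise, and the cos^4-style RI curve standing in for FIG. 3 are assumptions, while the 900 e- center signal is taken from the text):

    import numpy as np

    def snr_db(signal_e, k=1.0, base_noise_e=2.0):
        """SNR in dB for a signal level in electrons, with Noise = SQRT(k * Signal + base noise)."""
        noise = np.sqrt(k * signal_e + base_noise_e)
        return 20.0 * np.log10(signal_e / noise)

    r = np.linspace(0.0, 1.0, 11)             # image height, 0 % .. 100 %
    ri = np.cos(np.deg2rad(35.0) * r) ** 4    # assumed RI curve, not the measured one
    print(np.round(snr_db(900.0 * ri), 1))    # roughly reproduces the shape of FIG. 5

A plain LSC gain multiplies signal and noise alike, so this curve is unchanged by LSC; the multiple-exposure mixing described below raises the off-axis signal with additional captured photons rather than amplification, which is why it can keep the SNR higher there.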
As described above as the technical problem of the related-art technique, if the relative illuminance is not high, the SNR at the off-axis area is low, causing the picture to be noisy and degraded. Embodiments of the present invention are intended to solve the above-described problem.
SUMMARY
The present invention discloses a method of compensating for and a device configured to compensate for relative illuminance by multiple exposure.
According to a first aspect, a method of compensating for relative illuminance by multiple exposure is provided. The method includes: externally receiving two or more temporally-sequential frames for an image and sending one main frame determined from the received two or more temporally-sequential frames; storing at least one of the two or more temporally-sequential frames for the image; applying a mixture gain to one or a plurality of mixed frames of the two or more temporally-sequential frames; and mixing the applied results, wherein the mixture gain varies in accordance with the position in the image.
According to a second aspect, a device configured to compensate for relative illuminance by multiple exposure is provided. The device includes: a memory controller configured to externally receive the two or more temporally-sequential frames for an image and to send one main frame determined from the received two or more temporally-sequential frames to a multiplier; a memory configured to capture at least one of the two or more temporally-sequential frames for the image; the multiplier configured to apply a mixture gain to one or a plurality of mixed frames of the two or more temporally-sequential frames; and a mixer configured to mix the applied results, wherein the mixture gain varies in accordance with the position in the image.
The method of compensating for and the device configured to compensate for relative illuminance by multiple exposure according to embodiments of the present invention make it possible to increase the signal level sufficiently high to maintain a high SNR at a higher image height area even if the relative illuminance is small.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 illustrates an image with a low-RI lens.
FIG. 2 is an exemplary schematic block diagram of the above-described lens shading compensation (LSC) function.
FIG. 3 is a graph illustrating an example of relative illuminance.
FIG. 4 is a graph illustrating an example of a gain table.
FIG. 5 is a graph illustrating a roughly-estimated SNR versus the image height.
FIG. 6 is a diagram illustrating the basic concept according to  embodiments of the present invention.
FIG. 7 is one exemplary block diagram according to one embodiment of the present invention.
FIG. 8 is a graph illustrating an example of the mixture gain which depends on the image height.
FIG. 9 is a graph illustrating the image height versus the signal level after mixing with the mixture gain on FIG. 8.
FIG. 10 is another exemplary block diagram according to another embodiment of the present invention.
FIG. 11 is a further exemplary block diagram according to a further embodiment of the present invention.
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments of the present invention or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
DESCRIPTION OF EMBODIMENTS
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
FIG. 6 is a diagram illustrating the basic concept according to embodiments of the present invention, which includes steps of “Capture sequential frames” (1 in FIG. 6) ; “Decide main frame from capture frames” (2 in FIG. 6, in which the “capture frames” may include “Previous frame” , “Main frame” , and “Next frame” . ) ; and “Apply gain to next (or previous) frame and add it to main frame” (3 in FIG. 6, in which “Mixture Gain” is applied to the “Next frame” (a “Mixed frame” )  and the “Mixture Gain” -applied “Next frame” is added to the “Main frame” . ) .
In a method of relative illuminance compensation by multiple exposure according to embodiments of the present invention, two or more temporally-sequential frames are captured for one image; one main frame is determined from the captured two or more temporally-sequential frames; a mixture gain is applied to one or a plurality of mixed frames, each of which is a frame temporally sequential to the one main frame, wherein the frame temporally sequential to the one main frame may be the frame previous to the one main frame, the frame following the one main frame, or both the frame previous to and the frame following the one main frame; and the applied results are combined, wherein the mixture gain varies in accordance with the position in the image. In the method of relative illuminance compensation by multiple exposure according to one embodiment of the present invention, the mixture gain varies with an upper limit of 1.0.
FIG. 7 shows an exemplary block diagram according to one embodiment of the present invention. A number of image frames received from an image sensor are stored in a memory ("Memory" in FIG. 7). In this example, at least two frames are stored in the memory. The method of controlling the memory and the size of the memory are not predefined in the present embodiment. Of the frames stored, one frame is selected by a certain method. The one frame is determined to be a main frame ("Image1" in FIG. 7). In the present invention, a "Main frame" is determined using a certain method that is determined by the camera system which uses the image sensor; the method depends on the architecture of the camera system. A frame which has the lowest latency from a shutter trigger is generally selected as the "Main frame". In most camera systems, the camera system detects the shutter trigger from a mechanical button named "Shutter", a multi-touch or single touch on an area dedicated to the "Shutter trigger" on a display panel, or detection of a situation such as a "Smile shutter". A further frame is selected by a certain different method, and this latter frame is referred to as a mixed frame ("Image2" in FIG. 7). For selecting the further frame, there are only three possible solutions:
(1) Always using a first frame after the "Main frame" ;
(2) Always using a previous frame immediately before the "Main frame" ; and
(3) Adaptively selecting one relevant frame from either the first frame after the "Main frame" or the previous frame immediately before the "Main frame". Some possible methods to select the one relevant frame in option (3) above include the following (an illustrative selection sketch is given after this list):
(a) Selecting a frame with the smaller difference in the image
(b) Selecting a frame with the smaller difference in the position of the object (s) 
(c) Selecting a frame with the smaller difference in the position of the camera
(d) Selecting the frame with the "best" value of some metric calculated from the images, the movement of the objects, and the movement of the camera position.
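As referenced above, a minimal Python sketch of adaptive method (a), picking the candidate with the smaller pixel-wise difference from the main frame, might look like this (the metric and the function names are illustrative assumptions, not taken from the patent):

    import numpy as np

    def frame_difference(frame_a, frame_b):
        """Mean absolute pixel difference, used as the "difference in the image" metric."""
        return float(np.mean(np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64))))

    def select_mixed_frame(main_frame, previous_frame, next_frame):
        """Adaptive option (3)(a): pick the temporal neighbour most similar to the main frame."""
        d_prev = frame_difference(main_frame, previous_frame)
        d_next = frame_difference(main_frame, next_frame)
        return previous_frame if d_prev <= d_next else next_frame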
A mixed frame is an image which may be captured/observed before the "Other functions including LSC" block in FIG. 7. The mixed frame is multiplied by a mixture gain (the mixture gain is applied at the "Multiplier" in FIG. 7) with reference to a gain table. The mixture gain varies depending on the image height of the position, as described with reference to FIG. 7. If the relative illuminance of the lens is higher, the mixture gain is lower. As the relative illuminance at the center of most lenses (the center of the image) is higher than at the edge of these lenses, the mixture gain should be smaller at the center and higher at the edge. The gain table may be defined in a LUT (lookup table) ("Gain table" in FIG. 7) or in the memory. The gain table may also be calculated at the time of executing the process (i.e., on the fly).
According to one of the simplest implementations of the present embodiment, the mixture gain may depend on the image height of the position to be calculated. The image height is the distance between the center of the image and the position to be calculated. FIG. 8 shows an example of the mixture gain versus the image height for this simplest implementation. The horizontal axis shows the image height. The image height is shown as a percentage (in %) of the maximum image height or half the image circle, which is the same as the diagonal length of the image sensor. The vertical axis is the mixture gain. The mixture gain may be calculated such that the same signal level is obtained over the whole image height. It may be limited to some constant upper bound, e.g., 1.0.
After the above-mentioned multiplication, the mixed frame and the main frame are combined into one frame. FIG. 9 shows the signal level after undergoing the mixture operation in accordance with the present invention. The horizontal axis shows the image height. The image height is shown as a percentage (in %) of the maximum image height or half the image circle, which is the same as the diagonal length of the image sensor. The vertical axis is the signal level after undergoing the mixture operation in accordance with the present invention. If the relative illuminance of the lens is represented by FIG. 3 and the mixture gain by FIG. 8, the signal level of the output frame of this embodiment of the invention is as shown in FIG. 9.
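A minimal Python sketch of this multiply-and-mix step (one possible reading of FIG. 7, with illustrative names; simple additive mixing is assumed, and the 1-D gain table would come from the calibration described next):

    import numpy as np

    def mixture_gain_map(h, w, gain_vs_height):
        """Per-pixel mixture gain from a 1-D table indexed by normalized image height (FIG. 8)."""
        y, x = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r_norm = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
        r_norm /= r_norm.max()
        idx = np.round(r_norm * (len(gain_vs_height) - 1)).astype(int)
        return np.asarray(gain_vs_height, dtype=np.float64)[idx]

    def mix_frames(main_frame, mixed_frame, gain_vs_height):
        """Multiply the mixed frame by the position-dependent gain and add it to the main frame."""
        h, w = main_frame.shape[:2]
        gain = mixture_gain_map(h, w, gain_vs_height)
        if main_frame.ndim == 3:
            gain = gain[..., None]
        return main_frame.astype(np.float64) + gain * mixed_frame.astype(np.float64)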
The mixture gain may be determined in the following manner:
(1) Capture a grayscale chart under uniform light. The light should have sufficient uniformity and is preferably 3200K halogen light, A-light (2858K), or D65.
(2) Calculate shading of the green channel (if the image sensor is of the monochrome type, the B/W channel may suffice for the calculation) .
(3) The mixture gain at position (x, y) may be obtained as: mixture gain(x, y) = min(1.0, (signal at center) / (signal at position (x, y))). (See the calibration sketch after these steps.)
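An illustrative Python calibration sketch of steps (1) to (3) (assumptions: the flat-field capture is an RGB numpy array with green as channel 1, and the gain is derived so that main plus gain times mixed matches the center signal level and is clipped to [0, 1.0], a reading consistent with FIG. 8 where the gain is near zero at the center, rather than a verbatim transcription of step (3)):

    import numpy as np

    def mixture_gain_table(flat_field_rgb, n_bins=64, cap=1.0):
        """Derive a mixture-gain-vs-image-height table from a flat-field (grayscale chart) capture."""
        green = flat_field_rgb[..., 1].astype(np.float64)        # step (2): green-channel shading
        h, w = green.shape
        y, x = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r_norm = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
        r_norm /= r_norm.max()
        center = green[int(cy) - 2:int(cy) + 3, int(cx) - 2:int(cx) + 3].mean()   # small center patch
        bins = np.minimum((r_norm * n_bins).astype(int), n_bins - 1)
        table = np.zeros(n_bins)
        for b in range(n_bins):                                  # average shading per radial bin
            table[b] = np.clip(center / green[bins == b].mean() - 1.0, 0.0, cap)
        return table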
Additionally, parameters for the LSC should be determined to compensate for the color shading portion including variations in the light source.
FIG. 10 is an exemplary block diagram according to another embodiment of the present invention in which only one previous frame or one following frame is stored in a memory ("Memory" in FIG. 10) to reduce the size of the memory required for storage; the memory used is thus reduced to a single frame. "Image 2", which is a frame previous to or a frame following "Image 1", is the mixed frame in FIG. 10.
FIG. 11 is a further exemplary block diagram according to a further embodiment of the present invention in which three temporally-sequential frames are combined. "Image 1", which is a frame previous to "Image 2", and "Image 3", which is a frame following "Image 2", are the mixed frames in FIG. 11.
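A sketch of this three-frame variant, reusing mixture_gain_map() from the two-frame sketch above and assuming (this is not stated in the patent) that the position-dependent gain is simply split equally between the previous and following frames:

    import numpy as np

    def mix_three_frames(prev_frame, main_frame, next_frame, gain_vs_height):
        """FIG. 11 variant: both temporal neighbours are mixed frames; the 50/50 gain split is an assumption."""
        h, w = main_frame.shape[:2]
        gain = mixture_gain_map(h, w, gain_vs_height)
        if main_frame.ndim == 3:
            gain = gain[..., None]
        out = main_frame.astype(np.float64)
        out += 0.5 * gain * prev_frame.astype(np.float64)
        out += 0.5 * gain * next_frame.astype(np.float64)
        return out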
According to a method of relative illuminance compensation by multiple exposure according to embodiments of the present invention,
(a) two or more temporally-sequential frames are to be combined into one frame; one of the frames is a main frame and the other frame(s) is/are (a) mixed frame(s);
(b) the mixed frame (s) are to be multiplied by a mixture gain; and
(c) the mixture gain varies depending on the position in an image.
According to one of the most feasible implementations of the present embodiment, the mixture gain varies depending on the height of the image.
A method of relative illuminance compensation by multiple exposure according to embodiments of the present invention makes it possible to increase the signal level sufficiently high to maintain a high SNR at a higher image height area even if the relative illuminance is small. A method of relative illuminance compensation by multiple exposure according to an embodiment of the present invention allows only one previous frame or one following frame to be stored in a  memory to make it possible to reduce the size of the memory for storage.
In another kind of implementation, the mixture gain varies depending on both the image height and the azimuthal angle, to compensate not only for the relative illuminance but also for shading caused by the image sensor's non-uniformity, the mechanical tolerances of optical parts, and other factors.
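For that kind of implementation, the gain lookup could be indexed by both normalized image height and azimuthal angle; the following sketch assumes a 2-D table and nearest-neighbour lookup (the table layout and names are illustrative, not from the patent):

    import numpy as np

    def polar_gain_map(h, w, gain_table):
        """Per-pixel gain from a 2-D table gain_table[height_bin, azimuth_bin]."""
        y, x = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r_norm = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
        r_norm /= r_norm.max()
        theta = np.arctan2(y - cy, x - cx)                       # azimuth, -pi .. pi
        n_r, n_t = gain_table.shape
        ri = np.round(r_norm * (n_r - 1)).astype(int)
        ti = np.round((theta + np.pi) / (2 * np.pi) * (n_t - 1)).astype(int)
        return gain_table[ri, ti]

    # e.g. gain = polar_gain_map(3000, 4000, table)   # table shape such as (64, 16)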
The foregoing descriptions are merely specific embodiments of the present invention, but are not intended to limit the protection scope of the present invention. Any modification or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (17)

  1. A method of compensating for relative illuminance by multiple exposure, the method comprising:
    externally receiving two or more temporally-sequential frames for an image and sending one main frame determined from the received two or more temporally-sequential frames;
    storing at least one of the two or more temporally-sequential frames for the image;
    applying a mixture gain to one or a plurality of mixed frames of the two or more temporally-sequential frames; and
    mixing the applied results, wherein the mixture gain varies in accordance with the position in the image.
  2. The method as claimed in claim 1, wherein the one or the plurality of mixed frames of the two or more temporally-sequential frames include a frame previous to the one main frame or a frame following the one main frame.
  3. The method as claimed in claim 1, wherein the one or the plurality of mixed frames of the two or more temporally-sequential frames include both a frame previous to the one main frame and a frame following the one main frame.
  4. The method as claimed in claim 1, wherein the mixture gain varies in accordance with the height of the image.
  5. The method as claimed in claim 1, wherein the mixture gain varies with an upper limit of 1.0.
  6. A device configured to compensate for relative illuminance by multiple exposure, the device comprising:
    a memory controller configured to externally receive the two or more temporally-sequential frames for an image and to send one main frame determined from the received two or more temporally-sequential frames to a multiplier;
    a memory configured to capture at least one of the two or more temporally-sequential frames for the image;
    the multiplier configured to apply a mixture gain to one or a plurality of mixed frames of the two or more temporally-sequential frames; and
    a mixer configured to mix the applied results, wherein the mixture gain varies in accordance with the position in the image.
  7. The device as claimed in claim 6, wherein the memory captures the one of the two or more temporally-sequential frames for the image.
  8. The device as claimed in claim 6, wherein the one or the plurality of mixed frames of the two or more temporally-sequential frames include a frame previous to the one main frame or a frame following the one main frame.
  9. The device as claimed in claim 6, wherein the one or the plurality of mixed frames of the two or more temporally-sequential frames include both a frame previous to the one main frame and a frame following the one main frame.
  10. The device as claimed in claim 6, wherein the mixture gain varies in accordance with the height of the image.
  11. The device as claimed in claim 6, wherein the mixture gain varies with an upper limit of 1.0.
  12. An image sensor, comprising the device as claimed in claim 6.
  13. An ISP (Image signal processor) , comprising the device as claimed in claim 6.
  14. A camera module, comprising the device as claimed in claim 6.
  15. A smartphone comprising the camera module as claimed in claim 14.
  16. A mobile device, comprising the camera module as claimed in claim 14.
  17. An integrated circuit, comprising the device as claimed in claim 6.
PCT/CN2017/072768 2017-01-26 2017-01-26 Relative illuminance compensation method and device by multiple exposure WO2018137247A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780005068.0A CN108496355B (en) 2017-01-26 2017-01-26 Method and equipment for compensating relative illumination through multiple exposure
PCT/CN2017/072768 WO2018137247A1 (en) 2017-01-26 2017-01-26 Relative illuminance compensation method and device by multiple exposure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/072768 WO2018137247A1 (en) 2017-01-26 2017-01-26 Relative illuminance compensation method and device by multiple exposure

Publications (1)

Publication Number Publication Date
WO2018137247A1 (en)

Family

ID=62978874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/072768 WO2018137247A1 (en) 2017-01-26 2017-01-26 Relative illuminance compensation method and device by multiple exposure

Country Status (2)

Country Link
CN (1) CN108496355B (en)
WO (1) WO2018137247A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080002038A1 (en) * 2006-07-03 2008-01-03 Canon Kabushiki Kaisha Imaging apparatus, control method thereof, and imaging system
CN101651786A (en) * 2008-08-14 2010-02-17 深圳华为通信技术有限公司 Method for restoring brightness change of video sequence and video processing equipment
US20120162467A1 (en) * 2009-08-27 2012-06-28 Fumiki Nakamura Image capture device
CN103237168A (en) * 2013-04-02 2013-08-07 清华大学 Method for processing high-dynamic-range image videos on basis of comprehensive gains
JP2015195453A (en) * 2014-03-31 2015-11-05 パナソニックIpマネジメント株式会社 Image processing apparatus and image processing method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7751643B2 (en) * 2004-08-12 2010-07-06 Semiconductor Insights Inc. Method and apparatus for removing uneven brightness in an image
KR101589310B1 (en) * 2009-07-08 2016-01-28 삼성전자주식회사 LENS SHADING CORRECTION METHOD AND APPARATUS
CN102611851A (en) * 2012-03-01 2012-07-25 林青 Automatic illumination compensation method and system of video image
KR101871945B1 (en) * 2013-01-17 2018-08-02 한화에어로스페이스 주식회사 Apparatus and method for processing image

Also Published As

Publication number Publication date
CN108496355B (en) 2021-02-12
CN108496355A (en) 2018-09-04

Similar Documents

Publication Publication Date Title
US12267548B2 (en) Display device configured as an illumination source
US10194091B2 (en) Image capturing apparatus, control method therefor, program, and recording medium
US9357138B2 (en) Image capture apparatus, method of controlling image capture apparatus, and electronic device
US8508619B2 (en) High dynamic range image generating apparatus and method
US20100157079A1 (en) System and method to selectively combine images
US9906732B2 (en) Image processing device, image capture device, image processing method, and program
AU2014203602A1 (en) Flash synchronization using image sensor interface timing signal
US20170034461A1 (en) Image processing apparatus and control method for image processing apparatus
WO2017043190A1 (en) Control system, imaging device, and program
US9407842B2 (en) Image pickup apparatus and image pickup method for preventing degradation of image quality
CN109937382B (en) Imaging device and imaging method
US20160196640A1 (en) Image processing apparatus, imaging apparatus, and image processing method
US20240430577A1 (en) Low-light autofocus technique
CN116567432A (en) Shooting method and electronic equipment
US11653107B2 (en) Image pick up apparatus, image pick up method, and storage medium
US11044396B2 (en) Image processing apparatus for calculating a composite ratio of each area based on a contrast value of images, control method of image processing apparatus, and computer-readable storage medium
US10425602B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
US10491840B2 (en) Image pickup apparatus, signal processing method, and signal processing program
US10205870B2 (en) Image capturing apparatus and control method thereof
WO2018137247A1 (en) Relative illuminance compensation method and device by multiple exposure
JP6704611B2 (en) Imaging device and imaging method
JP6148577B2 (en) Imaging apparatus and control method
JP2011146960A (en) Image pickup apparatus
JP2011146961A (en) Image pickup apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17893647

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17893647

Country of ref document: EP

Kind code of ref document: A1