WO2013055274A2 - Method for adjusting the integration time for an ir detector, and an ir camera for implementing the method - Google Patents
Method for adjusting the integration time for an ir detector, and an ir camera for implementing the method
- Publication number
- WO2013055274A2 PCT/SE2012/000154
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- integration time
- scene
- signal level
- camera
- detector
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/67—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
- H04N25/671—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/20—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/50—Control of the SSIS exposure
- H04N25/53—Control of the integration time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/67—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
- H04N25/671—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
- H04N25/673—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction by using reference sources
- H04N25/674—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction by using reference sources based on the scene itself, e.g. defocusing
Definitions
- Depicted schematically in Figure 5 is an IR camera 31.
- Present on the input side of the camera is an input orifice 32, in this case in the form of an input lens 33.
- the input lens 33 is included in a lens system 44.
- the optical path from an observed scene is indicated as a central beam 34 and as two peripheral beams 35 and 36.
- a movable spade 37 is arranged behind the input lens 33.
- the spade is arranged to be introduced between the input lens and the focusing lens.
- the spade can be arranged behind the focusing lens when viewed from the input orifice 32 of the IR camera.
- the proposed spade positions do not exclude other alternatives.
- the spade can have the form of a flat plate which can be inserted or pushed in by means of a transport mechanism, which is not illustrated here, of an appropriate, previously disclosed kind.
- the spade 37 is illustrated in the Figure in an outer position outside the optical path 34-36.
- the inserted or pushed-in position of the spade 37 is indicated by dashed lines 38, and a dashed bidirectional arrow 39 indicates the movement of the spade between its outer position and the inserted/pushed-in position.
- a focusing mechanism which, in a simple depicted form, can consist of a single displaceable lens 40.
- the displaceable lens 40 is included together with the input lens 33 in the lens system 44.
- Dashed lines 41 indicate an alternative position for the lens 40, and a dashed bidirectional arrow 42 indicates the displacement movement of the lens 40.
- the beams 34-36 strike an IR sensor, also referred to as an IR detector 43, which supplies an image in pixel form to an image processing unit 45 connected to a memory 46.
- the IR sensor can be either of the type with cooled detectors or of the type with uncooled detectors.
- gain maps 47 and offset maps 48 can be stored in the memory 46.
- a display 49 is provided for the presentation of the final, image-processed and compensated image.
- the input lens 33 and the displaceable lens 40 in the lens system 44 can, of course, be replaced by other optical components, in terms of both type and number, in order to achieve adequate functions.
- calibration can be performed both as calibration against a spade, known as spade NUC, by inserting the spade 37, and as calibration against a defocused optical scene, known as optical NUC, by displacing the lens 40 away from a position which focuses the observed scene.
- the image processing unit 45 also comprises means for processing the signal received from the IR detector 43: for signal measurement of the scene, determination of the gradient of the measured signal, determination of the step for changing the integration time, adjustment of the integration time, checking of the tolerance range, and control.
- the image processing unit preferably consists of a processor, for example a microprocessor.
Abstract
The invention relates to a method for adjusting the integration time for an IR detector in an IR camera, and to an IR camera for implementing the method according to the invention. According to the method, the integration time is adapted dynamically with regard to the scene content. Achieved through the invention are a method and an IR camera which improve the image performance within the entire dynamic range of a depicted scene.
Description
Method for adjusting the integration time for an IR detector, and an IR camera for implementing the method
Technical field
The present invention relates to a method for adjusting the integration time for an IR detector in an IR camera, the integration time being adapted to the scene content, and to an IR camera for implementing the method, which IR camera comprises means for adjusting the integration time for an IR detector comprised therein.
Background
In IR cameras, the use of one or more fixed integration times in order to cover the dynamic range of the IR camera is previously disclosed. IR cameras with fixed integration times are previously disclosed in, for example, patent documents WO 2008107117, US 2003183765, US 2007228278 and US 2009121139.
The disadvantage of fixed integration times is that only a limited number of integration times is available, so that the dynamic range within which the IR camera operates cannot be covered in an optimal fashion. For a previously disclosed solution with two fixed integration times, for example, the use of fixed integration times is optimal only for two specific values within the dynamic range, for which details for the adjustment of the amplification, known as gain maps, are stored. Impairment of the image performance takes place for other values within the dynamic range. This can be counteracted to some extent with a larger number of fixed integration times with associated maps, although at the expense of, among other things, increased complexity associated with the comprehensive storage of maps.
In this context, the following paragraph briefly describes the reasons for the calibration of IR detectors based on gain maps and offset maps.
The reason for constructing a good image on the basis of the raw data that have been transmitted in the form of pixels by an IR detector is to compensate for the individual distribution of the discrete pixels and to compensate for any unevennesses attributable to the lens system. The pixels differ in two ways: on the one hand the individual pixels have different amplification (gain), and on the other hand the pixels have different offset in relation to one another. In order to equalize these differences and to display an image in which the unevennesses of the detector are not visible, two types of map are used today: a gain map and an offset map. The gain map is applied first by multiplication for the respective pixel, and the image is then equalized with an offset for each pixel.
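The two-map correction described above can be sketched as follows; the pixel values, map values and the sign convention for the offset are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def correct_frame(raw, gain_map, offset_map):
    # Apply the gain map first, by multiplication for the respective
    # pixel, and then equalize the image with a per-pixel offset
    # (shown here as a subtraction).
    return raw * gain_map - offset_map

# Illustrative 2x2 raw frame and maps (hypothetical values).
raw = np.array([[1000.0, 1200.0], [900.0, 1100.0]])
gain = np.array([[1.02, 0.98], [1.05, 1.00]])
offset = np.array([[40.0, -25.0], [10.0, 0.0]])
corrected = correct_frame(raw, gain, offset)
```

In practice one gain map and one offset map of the full detector resolution would be stored per supported integration time; the patent's point is that dynamic adjustment reduces this to a single gain map.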
Summary of the invention
The object of the present invention is to make available a method for adjusting the integration time for an IR detector, as well as an IR camera, which improves the image performance within the entire dynamic range of a depicted scene.
The object of the invention is achieved by a method characterized in that the integration time is adapted dynamically in the course of the repeated determination of the integration time on the basis of a local derivative of the scene content during operation of the IR camera.
The invention makes it possible for an IR camera to constantly receive optimized adjustments from the scene that it is observing. Improvements in the result of up to 30 per cent have been recorded. Furthermore, several mapping steps in the production of the camera can be eliminated, since only a single gain map is required to cover the dynamic range. In addition to this, there is no requirement for the operator to have knowledge of when and how replacement of a temperature range must or should be carried out.
According to one advantageous method, the following steps are included in the dynamic adaptation of the integration time:
a) signal measurement of the scene,
b) determination of a local derivative of the measured signal for the actual integration time,
c) determination of steps for changing the integration time in order to achieve a set target signal level based on a determined local derivative, the present signal level and the set target signal level,
d) adjustment of the actual integration time with a fraction of a determined step,
e) checking whether the integration time results in the achievement of an adjustable tolerance range for the target level,
f) repetition of steps b) to e) in the event of an adjustable tolerance range for the target signal level not being achieved.
Due to the fact that the relationship between the signal level and the integration time is not linear and, moreover, that it can vary between different detector elements, the integration time cannot be calculated directly. Instead, as described in the previous paragraph, the camera first measures what step response applies for the actual integration time. A step towards the target is then estimated and, in order not to risk an overshoot, the actual integration time is adjusted with only a fraction of that step. The camera then corrects the step length so that the signal level approaches the target level. New steps are calculated according to the same principle until a signal level is achieved which lies within an adjustable tolerance range. According to one advantageous method, repetition of steps a) to f) is included depending on the set criteria for adaptation of the integration time in the form of times and scene characteristics. The fraction of a step according to the above is appropriately selected within a range of between 70 and 90 per cent of the step, and preferably approximately 80 per cent of the step. Overshoots are effectively avoided by the selected fraction, at the same time as relevant tolerance requirements can be met with a reasonable processing effort.
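The iteration above can be sketched as a minimal loop; the toy detector response, the probe step size used to measure the local derivative, and the iteration limit are illustrative assumptions, not taken from the patent:

```python
import math

def adjust_integration_time(measure_signal, t_start, target, tol,
                            fraction=0.8, max_iter=20):
    # Iteratively step the integration time toward a target signal level.
    # measure_signal(t) returns the detector signal level at integration
    # time t; the step response is re-measured around every new point,
    # since the signal/integration-time relationship is not linear.
    t = t_start
    level = measure_signal(t)
    for _ in range(max_iter):
        if abs(level - target) <= tol:
            break  # within the adjustable tolerance range
        dt = 0.01 * t  # small probe step for the local derivative (assumption)
        slope = (measure_signal(t + dt) - level) / dt
        # Estimate the full step toward the target, then apply only a
        # fraction of it (70-90 per cent, preferably about 80 per cent)
        # so as not to risk an overshoot.
        t = t + fraction * (target - level) / slope
        level = measure_signal(t)
    return t, level

def toy_response(t):
    # Illustrative nonlinear detector response (not from the patent).
    return 1000.0 * math.sqrt(t)

t_final, level = adjust_integration_time(toy_response, t_start=1.0,
                                         target=2000.0, tol=10.0)
```

With the toy square-root response the loop converges to within the tolerance in a handful of steps, illustrating why the 80 per cent fraction trades a little convergence speed for freedom from overshoot.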
If the integration time of the camera changes, the signal level will change, and this will be clearly visible in the image. In order to minimize the influence on the image, it is proposed according to an advantageous method that changing of the integration time is implemented at the same time as the IR camera is calibrated during operation against a spade and/or against a defocused scene. The image is frozen for the period during which the camera is calibrated, and for this reason the change in the signal level does not have an effect, but is dealt with in the calibration. Both calibration against a spade and calibration against a defocused scene can then be implemented in a coordinated fashion with the change in the integration time.
According to a further advantageous method, the IR detector is checked during observation of a scene in pixel form in respect of clipping both upwards and downwards. In addition to adjusting the integration time against a target value, a check is also made at the same time that no pixels are clipped. It is possible in this way to prevent a large solar reflectance, for example, from clipping pixels.

According to yet another advantageous method, the target signal level is set depending on the type of the actual scene being observed. When low noise is prioritized, the target signal level can be set close to the maximum signal level for the actual scene. Similarly, in the case of rapid movements and sequences, the target signal level can be set so that a small motion blur is prioritized. The target value for adjusting the integration time can vary as a result.

If low noise is prioritized, for example for a surveillance camera at a long distance, the integration time is optimized against the maximum signal level for the actual scene. If, on the other hand, the camera requires rapid movements and sequences, the integration time can be optimized against the smallest possible motion blur for the actual scene. A short integration time is particularly suitable for small motion blur in high-contrast scenes, whereas the signal level is instead maximized in low-contrast scenes where the noise dominates. A preferred method in this context, when prioritizing low noise, is to set the target signal level close to the maximum signal level for the actual scene. Another preferred method is to set the target signal level for rapid movements and sequences so that a small motion blur is prioritized.
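The scene-dependent choice of target level described above can be sketched as follows; the priority names and the fractions used for headroom and blur limiting are illustrative assumptions, not figures from the patent:

```python
def choose_target_level(scene_max_signal, saturation_level, priority):
    # Pick a target signal level from the scene content and the chosen
    # priority. "low_noise": close to the scene maximum (long integration
    # time, best SNR), with some headroom against clipping. "low_blur":
    # a lower target so a short integration time keeps motion blur small.
    # The 0.95, 0.9 and 0.3 fractions are purely illustrative.
    if priority == "low_noise":
        return min(0.95 * scene_max_signal, 0.9 * saturation_level)
    if priority == "low_blur":
        return 0.3 * scene_max_signal
    raise ValueError("unknown priority: " + priority)
```

For a long-range surveillance camera one would call this with `"low_noise"`; for scenes with rapid movement, with `"low_blur"`.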
It is proposed, especially in one method for the calibration of the IR detector, that the IR detector is calibrated against a spade in the case of high contrast. According to another method, it is proposed that the IR detector is calibrated against a defocused scene in the case of low contrast. According to yet another method, it is proposed that the IR detector, in the case of medium contrast, is calibrated on the one hand against a spade and on the other hand against a defocused scene. In the event of the integration time being changed during operation, a gain map is obtained which was not produced for the active integration time. This causes extra spatial noise (fixed-pattern noise) to be produced. This extra noise is evident in low-contrast scenes and is dealt with by calibrating against a defocused scene, that is to say by performing a so-called optical NUC (NUC = Non-Uniformity Correction), where optical elements are moved in order to defocus the depicted scene.
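The contrast-based selection of calibration mode can be sketched as a simple dispatch; the contrast measure and the two thresholds are illustrative assumptions (the patent does not specify numeric contrast limits):

```python
def select_calibration(contrast, low_threshold=0.2, high_threshold=0.6):
    # High contrast   -> spade NUC (calibration against an inserted spade).
    # Low contrast    -> optical NUC (calibration against a defocused scene).
    # Medium contrast -> combined: spade NUC followed by optical NUC.
    # Threshold values are purely illustrative.
    if contrast >= high_threshold:
        return ["spade_nuc"]
    if contrast <= low_threshold:
        return ["optical_nuc"]
    return ["spade_nuc", "optical_nuc"]
```

The returned list can be read as the calibration sequence: in the combined (medium-contrast) case, a short spade calibration is followed by calibration against the defocused scene.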
When calibrating against a spade, the image performance can be impaired if calibration takes place against a spade that is too hot. According to another proposed method, provision is accordingly made for the temperature of the spade to be checked and for the integration time to be corrected slightly when the temperature of the spade is considered to impair the image performance. The IR camera for the implementation of the method is characterized in that it comprises a means for the signal measurement of scenes, a means for determining a local derivative in the measured signal for the actual integration time, a means for determining steps for changing the integration time in order to achieve a set measurement signal level based on a determined local derivative, the present signal level and the set signal level, a means for the adjustment of the actual integration time with a fraction of the determined step, a means for checking whether the integration time results in an adjustable tolerance range for the measurement level being achieved, and a means for controlling the above-mentioned means depending on the tolerance range and the criteria for the adaptation of the integration time, such as times and scene characteristics.
Brief description of the drawings
The invention is now described in more detail below in exemplified form with reference to the accompanying drawings, in which:
Figure 1 depicts schematically an example of how the integration time for optimal image performance relates to the scene temperature.
Figure 2 depicts schematically a flow chart describing a sequence according to the invention which can be included in conjunction with the adjustment of the integration time for an IR detector.
Figure 3 depicts schematically a calibration which can be coordinated with the dynamic adaptation of the integration time.
Figure 4 depicts schematically a signal level for a detector as a function of the integration time.
Figure 5 depicts schematically an example of an IR camera according to the invention.
Detailed description of the embodiment
Depicted schematically in Figure 1 is an example of how the integration time ti for optimal image performance relates to the scene temperature Tg. The prior art, with two fixed integration times, corresponds to the integration times ti1 and ti2, for each of which a gain map is established. The application of the method for the adjustment of the integration time, which is the subject of this application, causes a displacement to take place along the curve 1 in order to achieve optimal image performance at all temperatures. Thus, in the case of fixed integration times, the image performance is optimized only for a single scene temperature, whereas in the case of a dynamic integration time, a displacement takes place along the curve in order to achieve optimal image performance at all temperatures.
The possible appearance of the flow in a method with a dynamically adapted integration time according to the invention is described schematically below with reference to Figure 2.
The process starts from a starting block 10. Contained in a block 11 are criteria which control subsequent processes in respect of times, scene characteristics, etc. A scene is measured in the subsequent block 12. Step responses from the measured scene are measured in block 13. A suitable step length in order to arrive at a target level is estimated in the subsequent block 14, and the step length is reduced to about 80 per cent. A check is then made in block 15 to establish whether the target level is achieved within an adjustable tolerance range, for which the tolerances can be obtained from block 11, for example. In the event that the target level is not achieved, process steps 14 and 15 are repeated, as indicated by the return arrow 16. A check for the presence of any clipped pixels is also made according to a block 17, and a check of the spade temperature is made according to a block 18. In the event that clipped pixels are present, or that the
spade is too hot, the downward adjustment of the integration time is ensured according to a block 19. Once the requirements according to blocks 15, 17 and 18 have been met, that is to say once the set tolerance requirements have been met, to the effect that no clipped pixels are present and that the spade is not hot, a new integration time is provided, as symbolized by the block 20.
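The checks according to blocks 17 to 19 can be sketched as follows; this is a minimal Python sketch, not the patented implementation, and the saturation level, spade temperature limit and backoff factor are illustrative assumptions:

```python
def guard_integration_time(t_i, frame, spade_temp,
                           sat_level=16383, spade_temp_max=40.0, backoff=0.9):
    """Blocks 17-19: force a downward adjustment of the integration time
    when clipped pixels are present or the spade is too hot.
    All threshold values here are illustrative, not from the patent."""
    # Block 17: check for pixels clipped upwards or downwards
    clipped = any(p <= 0 or p >= sat_level for p in frame)
    # Block 18: check the spade temperature
    if clipped or spade_temp > spade_temp_max:
        return backoff * t_i  # block 19: downward adjustment
    return t_i                # requirements met: integration time unchanged
```

A usage example: an unclipped frame with a cool spade leaves the integration time unchanged, whereas a saturated pixel or a hot spade reduces it.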
Parallel to or in close association with the production of a new integration time, the observed scene is also checked in order to assess whether calibration against a defocused image is necessary; see block 21 in Figure 3. In the event of calibration being necessary, calibration is performed against a spade (spade NUC) if the image has high contrast (see blocks 22 and 23), and against a defocused scene (optical NUC) if the image has low contrast (see blocks 22 and 25), whereas in the case of medium contrast a combined calibration is performed, on the one hand against a spade (spade NUC) and on the other hand against a defocused scene (optical NUC); see blocks 22 and 24. In the case of combined calibration, the spade is inserted at the same time as the defocusing movement begins. This is followed by a short calibration against the spade, which is then withdrawn so that calibration can be performed against the defocused scene. If no calibration is considered to be necessary for the time being, calibration is dispensed with and a new request is awaited, as indicated by the block 26.
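The contrast-based choice between the calibration modes of blocks 22 to 25 can be sketched as follows; the contrast measure and the two threshold values are assumptions introduced for illustration only:

```python
def select_nuc_mode(contrast, low=0.2, high=0.6):
    """Blocks 22-25: pick a calibration mode from a scene-contrast measure.
    The thresholds `low` and `high` are illustrative assumptions."""
    if contrast > high:
        return "spade NUC"    # high contrast: calibrate against the spade
    if contrast < low:
        return "optical NUC"  # low contrast: calibrate against a defocused scene
    # Medium contrast: combined calibration - a short spade NUC first
    # (the spade is inserted as the defocusing movement begins), then
    # optical NUC against the defocused scene.
    return "combined NUC"
```

The design choice here mirrors the description: high-contrast scenes would bias an optical NUC, so the uniform spade is preferred; for low-contrast scenes the defocused scene is already near-uniform and the spade insertion can be skipped.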
The signal level S of a detector is indicated in Figure 4 as a function of the integration time ti. The signal level in this case follows a curve 2. A target level 3 is also plotted on the graph. A prevailing level 4 is plotted as a dashed line and corresponds to the actual integration time tia indicated by the dashed line 5. At the actual integration time tia, the size of the gradient on the signal curve 2 is measured first, after which a suitable step for the integration time is estimated in order to arrive at the target level. In order to avoid the risk of overshoot, approximately 80 per cent of the step length is used. This is followed by a check to establish the size of the step response that was actually achieved by the step, and a new step length is estimated using this step response. The procedure is repeated until the target level is achieved within an adjustable tolerance.
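The stepping procedure described above amounts to a damped Newton-style iteration on the signal curve; the following is a minimal Python sketch under assumed names, with the local gradient estimated by a small probe step:

```python
import math

def step_to_target(signal, t_i, target, tol, damping=0.8, max_iters=20):
    """Iterate the integration time t_i until signal(t_i) lies within
    `tol` of `target`. Approximately 80 per cent of each estimated step
    is used to avoid overshoot. `signal` is the measured level as a
    function of the integration time (names are illustrative)."""
    for _ in range(max_iters):
        s = signal(t_i)
        if abs(s - target) <= tol:
            break                     # target achieved within tolerance
        probe = 0.05 * t_i            # small step for gradient measurement
        grad = (signal(t_i + probe) - s) / probe
        if grad <= 0:
            break                     # no usable gradient; stop stepping
        t_i += damping * (target - s) / grad
    return t_i

# Example with a synthetic, saturating signal curve (not measured data):
def curve(t):
    return 1000.0 * (1.0 - math.exp(-t / 10.0))

t_new = step_to_target(curve, t_i=1.0, target=500.0, tol=5.0)
```

On this synthetic curve the iteration converges in a handful of steps; the damping factor trades convergence speed against the risk of overshooting the target level on a saturating detector response.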
Depicted schematically in Figure 5 is an IR camera 31. Present on the input side of the camera is an input orifice 32, in this case in the form of an input lens 33. The input lens 33 is included in a lens system 44. The optical path from an observed scene is indicated as a central beam 34 and as two peripheral beams 35 and 36. A movable spade 37 is arranged behind the input lens 33. In the embodiment illustrated in Figure 5, the spade is arranged to be introduced between the input lens and the focusing lens. In an alternative embodiment, which is not illustrated here, the spade can be arranged behind the focusing lens when viewed from the input orifice 32 of the IR camera. The proposed spade positions do not exclude other alternatives. The spade can have the form of a flat plate which can be inserted or pushed in by means of a transport mechanism, not illustrated here, of an appropriate, previously disclosed kind. The spade 37 is illustrated in the Figure in an outer position outside the optical path 34-36. The inserted or pushed-in position of the spade 37 is indicated by dashed lines 38, and a dashed bidirectional arrow 39 indicates the movement of the spade between its outer position and the inserted/pushed-in position. Present in the optical path behind the inserted/pushed-in position of the spade is a focusing mechanism which, in the simple form depicted, can consist of a single displaceable lens 40. The displaceable lens 40 is included together with the input lens 33 in the lens system 44. Dashed lines 41 indicate an alternative position for the lens 40, and a dashed bidirectional arrow 42 indicates the displacement movement of the lens 40. Once past the focusing mechanism, the beams 34-36 strike an IR sensor, also referred to as an IR detector 43, which supplies an image in pixel form to an image processing unit 45 connected to a memory 46. The IR sensor can be either of the type which has cooled detectors or of the type with uncooled detectors. Among other things, gain maps 47 and offset maps 48 can be stored in the memory 46. A display 49 is provided for the presentation of the final, image-processed and compensated image. The input lens 33 and the displaceable lens 40 in the lens system 44 can, of course, be replaced by other optical components, in terms of both type and number, in order to achieve adequate functions.
With the IR camera described in the previous paragraph, calibration can be performed both as calibration against a spade, known as spade NUC, by inserting the spade 37, and as calibration against a defocused optical scene, known as optical NUC, by displacing the lens 40 away from a position which focuses the observed scene.
The image processing unit also comprises a means for processing the signal received from the IR detector 43 for signal measurement of the scene, determination of the gradient of the measured signal, determination of the step for changing the integration time, adjustment of the integration time, checking the tolerance range, and control. The image processing unit preferably consists of a processor, for example a microprocessor.
The invention is not restricted to the methods and the camera configurations described above by way of example, and it may undergo modifications within the scope of the following Patent Claims.
Claims
1. Method for adjusting the integration time for an IR detector in an IR camera, the integration time being adapted to the scene content, characterized in that the integration time is adapted dynamically in the course of the repeated determination of the integration time on the basis of a local derivative of the scene content during operation of the IR camera.
2. Method according to Patent Claim 1, characterized in that the following steps are included in the dynamic adaptation of the integration time:
a) signal measurement of the scene,
b) determination of a local derivative of the measured signal for the actual integration time,
c) determination of steps for changing the integration time in order to achieve a set target signal level based on a determined local derivative, the present signal level and the set target signal level,
d) adjustment of the actual integration time with a fraction of a determined step,
e) checking whether the integration time results in the achievement of an adjustable tolerance range for the target level,
f) repetition of steps b) to e) in the event of an adjustable tolerance range for the target signal level not being achieved.
3. Method according to Patent Claim 2, characterized in that
g) steps a) to f) are repeated depending on the set criteria for adaptation of the integration time in the form of times and scene characteristics.
4. Method according to Patent Claim 2 or 3, characterized in that the fraction of a determined step is selected within a range of between 70 and 90 per cent of the step, and preferably approximately 80 per cent of the step.
5. Method according to one of the preceding Patent Claims, characterized in that the changing of the integration time is implemented at the same time as the IR camera is calibrated during operation against a spade and/or against a defocused scene.
6. Method according to one of the preceding Patent Claims, characterized in that the IR detector is checked during observation of a scene in pixel form in respect of clipping both upwards and downwards.
7. Method according to one of the preceding Patent Claims, characterized in that the target signal level is set depending on the type of the actual scene being observed.
8. Method according to Patent Claim 7, characterized in that, when prioritizing low noise, the target signal level is set close to the maximum signal level for the actual scene.
9. Method according to Patent Claim 7, characterized in that the target signal level in the case of rapid movements and sequences is set so that a small motion blur is prioritized.
10. Method according to one of the preceding Patent Claims, characterized in that the IR detector is calibrated against a spade in the case of high contrast.
11. Method according to one of the preceding Patent Claims, characterized in that the IR detector is calibrated against a defocused scene in the case of low contrast.
12. Method according to one of the preceding Patent Claims, characterized in that the IR detector, in the case of medium contrast, is calibrated on the one hand against a spade and on the other hand against a defocused scene.
13. Method according to one of Patent Claims 10, 11 or 12, characterized in that the temperature of the spade is checked, and in that the integration time is corrected slightly when the temperature of the spade is considered to impair the image performance.
14. IR camera for the implementation of the method according to one of the preceding Patent Claims 1-13, which IR camera comprises a means for adjusting the integration time for an IR detector comprised therein, characterized in that the IR camera comprises a means for the signal measurement of a scene, a means for determining a local derivative in the measured signal for the actual integration time, a means for determining steps for changing the integration time in order to achieve a set measurement signal level based on a local derivative, the present signal level and the set signal level, a means for the adjustment of the actual integration time with a fraction of the determined step, a means for verifying whether the integration time results in the adjustable tolerance range for the measurement level being achieved, and a means for controlling the above-mentioned means depending on the tolerance range and the criteria for the adaptation of the integration time, such as times and scene characteristics.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SE1130098-5 | 2011-10-14 | ||
| SE1130098A SE536252C2 (en) | 2011-10-14 | 2011-10-14 | Method for setting integration time for an IR detector, and IR camera for performing the procedure |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2013055274A2 (en) | 2013-04-18 |
| WO2013055274A3 (en) | 2013-06-20 |
Family
ID=48082698
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SE2012/000154 WO2013055274A2 (en) | 2011-10-14 | 2012-10-09 | Method for adjusting the integration time for an ir detector, and an ir camera for implementing the method |
Country Status (2)
| Country | Link |
|---|---|
| SE (1) | SE536252C2 (en) |
| WO (1) | WO2013055274A2 (en) |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP3432705B2 (en) * | 1997-06-23 | 2003-08-04 | 三菱電機株式会社 | Imaging device |
| US6737639B2 (en) * | 2002-03-27 | 2004-05-18 | Raytheon Company | Display uniformity calibration system and method for a staring forward looking infrared sensor |
| US8569684B2 (en) * | 2009-11-06 | 2013-10-29 | Steven J. Olson | Infrared sensor control architecture |
- 2011
  - 2011-10-14 SE SE1130098A patent/SE536252C2/en unknown
- 2012
  - 2012-10-09 WO PCT/SE2012/000154 patent/WO2013055274A2/en active Application Filing
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015136525A1 (en) * | 2014-03-13 | 2015-09-17 | Opgal Optronic Industries Ltd. | Optical system and infrared camera with a non-uniformity correction shutter integrated within the optical system |
| CN105136308A (en) * | 2015-05-25 | 2015-12-09 | 北京空间机电研究所 | Adaptive correction method under variable integral time of infrared focal plane array |
| WO2017100696A1 (en) * | 2015-12-09 | 2017-06-15 | Flir Systems Ab | Dynamic frame rate controlled thermal imaging systems and methods |
| US10834337B2 (en) | 2015-12-09 | 2020-11-10 | Flir Systems Ab | Dynamic frame rate controlled thermal imaging systems and methods |
| CN108061602A (en) * | 2017-10-25 | 2018-05-22 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of highlighted suppressing method based on infrared imaging system |
| CN108061602B (en) * | 2017-10-25 | 2020-05-19 | 中国航空工业集团公司洛阳电光设备研究所 | Highlight inhibition method based on infrared imaging system |
| US11659292B2 (en) | 2017-12-29 | 2023-05-23 | FLIR Systemes AB | Image output adjustment responsive to integration time changes for infrared imaging devices |
Also Published As
| Publication number | Publication date |
|---|---|
| SE536252C2 (en) | 2013-07-16 |
| WO2013055274A3 (en) | 2013-06-20 |
| SE1130098A1 (en) | 2013-04-15 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12840816; Country of ref document: EP; Kind code of ref document: A2 |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 12840816; Country of ref document: EP; Kind code of ref document: A2 |