WO2018173031A1 - System and method for compensating diffraction - Google Patents
- Publication number: WO2018173031A1 (application PCT/IL2017/050368)
- Authority: WIPO (PCT)
- Prior art keywords: image, image data, imaging, filter, linear filter
- Prior art date
Classifications
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering (under G06T—Image data processing or generation, in general)
- G06T2207/10048—Infrared image (under G06T2207/10—Image acquisition modality)
- G06T2207/10116—X-ray image (under G06T2207/10—Image acquisition modality)
- G06T2207/10132—Ultrasound image (under G06T2207/10—Image acquisition modality)
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
A method of imaging comprises: capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object, thereby providing image data; applying a linear filter to the image data to provide filtered image data; down-shifting a spatial frequency of the filtered image data; and reconstructing an image based on the down-shifted image data, thereby processing the image data.
Description
SYSTEM AND METHOD FOR COMPENSATING DIFFRACTION
FIELD AND BACKGROUND OF THE INVENTION
The present invention, in some embodiments thereof, relates to image processing and, more particularly, but not exclusively, to a system and a method for compensating diffraction.
Wave diffraction is a known physical phenomenon, in which the propagation direction of a wave is redirected due to interaction between the wave and small size objects. In case of waves constituting an image, the diffraction may generate spatial distortion of the image information. When the distorted image information is processed to reconstruct an image for displaying, the distortion can reduce the sharpness or produce other undesired aberrations. For this reason, many imaging modalities utilize optical elements such as lenses for compensating diffraction. There are modalities, however, in which it is not possible or undesired to employ such diffraction compensating elements.
The present Inventors discovered a technique that can be used for compensating diffraction by means of image processing.
SUMMARY OF THE INVENTION
According to an aspect of some embodiments of the present invention there is provided a method of processing image data generated by capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object. The method comprises: applying a linear filter to the image data to provide filtered image data; down-shifting a spatial frequency of the filtered image data; and reconstructing an image based on the down-shifted image data, thereby processing the image data.
According to an aspect of some embodiments of the present invention there is provided a method of imaging. The method comprises: capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object, thereby providing image data; applying a linear filter to the image data to provide filtered image data; down-shifting a spatial frequency of the filtered image data; and
reconstructing an image based on the down-shifted image data, thereby processing the image data.
According to some embodiments of the invention the method comprises applying a band pass filter to the image data prior to the application of the linear filter.
According to some embodiments of the invention the linear filter is a spatial filter and the method is executed without transforming the image data to a spectral domain.
According to some embodiments of the invention the image is an electron beam image. According to some embodiments of the invention the image is an X-ray image. According to some embodiments of the invention the image is an ultrasound image. According to some embodiments of the invention the image is a thermal image. According to some embodiments of the invention the image is an ultraviolet image. According to some embodiments of the invention the image is a visible image. According to some embodiments of the invention the image is an infra-red image. According to some embodiments of the invention the image is captured by an imaging system that is devoid of any diffraction compensating optical element.
According to some embodiments of the invention the filter is a spectral filter, and the method comprises transforming the image data to a spectral domain prior to application of the linear filter.
According to some embodiments of the invention the linear filter is characterized by a cutoff parameter ε, which is higher than 10% of a difference between a maximal intensity and an average intensity of the image. According to some embodiments of the invention the linear filter is characterized by a cutoff parameter ε, which is less than 90% of a difference between the maximal intensity and an average intensity of the image. According to some embodiments of the invention the cutoff parameter ε is less than 90% and above 10% of the difference between the maximal intensity and the average intensity of the image. According to some embodiments of the invention the cutoff parameter ε equals about one third of the difference between the maximal intensity and the average intensity of the image.
According to an aspect of some embodiments of the present invention there is provided an imaging kit. The kit comprises a grating selected to up-shift a spatial frequency of radiation received from an object, an imaging system configured for imaging the object through the grating to provide image data, and an image processor configured to apply a linear filter to the image data to provide filtered image data, and to down-shift a spatial frequency of the filtered image data.
According to some embodiments of the invention the image processor of the imaging kit is configured for applying a band pass filter to the up-shifted image data prior to the application of the linear filter.
According to some embodiments of the invention the imaging system is an electron beam imaging system. According to some embodiments of the invention the imaging system is an X-ray imaging system. According to some embodiments of the invention the imaging system is an ultrasound imaging system. According to some embodiments of the invention the imaging system is a thermal imaging system. According to some embodiments of the invention the imaging system is an ultraviolet imaging system. According to some embodiments of the invention the imaging system is devoid of any diffraction compensating optical element.
According to an aspect of some embodiments of the present invention there is provided a system for processing image data. The system comprises an input for receiving image data, and an image processor configured to apply a linear filter to the image data to provide filtered image data, to down-shift a spatial frequency of the filtered image data, and to reconstruct an image based on the down-shifted image data.
According to some embodiments of the invention the filter is a spatial filter.
According to some embodiments of the invention the filter is a spectral filter, and the image processor is configured for transforming the image data to a spectral domain prior to application of the linear filter.
According to some embodiments of the invention the linear filter is characterized by a cutoff parameter ε, which is higher than 10% of a difference between a maximal intensity and an average intensity of the image. According to some embodiments of the invention the linear filter is characterized by a cutoff parameter ε, which is less than 90% of a difference between the maximal intensity and an average intensity of the image. According to some embodiments of the invention the cutoff parameter ε is less than 90% and above 10% of the difference between the maximal intensity and the average intensity of the image. According to some embodiments of the invention the cutoff
parameter ε equals about one third of the difference between the maximal intensity and the average intensity of the image.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings and images. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of
example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
FIG. 1 is a flowchart diagram describing a method suitable for processing image data, according to some embodiments of the present invention;
FIG. 2 is a schematic illustration of an imaging kit according to some embodiments of the present invention;
FIGs. 3A-C are images demonstrating diffraction compensation without up-shifting a spatial frequency of the image data, as obtained in experiments performed according to some embodiments of the present invention;
FIG. 4 is a graph that compares between the performances of the technique of the present embodiments when up-shifting of the spatial frequency is employed, and the performances of the technique of the present embodiments when up-shifting of the spatial frequency is not employed;
FIGs. 5A-C are images comparing the diffraction compensation without spatial frequency up-shifting to diffraction compensation with spatial frequency up-shifting, as obtained in experiments performed according to some embodiments of the present invention;
FIG. 6 is a graph showing a relation between a cutoff parameter and image contrast, as obtained in experiments performed according to some embodiments of the present invention; and
FIG. 7 shows the difference between the maximal intensity and the average intensity of an image, as a function of the maximal variation of the absorption coefficient of an imaged object.
DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
The present invention, in some embodiments thereof, relates to image processing and, more particularly, but not exclusively, to a system and a method for compensating diffraction.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of
construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
It is to be understood that, unless otherwise defined, the operations described hereinbelow can be executed either contemporaneously or sequentially in many combinations or orders of execution. Specifically, the ordering of the flowchart diagrams is not to be considered as limiting. For example, two or more operations, appearing in the following description or in the flowchart diagrams in a particular order, can be executed in a different order (e.g., a reverse order) or substantially contemporaneously. Additionally, several operations described below are optional and may not be executed.
FIG. 1 is a flowchart diagram describing a method suitable for processing image data, according to some embodiments of the present invention.
The method begins at 10 and optionally and preferably continues to 11 at which an image of an object is captured through a grating to provide image data. Alternatively, the image can be obtained from a different source (e.g., a computer readable medium, the internet, a cloud storage facility, etc.). The capturing, when executed, is by an imaging system that receives radiation, optionally and preferably coherent radiation, from the object, through the grating, and outputs the image data. The grating is preferably placed in proximity to or on the object. When the grating is placed in proximity to the object, the distance between the grating and the object is larger than the distance between the grating and the imaging system that captures the image. Preferably, the position and size of the grating is such that the entire field-of-view of the object, from the viewpoint of the imaging system, is behind the grating with respect to the imaging system.
The method of the present embodiments is particularly useful when at least a portion of the radiation from the object experiences diffraction before being recorded by the imaging system. Preferably, the imaging system records the diffracted portions of the radiation without correcting them by optical means (e.g., lenses and the like). When the radiation also includes non-diffracted portions, the imaging system optionally and preferably records these portions as well.
The imaging system typically employs a pixelated imager (e.g., a CMOS or a CCD imager) that resolves the spatial distribution of the received radiation.
The radiation can be of any type, including, without limitation, electromagnetic radiation, electron beam radiation and ultrasound radiation. When the radiation is electromagnetic radiation, it can be a visible light radiation, an infrared radiation, an ultraviolet radiation, an X-ray radiation, and the like. Thus, the present embodiments can be used for processing images captured by many types of imaging techniques, including, without limitation, electron beam imaging, X-ray imaging, computerized tomography, ultrasound imaging, thermal imaging, ultraviolet imaging, infrared imaging, visible light imaging, and the like.
The image data is typically arranged gridwise in a plurality of picture-elements (e.g., pixels, arrangements of pixels) representing the image.
The term "pixel" is sometimes abbreviated herein to indicate a picture-element. However, this is not intended to limit the meaning of the term "picture-element" which refers to a unit of the composition of an image.
References to an "image" herein are, inter alia, references to values at picture-elements treated collectively as an array, typically a two-dimensional array. Thus, the term "image" as used herein also encompasses a mathematical object which does not necessarily correspond to a physical object. The original and processed images do, of course, correspond to a physical object, namely the object from which the imaging data are acquired.
Each pixel in the image can be associated with a single digital intensity value, in which case the image is a grayscale image. Alternatively, each pixel is associated with three or more digital intensity values sampling the amount of light at three or more different color channels (e.g., red, green and blue) in which case the image is a color image. Also contemplated are images in which each pixel is associated with a mantissa for each color channel and a common exponent (e.g., the so-called RGBE format). Such images are known as "high dynamic range" images.
It is recognized that while diffraction is a linear optical process that can be corrected by means of linear optics, the diffraction is not a linear process once captured by an imager since the electrical signal generated by the imager is indicative of the optical intensity, which is proportional to the square of the optical electromagnetic field.
Thus, the imager destroys the linearity by generating an electrical signal which is nonlinear with respect to the optical electromagnetic field. The technique of the present embodiments successfully mitigates diffraction effects exhibited by the optical signal after image capture, even though the diffraction is non-linear with respect to the signal generated by the imager.
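As a numerical aside (purely illustrative, not part of the claimed method), the loss of linearity can be seen by comparing the recorded intensity of a superposition of two coherent field contributions with the sum of their individual intensities; the cross term survives the squaring:

```python
import numpy as np

# Two coherent field contributions reaching the same detector pixel (illustrative values).
E1 = 0.8 * np.exp(1j * 0.3)
E2 = 0.5 * np.exp(1j * 1.7)

I_superposition = np.abs(E1 + E2) ** 2     # what the imager actually records
I_sum = np.abs(E1) ** 2 + np.abs(E2) ** 2  # what a response linear in intensity would give

print(I_superposition, I_sum)  # the values differ by the cross term 2*Re(E1*conj(E2))
```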
The technique of the present embodiments is particularly useful when the imaged object is characterized by sufficiently small variation (e.g., variation of less than 50% or less than 40% or less than 30% or less than 20% or less than 10% or less than 5%) of an absorption coefficient or a refraction index across a surface of the object from which the radiation is received.
The technique of the present embodiments is particularly useful when the image is characterized by sufficiently small intensity variations, e.g., intensity variation of less than 50% or less than 40% or less than 30% or less than 20% or less than 10% or less than 5% from the average intensity of the image. The intensity of a picture-element of the image can be expressed, for example, in gray-level units.
The grating is optionally and preferably characterized by a spatial frequency that is higher than the maximal spatial frequency that an image would have had, had the image been captured by the same imaging device but without the grating. Denoting the maximal spatial frequency of an image captured without the grating by FM, the spatial frequency DF of the grating optionally and preferably satisfies DF>X*FM, where X is at least 1.1 or at least 1.2 or at least 1.3 or at least 1.4 or at least 1.5. Thus, the grating effects a shift in the spatial frequency of the image data towards higher spatial frequencies.
The method optionally and preferably continues to 13 at which a spatial band pass filter is applied to the image data. The cutoff frequencies of the band pass filter are optionally and preferably selected so as to remove frequencies below DF-FM and above DF+FM.
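By way of a non-limiting illustration, the two selection rules above (DF > X*FM for the grating, and a pass band of [DF - FM, DF + FM] for the band pass filter) may be sketched as follows; the ideal annular mask, the radial definition of the pass band, and all parameter names are illustrative assumptions rather than requirements of the method:

```python
import numpy as np

def bandpass_around_grating(image, DF, FM, pixel_pitch=1.0, X=1.2):
    """Keep spatial frequencies between DF - FM and DF + FM (radial, cycles per unit length).

    DF is the grating's spatial frequency; FM is the maximal spatial frequency the image
    would have had without the grating. A hard annular mask is used for simplicity.
    """
    assert DF > X * FM, "grating frequency should exceed the image bandwidth (DF > X*FM)"

    ny, nx = image.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    R = np.hypot(FX, FY)                      # radial spatial frequency of each FFT bin

    mask = (R >= DF - FM) & (R <= DF + FM)    # remove frequencies below DF-FM and above DF+FM
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
```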
The method preferably continues to 14 at which a linear filter is applied to the image data, optionally and preferably the band-pass filtered image data.
The filter can be applied by an image processor, which can process the image data to provide output image data composed of a summation of successive data samples weighted by individual coefficients. The data samples are optionally and preferably two-dimensional data samples, and the filter is optionally and preferably a two-dimensional filter. The operation of the image processor when applying the filter is typically described by a mathematical filtering function, which optionally and preferably provides the individual coefficients and the number of samples. Representative examples of filtering functions suitable for the present embodiments are provided below. It was found by the present Inventors that this combination of a shift in spatial frequencies and application of a linear filter mitigates the diffraction effect in the image.
In some embodiments of the present invention the filtering function that describes the filter applied at 14 is characterized by a cutoff parameter ε. The cutoff parameter ε is preferably dimensionless. The cutoff parameter ε can correspond to a number of terms in a series expansion of the filter. For example, the number of terms can be about 1/ε, e.g., the nearest integer of 1/ε or the floor of 1/ε or the ceiling of 1/ε.
In various exemplary embodiments of the invention the cutoff parameter ε is selected based on the maximal variation of the absorption coefficient or refraction index across a surface of the object. As shown in the Examples section that follows, the maximal variation of the absorption coefficient or refraction index can be approximated using the difference between the maximal intensity and the average intensity of the image. Thus, according to some embodiments of the present invention the cutoff parameter ε can be about Y*m, where m is the difference between the maximal intensity and the average intensity of the image, and where Y is a positive number which is preferably less than 1, e.g., from about 0.1 to about 0.9, or from about 0.1 to about 0.8, or from about 0.1 to about 0.7, or from about 0.1 to about 0.6, or from about 0.1 to about 0.5, or from about 0.1 to about 0.4, or from about 0.2 to about 0.4, e.g., about 0.3. Other values of Y (e.g., less than 0.1 or more than 0.9) are also contemplated.
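As a concrete, non-limiting reading of this rule, ε can be estimated directly from the image statistics; the sketch below assumes intensities are expressed in normalized gray-level units and uses Y = 1/3 as the "about one third" default:

```python
import numpy as np

def cutoff_parameter(image, Y=1.0 / 3.0):
    """Return epsilon = Y * m, where m is the difference between the maximal
    and the average intensity of the image (dimensionless if the image is normalized)."""
    m = float(np.max(image) - np.mean(image))
    return Y * m

# If the series-expansion interpretation is used, the number of retained terms
# would be roughly round(1.0 / epsilon).
```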
The filter that is applied at 14 can be embodied in more than one way. In some embodiments of the present invention, a spectral filter is applied globally to the entire image, as disclosed, for example, in Gureyev, Opt. Commun. 220, 49-58 (2003), or Gureyev et al., Opt. Commun. 231, 53-70 (2004), or Gureyev et al., Appl. Opt. 43, 2418-2430 (2004).
In some embodiments of the present invention, the image data are transformed to the spectral domain and a spectral filter is applied to the transformed image data. Thereafter, the image data is transformed back to the spatial domain. The transform
from the spatial domain to the spectral domain can be by applying a Fourier Transform (FT) or a Fast FT (FFT) to the image data I(x, y) to provide spectral image data I(ωx, ωy), where x and y are spatial coordinates of picture-elements in the image and ωx and ωy are frequencies of the spectral image. The transform from I(ωx, ωy) back to I(x, y) can be by applying the inverse of the transform that was applied to I(x, y). A representative example of a filtering function G(ωx, ωy) suitable for describing a spectral filter that can be applied to I(ωx, ωy) for the case of monochromatic imaging, is:
G(ωx, ωy) = sec(α(ωx² + ωy²) - iε) (EQ. 1)

where α = z/(2k), z is the distance between the object and the imaging system, k = 2πn0/λ is the beam's wavenumber, n0 is the refractive index of the medium, and λ is the wavelength of the light.
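A non-limiting sketch of how EQ. 1 may be applied: transform to the spectral domain, multiply by G, and transform back. The reading of ωx and ωy as angular spatial frequencies, the placement of the -iε regularization inside the secant, and the FFT-based sampling are assumptions of this sketch:

```python
import numpy as np

def apply_spectral_filter(image, wavelength, z, pixel_pitch, eps, n0=1.0):
    """Apply G(wx, wy) = sec(alpha*(wx**2 + wy**2) - 1j*eps) in the Fourier domain (EQ. 1),
    with alpha = z / (2k) and k = 2*pi*n0 / wavelength (monochromatic case)."""
    k = 2.0 * np.pi * n0 / wavelength
    alpha = z / (2.0 * k)

    ny, nx = image.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel_pitch)   # angular spatial frequencies
    wy = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel_pitch)
    WX, WY = np.meshgrid(wx, wy)

    G = 1.0 / np.cos(alpha * (WX ** 2 + WY ** 2) - 1j * eps)   # sec(u) = 1/cos(u)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```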
In some embodiments of the present invention, a spatial filter is employed. A representative example of a filtering function g(x,y) suitable for describing a spatial filter that can be applied to I(x, y) for the case of monochromatic imaging, is:
While EQs. 1-3 above are suitable, it is to be understood that other filtering functions are also contemplated.
The method can optionally and preferably continue to 15 at which the spatial frequency of the image data is down-shifted, optionally and preferably to the original spatial frequency, and to 16 at which an image is reconstructed based on the downshifted image data.
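To tie the steps of FIG. 1 together, a schematic, non-limiting pipeline is sketched below; it reuses the hypothetical helpers sketched above, and implements the down-shift as demodulation by an assumed one-dimensional grating carrier along the x axis, which is only one possible reading of "down-shifting to the original spatial frequency":

```python
import numpy as np

def compensate_diffraction(raw, DF, FM, wavelength, z, pixel_pitch):
    """Schematic pipeline: band-pass around the grating band, apply the linear filter,
    down-shift back to baseband, and reconstruct the image (steps 13-16 of FIG. 1)."""
    # 13: optional band pass around the up-shifted band [DF - FM, DF + FM]
    banded = bandpass_around_grating(raw, DF, FM, pixel_pitch)

    # 14: linear (spectral) filter with cutoff parameter epsilon ~ m / 3
    eps = cutoff_parameter(raw)
    filtered = apply_spectral_filter(banded, wavelength, z, pixel_pitch, eps)

    # 15: down-shift by demodulating with the grating carrier along x (assumed 1-D grating)
    x = np.arange(raw.shape[1]) * pixel_pitch
    carrier = np.exp(-2j * np.pi * DF * x)[np.newaxis, :]

    # 16: reconstruct the image from the down-shifted data
    return np.real(filtered * carrier)
```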
The method ends at 7.
FIG. 2 is a schematic illustration of a kit 20 according to some embodiments of the present invention. Kit 20 comprises a data processing system 30 having a computer 32, which typically comprises an input/output (I/O) circuit 34, an image processor, such as a central processing unit (CPU) 36 (e.g., a microprocessor), and a memory 38 which typically includes both volatile memory and non-volatile memory. I/O circuit 34 is used to communicate information in appropriately structured form to and from CPU 36 and other devices or networks external to system 30. CPU 36 is in communication with I/O circuit 34 and memory 38. These elements are those typically found in most general purpose computers and are known per se.
A display device 40 is shown in communication with computer 32, typically via I/O circuit 34. Computer 32 issues to display device 40 output images generated by CPU 36. A keyboard 42 is also shown in communication with computer 32, typically via I/O circuit 34.
It will be appreciated by one of ordinary skill in the art that system 30 can be part of a larger system. For example, system 30 can also be in communication with a network, such as a local area network (LAN), the Internet or a cloud computing resource of a cloud computing facility.
Kit 20 can additionally comprise a grating 47 selected to up-shift a spatial frequency of radiation 50 received from an object 52, and an imaging system 44 that images object 52 through grating 47 to provide image data.
Imaging system 44 is preferably devoid of any diffraction compensating optical element. Representative examples of imaging systems suitable for the present embodiments include, without limitation, an electron beam imaging system, an X-ray imaging system, an ultrasound imaging system, a thermal imaging system, an ultraviolet imaging system, an infrared imaging system and a visible light imaging system.
In some embodiments of the invention computer 32 of system 30 is configured for receiving the image data from imaging system 44 or a computer readable storage 46, applying a linear filter to the image data, down-shifting the spatial frequency of the filtered image data, reconstructing an image based on the down-shifted image data, as further detailed hereinabove, and displaying the image on display 40.
In some embodiments of the invention system 30 communicates with a cloud computing resource (not shown) of a cloud computing facility, wherein the cloud computing resource receives the image data, applies the linear filter, down-shifts the spatial frequency of the filtered image data, reconstructs an image based on the down-shifted image data as further detailed hereinabove, and transmits it to system 30 for displaying the image on display 40.
The method as described above can be implemented in computer software executed by system 30. For example, the software can be stored in or loaded to memory 38 and executed on CPU 36. Thus, some embodiments of the present invention comprise a computer software product which comprises a computer-readable medium, more preferably a non-transitory computer-readable medium, in which program instructions are stored. The instructions, when read by computer 32, cause computer 32 to receive the image data and execute the method as described above.
As used herein the term "about" refers to ± 10 %.
The word "exemplary" is used herein to mean "serving as an example, instance or illustration." Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments." Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.
The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to".
The term "consisting of" means "including and limited to".
The term "consisting essentially of" means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as
from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Various embodiments and aspects of the present invention as delineated hereinabove and as claimed in the claims section below find experimental support in the following examples.
EXAMPLES
Reference is now made to the following examples, which together with the above descriptions illustrate some embodiments of the invention in a non limiting fashion.
Experiments were performed to demonstrate the ability of the system and method of the present embodiments to compensate for diffraction effects. The results are presented in FIGs. 3A-5C.
FIGs. 3A-C are images demonstrating the diffraction compensation without up- shifting the spatial frequency of the image data. FIG. 3A shows the original image, FIG. 3B shows the diffracted image from which the input image data was obtained, and FIG. 3C shows the output reconstructed (after applying the filter) image. As shown, many features in the original image are successfully reconstructed.
FIG. 4 is a graph that compares between the performances of the technique when up-shifting of the spatial frequency is employed, and the performances of the technique when up-shifting of the spatial frequency is not employed. The comparison is for the case in which the modulation depth is 100%. The simulations shown in FIG. 4 correspond to the following set of parameters: image size 512x512 pixels, detector array size 256x256 μm, radiation wavelength of 1 angstrom, diffraction distance of 1 m, max phase difference of 1 radian, and phase and amplitude linear ratio of about 10.
In FIG. 4, the dot-dashed line corresponds to the original image data, the dotted line corresponds to the diffracted image without filtering and without spatial frequency up-shifting, the dashed line corresponds to a corrected image after filtering but without spatial frequency up-shifting, and the solid line corresponds to a corrected image after spatial frequency up-shifting and filtering. As shown, both the solid and dashed lines successfully reconstruct the original image data, with a substantial improvement in the solid line relative to the dashed line.
FIGs. 5A-C are images comparing the diffraction compensation without the spatial frequency up-shifting to the diffraction compensation with the spatial frequency up-shifting. FIG. 5A shows the original image, FIG. 5B shows the diffraction compensation without the spatial frequency up-shifting, and FIG. 5C shows the diffraction compensation with the spatial frequency up-shifting. As shown, both techniques successfully reconstruct the original image, with a substantial improvement in FIG. 5C relative to FIG. 5B.
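For completeness, the following sketch shows one way the up-shift and down-shift steps could be modeled numerically. It is an assumption, not the disclosed implementation: the grating is represented as a one-dimensional cosine carrier multiplying the field before diffraction, and the down-shift as demodulation by the same carrier followed by a low-pass mask; carrier_freq, lp_cutoff and the function names are illustrative.

```python
# Illustrative sketch of the up-shift / down-shift steps (an assumption; the
# patent does not give code): the grating is modeled as a cosine carrier
# multiplying the field, and the down-shift as demodulation by the same
# carrier followed by a low-pass mask applied in the frequency domain.
import numpy as np

def apply_grating(field, carrier_freq, pixel_pitch):
    """Model imaging through the grating as multiplication by a cosine carrier."""
    ny, nx = field.shape
    x = np.arange(nx) * pixel_pitch
    carrier = 0.5 + 0.5 * np.cos(2.0 * np.pi * carrier_freq * x)
    return field * carrier[None, :]

def down_shift(filtered_image, carrier_freq, pixel_pitch, lp_cutoff):
    """Demodulate the filtered image data back to baseband and low-pass it."""
    ny, nx = filtered_image.shape
    x = np.arange(nx) * pixel_pitch
    demodulated = filtered_image * np.cos(2.0 * np.pi * carrier_freq * x)[None, :]
    spectrum = np.fft.fft2(demodulated)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    spectrum[np.hypot(FX, FY) > lp_cutoff] = 0.0  # keep only the baseband
    return np.real(np.fft.ifft2(spectrum))
```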
Experiments have been made to determine suitable values of the cutoff parameter ε. The image of FIG. 5A has been modified so as to vary its contrast (the difference between the maximal intensity and the average intensity). For each modified version of the image, the value of ε that allows reconstruction of the image has been determined. The results are presented in FIG. 6, which is a graph of the value of ε that allows adequate reconstruction of the image as a function of the modified contrast m. Generally, setting ε to be m/3 allows adequate reconstruction of the image. The circles in FIG. 6 show the experimental results and the solid line in FIG. 6 corresponds to the relation ε = m/3.
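The empirical rule reported above translates directly into a short helper; the sketch below simply computes about one third of the difference between the maximal and average intensity of an image, and is not taken from the patent itself.

```python
# Sketch of the empirical rule reported above: set the cutoff parameter to
# about one third of the image contrast m, where m is the difference between
# the maximal intensity and the average intensity of the image.
import numpy as np

def cutoff_epsilon(image, fraction=1.0 / 3.0):
    """Return epsilon = fraction * (maximal intensity - average intensity)."""
    image = np.asarray(image, dtype=float)
    return fraction * (image.max() - image.mean())
```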
Experimental simulations have also been made to determine how the maximal variation of the absorption coefficient or refraction index of the object relates to the difference between the maximal intensity and the average intensity of the image. The results are shown in FIG. 7, which shows the difference between the maximal intensity and the average intensity (denoted m_image) as a function of the maximal variation of the absorption coefficient of the object (denoted m_object). As shown, the relation between m_object and m_image is, to a good approximation, linear, and so variations in the absorption coefficient or refraction index of the object can be approximated by intensity variations over the image.
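To verify a relation of this kind on one's own data, a least-squares line can be fitted to measured (m_object, m_image) pairs; the helper below is an illustrative sketch and does not reproduce the data of FIG. 7.

```python
# Illustrative helper (not reproducing the data of FIG. 7): fit a straight
# line m_image ≈ a * m_object + b to measured pairs in order to check the
# approximately linear relation described above.
import numpy as np

def fit_linear_relation(m_object, m_image):
    """Least-squares linear fit; returns the slope a and intercept b."""
    a, b = np.polyfit(np.asarray(m_object, dtype=float),
                      np.asarray(m_image, dtype=float), deg=1)
    return a, b
```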
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
Claims
1. A method of processing image data generated by capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object, the method comprising:
applying a linear filter to the image data to provide filtered image data;
down-shifting a spatial frequency of said filtered image data; and
reconstructing an image based on the down-shifted image data, thereby processing the image data.
2. A method of imaging, comprising:
capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object, thereby providing image data;
applying a linear filter to said image data to provide filtered image data;
down-shifting a spatial frequency of said filtered image data; and
reconstructing an image based on the down-shifted image data, thereby processing the image data.
3. The method according to any of claims 1 and 2, further comprising applying a band pass filter to said image data prior to said application of said linear filter.
4. The method according to any of claims 1 and 2, wherein said linear filter is a spatial filter and the method is executed without transforming the image data to a spectral domain.
5. The method according to claim 3, wherein said linear filter is a spatial filter and the method is executed without transforming the image data to a spectral domain.
6. The method according to any of claims 1-5, wherein said image is an electron beam image.
7. The method according to any of claims 1-5, wherein said image is an X- ray image.
8. The method according to any of claims 1-5, wherein said image is an ultrasound image.
9. The method according to any of claims 1-8, wherein said image is a thermal image.
10. The method according to any of claims 1-8, wherein said image is an ultraviolet image.
11. The method according to any of claims 1-8, wherein said image is a visible image.
12. The method according to any of claims 1-8, wherein said image is an infra-red image.
13. The method according to any of claims 1 and 2, wherein said image is captured by an imaging system that is devoid of any diffraction compensating optical element.
14. The method according to any of claims 2-10, wherein said image is captured by an imaging system that is devoid of any diffraction compensating optical element.
15. The method according to any of claims 1 and 2, wherein said filter is a spatial filter.
16. The method according to any of claims 2-13, wherein said filter is a spatial filter.
17. The method according to any of claims 1 and 2, wherein said filter is a spectral filter, and the method comprises transforming the image data to a spectral domain prior to application of said linear filter.
18. The method according to any of claims 3-13, wherein said filter is a spectral filter, and the method comprises transforming the image data to a spectral domain prior to application of said linear filter.
19. The method according to any of claims 1 and 2, wherein said linear filter is characterized by a cutoff parameter ε, which is higher than 10% of a difference between a maximal intensity and an average intensity of the image.
20. The method according to any of claims 3-17, wherein said linear filter is characterized by a cutoff parameter ε, which is higher than 10% of a difference between a maximal intensity and an average intensity of the image.
21. The method according to any of claims 1 and 2, wherein said linear filter is characterized by a cutoff parameter ε, which is less than 90% of a difference between the maximal intensity and an average intensity of the image.
22. The method according to any of claims 3-17, wherein said linear filter is characterized by a cutoff parameter ε, which is less than 90% of a difference between the maximal intensity and an average intensity of the image.
23. The method according to claim 21, wherein said cutoff parameter ε is higher than 10% of said difference between said maximal intensity and said average intensity of the image.
24. The method according to claim 22, wherein said cutoff parameter ε is higher than 10% of said difference between said maximal intensity and said average intensity of the image.
25. The method according to claim 19, wherein said cutoff parameter ε equals about one third of said difference between said maximal intensity and said average intensity of the image.
26. The method according to any of claims 20-24, wherein said cutoff parameter ε equals about one third of said difference between said maximal intensity and said average intensity of the image.
27. An imaging kit, comprising a grating selected to up-shift a spatial frequency of radiation received from an object, an imaging system configured for imaging the object through said grating to provide image data, and an image processor configured to apply a linear filter to said image data to provide filtered image data, and to down-shift a spatial frequency of said filtered image data.
28. The imaging kit of claim 27, wherein said image processor is configured for applying a band pass filter to said up-shifted image data prior to said application of said linear filter.
29. The imaging kit according to any of claims 27 and 28, wherein said imaging system is an electron beam imaging system.
30. The imaging kit according to any of claims 27 and 28, wherein said imaging system is an X-ray imaging system.
31. The imaging kit according to any of claims 27 and 28, wherein said imaging system is an ultrasound imaging system.
32. The imaging kit according to any of claims 27 and 28, wherein said imaging system is a thermal imaging system.
33. The imaging kit according to any of claims 27 and 28, wherein said imaging system is an ultraviolet imaging system.
34. The imaging kit according to any of claims 27-33, wherein said imaging system is devoid of any diffraction compensating optical element.
35. A system for processing image data, the system comprises an input for receiving image data, and an image processor configured to apply a linear filter to said image data to provide filtered image data, to down-shift a spatial frequency of said filtered image data, and to reconstruct an image based on the down-shifted image data.
36. The imaging kit or system according to any of claims 27-35, wherein said filter is a spatial filter.
37. The imaging kit or system according to any of claims 27-35, wherein said filter is a spectral filter, and the image processor is configured for transforming the image data to a spectral domain prior to application of said linear filter.
38. The imaging kit or system according to any of claims 27-37, wherein said linear filter is characterized by a cutoff parameter ε, which is higher than 10% of a difference between a maximal intensity and an average intensity of the image.
39. The imaging kit or system according to any of claims 27-37, wherein said linear filter is characterized by a cutoff parameter ε, which is less than 90% of a difference between the maximal intensity and an average intensity of the image.
40. The imaging kit or system according to claim 39, wherein said cutoff parameter ε is higher than 10% of said difference between said maximal intensity and said average intensity of the image.
41. The imaging kit or system according to any of claims 38 and 39, wherein said cutoff parameter ε equals about one third of said difference between said maximal intensity and said average intensity of the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IL2017/050368 WO2018173031A1 (en) | 2017-03-24 | 2017-03-24 | System and method for compensating diffraction |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018173031A1 (en) | 2018-09-27 |
Family
ID=63585943
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040245439A1 (en) * | 2002-07-26 | 2004-12-09 | Shaver David C. | Optical imaging systems and methods using polarized illumination and coordinated pupil filter |
US20060164287A1 (en) * | 2005-01-21 | 2006-07-27 | Safeview, Inc. | Depth-based surveillance image reconstruction |
US20150077755A1 (en) * | 2003-10-27 | 2015-03-19 | The General Hospital Corporation | Method and apparatus for performing optical imaging using frequency-domain interferometry |
US20150330775A1 (en) * | 2012-12-12 | 2015-11-19 | The University Of Birminggham | Simultaneous multiple view surface geometry acquisition using structured light and mirrors |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17901579; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 17901579; Country of ref document: EP; Kind code of ref document: A1 |