
CN115035009B - A two-exposure image fusion method based on guided filtering multi-level decomposition - Google Patents


Info

Publication number
CN115035009B
CN115035009B · CN202210574642.7A
Authority
CN
China
Prior art keywords
image
exposure
taking
layer
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210574642.7A
Other languages
Chinese (zh)
Other versions
CN115035009A (en)
Inventor
綦俊炜
杨振
李迎松
高敬鹏
薛伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202210574642.7A
Publication of CN115035009A
Application granted
Publication of CN115035009B
Legal status: Active (current)
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract


The present invention belongs to the field of image processing and specifically relates to a two-exposure image fusion method based on guided filtering multi-level decomposition. The method decomposes an image into one base layer and several detail layers through guided filtering, and then mines the image information contained in the base and detail layers level by level with exposure weights and global gradient weights, respectively, to reconstruct the image. It solves the prior-art problem of poor fused-image quality when the original image sequence has a large exposure-time difference, adapts to a variety of exposure ratios and complex real scenes, and preserves overall brightness and local detail through the exposure weights and global gradient weights, respectively.

Description

Two-exposure image fusion method based on guided filtering multi-level decomposition
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a two-exposure image fusion method based on guided filtering multi-level decomposition.
Background
Dynamic range is the ratio between the maximum and minimum values of a varying signal (e.g., sound or light); the luminance dynamic range of a natural scene may exceed 100,000:1. Consumer digital cameras, whose dynamic range is limited by manufacturing cost, can hardly capture a real high-dynamic-range (HDR) natural scene in a single exposure, so images may contain over-exposed or under-exposed areas and lose detail. Multi-exposure fusion is therefore commonly used to generate a high-dynamic-range image: several low-dynamic-range images of the same scene are first captured with different exposure times, and an image fusion method then reconstructs a single image that retains the detail of both bright and dark regions. The approach needs neither professional imaging equipment nor extra camera information, which makes it general and practically attractive, and the resulting image is information-rich and consistent with human visual perception.
Most existing multi-exposure fusion methods need a long sequence of source images with small exposure differences to obtain a good fusion result; when the source images are few and their exposure difference is large, fused-image quality degrades markedly. Yang et al., in "Multi-Scale Fusion of Two Large-Exposure-Ratio Images" (2018), proposed generating a virtual medium-exposure image with an intensity mapping function and then fusing the two original images together with the virtual image to produce the final result. However, the virtual image reduces the reliability of the fused image and increases the computational complexity of the algorithm.
Disclosure of Invention
The invention aims to solve the problem of poor fused-image quality when the original image sequence has a large exposure-time difference; in particular, when the two input images are an underexposed and an overexposed image, existing multi-exposure fusion methods cannot reconstruct the relative contrast information well. The invention provides a two-exposure image fusion method based on guided filtering multi-level decomposition, which decomposes an image into a base layer and several detail layers through guided filtering, and then mines the image information contained in the base and detail layers level by level with exposure weights and global gradient weights, respectively, to reconstruct the image.
A two-exposure image fusion method based on guided filtering multi-level decomposition comprises the following steps:
Step 1: acquire the two original RGB images I(i) (i = 1, 2) to be fused and convert them into YCbCr images;
Step 2: obtain an exposure weight map from the Y channel of each YCbCr image and apply Gaussian filtering to obtain the exposure saliency weight We(i);
Step 3: perform guided filtering with the original RGB image I(i) as the guide to split it into a base layer B1(i) and a detail layer D1(i);
Step 4: continue the guided filtering of the base layer B1(i) obtained in step 3, using it as its own guide, to obtain B2(i) and D2(i); iterate likewise to obtain Bn(i) and Dn(i), decomposing the image as:
I(i) = B1(i) + D1(i)
     = B2(i) + D2(i) + D1(i)
     = ...
     = Bn(i) + Dn(i) + Dn-1(i) + ... + D1(i)
Denote the base layer B(i) = Bn(i) and the detail layers D(i) = Dn(i) + Dn-1(i) + ... + D1(i);
Step 5: take the global gradient of each detail layer Dn(i) frame by frame and apply Gaussian filtering to obtain the per-level saliency weights Wg,n(i);
Step 6: compute the base-layer exposure weights Ŵe(i) and the detail-layer weights Ŵg,n(i) of the fused image;
Step 7: fuse the base layers B(i) and the detail layers D(i) according to the multi-level decomposition and the per-level weight maps to obtain the fused image F.
Further, the exposure saliency weight We(i) in step 2 is computed from the Y-channel intensities, where σ is the variance, T1 and T2 are the exposure thresholds, mean(I(i)) is the mean pixel intensity of the original RGB image I(i), and γ is a coefficient that increases the robustness of the adaptive weight.
Further, the guided filtering in step 3 is an edge-preserving filtering operation whose least-squares loss function is
E(ak, bk) = Σi∈ωk [ (ak·Ii + bk - Pi)² + ε·ak² ],
where ak and bk are the linear transform coefficients on the local window ωk centred at pixel k, Ii is the guide image, Pi is the original image, and ε is a regularization parameter. With |ω|, μk and δk denoting the pixel count, mean and variance of the local window, the coefficients are
ak = ( (1/|ω|)·Σi∈ωk Ii·Pi - μk·P̄k ) / (δk + ε),  bk = P̄k - ak·μk,
and the final output is
qi = āi·Ii + b̄i,
where āi and b̄i are the means of ak and bk over all windows containing pixel i. Denoting the guided-filter operator by Gf(I, P, r, ε), with r the filter radius, the guided filtering splits the original image into a base layer B1(i) and a detail layer D1(i):
B1(i) = Gf(I(i), I(i), r, ε),  D1(i) = I(i) - B1(i).
Further, in step 5 the global gradient is taken on each detail layer Dn(i) frame by frame and Gaussian filtering yields the per-level saliency weight Wg,n(i), specifically as follows:
Each pixel of the RGB image sequence contains red, green and blue components and can be expressed by the three-dimensional vector
c = R·r̂ + G·ĝ + B·b̂,
where r̂, ĝ and b̂ are unit vectors along the red, green and blue axes. The quantities gxx, gyy and gxy used to compute the vector gradient are
gxx = |∂R/∂x|² + |∂G/∂x|² + |∂B/∂x|²,  gyy = |∂R/∂y|² + |∂G/∂y|² + |∂B/∂y|²,
gxy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y).
The gradient direction angle θ is
θ = (1/2)·arctan[ 2gxy / (gxx - gyy) ],
and the gradient Grad(θ) is
Grad(θ) = { (1/2)·[ (gxx + gyy) + (gxx - gyy)·cos 2θ + 2gxy·sin 2θ ] }^(1/2).
The beneficial effects of the invention are as follows:
The invention provides a two-exposure image fusion method based on guided filtering multi-level decomposition: the image is decomposed into a base layer and several detail layers by guided filtering, and the image information contained in those layers is then mined level by level with exposure weights and global gradient weights to reconstruct the image. The invention solves the prior-art problem of poor fused-image quality when the original image sequence has a large exposure-time difference, adapts to a variety of exposure ratios and complex real scenes, and preserves overall brightness and local detail through the exposure weights and global gradient weights, respectively.
Drawings
Fig. 1 is a general flow chart of the present invention.
Fig. 2 is a flow chart of multi-level decomposition of an image based on guided filtering in the present invention.
FIG. 3 is a flow chart of multi-level image fusion in accordance with the present invention.
FIG. 4 is a graph comparing the multi-exposure fusion structural similarity (MEF-SSIM) results of the present invention with those of nine multi-exposure fusion techniques.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention aims to solve the problem of poor fused-image quality when the original image sequence has a large exposure-time difference; in particular, when the two input images are an underexposed and an overexposed image, existing multi-exposure fusion methods cannot reconstruct the relative contrast information well. The invention provides a two-exposure image fusion method based on guided filtering multi-level decomposition, which decomposes an image into a base layer and several detail layers through guided filtering and then mines the image information contained in those layers level by level with exposure weights and global gradient weights to reconstruct the image. The method adapts to various exposure ratios and complex real scenes, and achieves better fusion results than existing algorithms.
The object of the invention is achieved as follows: the image to be fused is first decomposed into a first-level base layer and detail layer by guided filtering with the original image as its own guide; the base layer is then further decomposed into a second-level base layer and detail layer by guided filtering with the base-layer image as guide, and so on, until the original image is decomposed into one base layer and several detail layers. Finally, exposure weights and global gradient weights are used to mine the image information contained in the base layer and detail layers, respectively, to reconstruct the image.
The main idea of the new two-exposure fusion method is to decompose the image level by level into a base layer and detail layers with guided filtering, and then to mine the base-layer and detail-layer information of the original images in a multi-level manner with exposure-weight and global-gradient-weight maps, respectively.
The working principle of the two-exposure fusion based on multi-level guided-filtering image decomposition is shown in FIG. 1; the method mainly comprises the following steps:
Step 1. Convert the original RGB images I(i) (i = 1, 2) into YCbCr images;
Step 2. Obtain the exposure weight of each frame in the Y (luminance) channel and apply Gaussian filtering to obtain the saliency weight map We(i), which serves as the base-layer weight with an adaptive factor. Here σ is the variance and T1, T2 are the exposure thresholds. For the fused image to reflect as much detail of the poorly exposed sequence as possible, the details at low-intensity positions of the underexposed sequence and at high-intensity positions of the overexposed sequence must both be captured. To this end an adaptive factor 1 - mean(I(i)) is introduced, where mean(I(i)) is the mean pixel intensity of image I(i). In addition, a coefficient γ is added to increase the robustness of the adaptive weight; this parameter can be tuned to the exposure level of the specific scene and exposure sequence.
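The weight formula itself appears only as an image in the original filing and is not reproduced in this text, so the following Python sketch only illustrates one plausible reading: a Gaussian well-exposedness curve whose centre adapts through the 1 - mean(I(i)) factor and the coefficient γ described above. The function name exposure_weight, the default σ and the kernel size are our assumptions, not the patent's.

```python
import cv2
import numpy as np

def exposure_weight(y, sigma=0.2, gamma=1.0, ksize=11):
    """Hypothetical exposure-saliency weight for one luminance frame (sketch).

    Assumes a Gaussian well-exposedness curve whose centre adapts to the
    frame via the 1 - mean(I(i)) factor and the robustness coefficient
    gamma described in the text; the patent's exact formula is not
    reproduced in this document.
    """
    y = y.astype(np.float64) / 255.0           # luminance scaled to [0, 1]
    target = gamma * (1.0 - y.mean())          # adaptive "well exposed" level
    w = np.exp(-((y - target) ** 2) / (2.0 * sigma ** 2))
    return cv2.GaussianBlur(w, (ksize, ksize), 0)   # Gaussian filtering, step 2
```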
Step 3. Perform guided filtering with the RGB channels of the original image as the guide to decompose the image. Guided filtering is a popular edge-preserving filtering operation; its least-squares loss function is
E(ak, bk) = Σi∈ωk [ (ak·Ii + bk - Pi)² + ε·ak² ],
where ak and bk are the linear transform coefficients on the local window ωk centred at pixel k, Ii is the guide image, Pi is the original image, and ε is a regularization parameter. With |ω|, μk and δk denoting the pixel count, mean and variance of the local window, the solution is
ak = ( (1/|ω|)·Σi∈ωk Ii·Pi - μk·P̄k ) / (δk + ε),  bk = P̄k - ak·μk,
and the filter output is
qi = āi·Ii + b̄i,
where āi and b̄i are the means of ak and bk over all windows containing pixel i. Denoting the guided-filter operator by Gf(I, P, r, ε), with r the filter radius, the original image is split by self-guided filtering into a base layer B1(i) and a detail layer D1(i), namely
B1(i) = Gf(I(i), I(i), r, ε),  D1(i) = I(i) - B1(i);
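As a concrete illustration of the operator Gf(I, P, r, ε), here is a minimal self-contained NumPy/OpenCV sketch of the box-filter form of the ak, bk solution above; the default r and ε are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def guided_filter(I, P, r, eps):
    """Gf(I, P, r, eps): grayscale guided filter (He et al.), matching the
    least-squares a_k, b_k solution given in the text."""
    I = I.astype(np.float64)
    P = P.astype(np.float64)
    ksize = (2 * r + 1, 2 * r + 1)
    box = lambda x: cv2.boxFilter(x, -1, ksize)    # normalized window mean
    mu_I, mu_P = box(I), box(P)
    var_I = box(I * I) - mu_I * mu_I               # window variance (delta_k)
    cov_IP = box(I * P) - mu_I * mu_P
    a = cov_IP / (var_I + eps)                     # a_k
    b = mu_P - a * mu_I                            # b_k
    return box(a) * I + box(b)                     # q_i = mean(a)*I_i + mean(b)

def decompose_once(img, r=8, eps=0.04):
    """Self-guided split of step 3: B1 = Gf(I, I, r, eps), D1 = I - B1."""
    B1 = guided_filter(img, img, r, eps)
    return B1, img.astype(np.float64) - B1
```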
Step 4. The base layer B1(i) obtained in step 3 is filtered again with itself as the guide to obtain B2(i) and D2(i), namely
B2(i) = Gf(B1(i), B1(i), r, ε),  D2(i) = B1(i) - B2(i).
Iterating in the same way yields Bn(i) and Dn(i), and the image is decomposed as
I(i) = B1(i) + D1(i)
     = B2(i) + D2(i) + D1(i)
     = ...
     = Bn(i) + Dn(i) + Dn-1(i) + ... + D1(i)
Denote the base layer B(i) = Bn(i) and the detail layers D(i) = Dn(i) + Dn-1(i) + ... + D1(i), as shown in FIG. 2; two weights are designed to mine the image information of the base layer and the detail layers, respectively;
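A short sketch of this iterated decomposition, reusing guided_filter from the sketch above; the number of levels n is left as a free parameter because this text does not fix one.

```python
def multilevel_decompose(img, n_levels=3, r=8, eps=0.04):
    """Iterated self-guided filtering of step 4: each pass splits the current
    base layer into a smoother base and one detail layer, so that
    I = B_n + D_n + ... + D_1 (reuses guided_filter from the sketch above)."""
    base = img.astype(np.float64)
    details = []
    for _ in range(n_levels):
        smoother = guided_filter(base, base, r, eps)  # B_{k+1}
        details.append(base - smoother)               # D_{k+1}
        base = smoother
    return base, details                              # B_n and [D_1, ..., D_n]
```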
Step 5. Take the global gradient of each Dn(i) and apply Gaussian filtering to obtain the per-level saliency-edge weight map Wg,n(i). The global gradient is computed as follows: each pixel of the RGB image sequence contains red, green and blue components, so it can be expressed by the three-dimensional vector
c = R·r̂ + G·ĝ + B·b̂,
where r̂, ĝ and b̂ are unit vectors along the red, green and blue axes. The quantities used to compute the vector gradient are
gxx = |∂R/∂x|² + |∂G/∂x|² + |∂B/∂x|²,  gyy = |∂R/∂y|² + |∂G/∂y|² + |∂B/∂y|²,
gxy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y).
The gradient direction angle is
θ = (1/2)·arctan[ 2gxy / (gxx - gyy) ],
and the gradient is
Grad(θ) = { (1/2)·[ (gxx + gyy) + (gxx - gyy)·cos 2θ + 2gxy·sin 2θ ] }^(1/2).
Taking this gradient frame by frame on each detail level yields the global gradient weights Wg,n(i), which fully mine the underexposed- and overexposed-image detail information at every level;
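For illustration, a sketch of the Di Zenzo-style vector gradient just described, using finite differences via np.gradient; Gaussian smoothing of the result into Wg,n(i) would follow as in step 5. The function name is ours.

```python
import numpy as np

def color_gradient(rgb):
    """Di Zenzo vector gradient Grad(theta) of a 3-channel frame, following
    the g_xx, g_yy, g_xy construction in the text."""
    rgb = rgb.astype(np.float64)
    gx = np.gradient(rgb, axis=1)          # per-channel d/dx
    gy = np.gradient(rgb, axis=0)          # per-channel d/dy
    gxx = (gx ** 2).sum(axis=2)
    gyy = (gy ** 2).sum(axis=2)
    gxy = (gx * gy).sum(axis=2)
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)     # gradient direction
    grad2 = 0.5 * ((gxx + gyy) + (gxx - gyy) * np.cos(2 * theta)
                   + 2.0 * gxy * np.sin(2 * theta))
    return np.sqrt(np.maximum(grad2, 0.0))             # Grad(theta)
```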
Step 6. The base-layer exposure weights Ŵe(i) and the detail-layer weights Ŵg,n(i) are obtained as follows: Ŵe(i) is the result of normalizing and filtering the exposure weights We(i), and Ŵg,n(i) is the result of normalizing and filtering the global gradient weights Wg,n(i) of the n-th detail level.
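The precise normalization and filtering are not spelled out in this text; the helper below shows one plausible reading, in which the two weight maps are made to sum to one at every pixel and then Gaussian-smoothed. The function name and defaults are assumptions.

```python
import cv2

def normalize_pair(w1, w2, ksize=11, eps=1e-12):
    """One plausible reading of 'normalized and filtered' in step 6: make the
    two weight maps sum to one at every pixel, then Gaussian-smooth them."""
    s = w1 + w2 + eps
    smooth = lambda w: cv2.GaussianBlur(w / s, (ksize, ksize), 0)
    return smooth(w1), smooth(w2)
```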
Step 7. According to the obtained multi-level decomposition and the per-level weight maps, the base layers B(i) and the detail layers D(i) are fused to obtain the fused image F, as shown in FIG. 3, namely
F = Ŵe(1)·B(1) + Ŵe(2)·B(2) + Σn [ Ŵg,n(1)·Dn(1) + Ŵg,n(2)·Dn(2) ].
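Putting the sketches above together, an end-to-end illustration of the pipeline, assuming 8-bit BGR inputs as read by cv2.imread; the per-level gradient weights are computed on the three-channel detail layers, and all parameter values remain illustrative rather than the patent's.

```python
import cv2
import numpy as np

def fuse_two_exposures(img1, img2, n_levels=3, r=8, eps=0.04):
    """End-to-end sketch under the assumptions above; parameter values are
    illustrative, not the patent's."""
    # Steps 1-2: exposure weights from the luminance (Y) channel
    y1 = cv2.cvtColor(img1, cv2.COLOR_BGR2YCrCb)[..., 0]
    y2 = cv2.cvtColor(img2, cv2.COLOR_BGR2YCrCb)[..., 0]
    we1, we2 = normalize_pair(exposure_weight(y1), exposure_weight(y2))

    # Steps 3-4: per-channel multi-level decomposition of each input
    bases, details = [], []
    for im in (img1, img2):
        ch = [multilevel_decompose(im[..., c].astype(np.float64),
                                   n_levels, r, eps) for c in range(3)]
        bases.append(np.stack([b for b, _ in ch], axis=-1))
        details.append([np.stack([ch[c][1][n] for c in range(3)], axis=-1)
                        for n in range(n_levels)])

    # Step 7: blend base layers with the exposure weights ...
    fused = we1[..., None] * bases[0] + we2[..., None] * bases[1]
    # ... and each detail level with its own gradient weights (steps 5-6)
    for n in range(n_levels):
        wg1, wg2 = normalize_pair(
            cv2.GaussianBlur(color_gradient(details[0][n]), (11, 11), 0),
            cv2.GaussianBlur(color_gradient(details[1][n]), (11, 11), 0))
        fused += wg1[..., None] * details[0][n] + wg2[..., None] * details[1][n]
    return np.clip(fused, 0, 255).astype(np.uint8)
```

A call such as fuse_two_exposures(cv2.imread('under.jpg'), cv2.imread('over.jpg')) would then return the fused frame; the file names here are hypothetical.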
Finally, the multi-exposure fusion structural similarity (MEF-SSIM) score is used as the criterion for measuring the performance of the algorithm.
FIG. 4 compares the MEF-SSIM results of the present invention with those of nine multi-exposure fusion techniques. Thirty-nine different two-exposure image sequences were selected, and the algorithm was tested and analysed both subjectively and objectively. The experimental results show that the method adapts well to different exposure conditions of the original images, preserving overall brightness and local detail through the exposure weights and global gradient weights, respectively. In objective evaluation it outperforms the other nine methods, and no colour distortion or fusion artifacts appear in any of the test scenes.
The above description covers only the preferred embodiments of the present invention and is not intended to limit it; those skilled in the art can make various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (4)

1. A two-exposure image fusion method based on guided filtering multi-level decomposition, characterized by comprising the following steps:
Step 1: acquire the two original RGB images I(i) (i = 1, 2) to be fused and convert them into YCbCr images;
Step 2: obtain an exposure weight map from the Y channel of each YCbCr image and apply Gaussian filtering to obtain the exposure saliency weight We(i);
Step 3: perform guided filtering with the original RGB image I(i) as the guide to split it into a base layer B1(i) and a detail layer D1(i);
Step 4: continue the guided filtering of the base layer B1(i) obtained in step 3, using it as its own guide, to obtain B2(i) and D2(i); iterate likewise to obtain Bn(i) and Dn(i), decomposing the image as:
I(i) = B1(i) + D1(i)
     = B2(i) + D2(i) + D1(i)
     = ...
     = Bn(i) + Dn(i) + Dn-1(i) + ... + D1(i)
Denote the base layer B(i) = Bn(i) and the detail layers D(i) = Dn(i) + Dn-1(i) + ... + D1(i);
Step 5: take the global gradient of each detail layer Dn(i) frame by frame and apply Gaussian filtering to obtain the per-level saliency weights Wg,n(i);
Step 6: compute the base-layer exposure weights Ŵe(i) and the detail-layer weights Ŵg,n(i) of the fused image;
Step 7: fuse the base layers B(i) and the detail layers D(i) according to the multi-level decomposition and the per-level weight maps to obtain the fused image F.
2. The two-exposure image fusion method based on guided filtering multi-level decomposition according to claim 1, characterized in that the exposure saliency weight We(i) in step 2 is computed from the Y-channel intensities, where σ is the variance, T1 and T2 are the exposure thresholds, mean(I(i)) is the mean pixel intensity of the original RGB image I(i), and γ is a coefficient that increases the robustness of the adaptive weight.
3. The two-exposure image fusion method based on guided filtering multi-level decomposition according to claim 1, characterized in that the guided filtering in step 3 is an edge-preserving filtering operation whose least-squares loss function is
E(ak, bk) = Σi∈ωk [ (ak·Ii + bk - Pi)² + ε·ak² ],
where ak and bk are the linear transform coefficients on the local window ωk centred at pixel k, Ii is the guide image, Pi is the original image, and ε is a regularization parameter; with |ω|, μk and δk denoting the pixel count, mean and variance of the local window, the coefficients are
ak = ( (1/|ω|)·Σi∈ωk Ii·Pi - μk·P̄k ) / (δk + ε),  bk = P̄k - ak·μk,
and the final output is
qi = āi·Ii + b̄i,
where āi and b̄i are the means of ak and bk over all windows containing pixel i; denoting the guided-filter operator by Gf(I, P, r, ε), with r the filter radius, the guided filtering splits the original image into a base layer B1(i) and a detail layer D1(i):
B1(i) = Gf(I(i), I(i), r, ε),  D1(i) = I(i) - B1(i).
4. The two-exposure image fusion method based on guided filtering multi-level decomposition according to claim 1, characterized in that in step 5 the global gradient is taken on each detail layer Dn(i) frame by frame and Gaussian filtering yields the per-level saliency weight Wg,n(i), specifically as follows:
each pixel of the RGB image sequence contains red, green and blue components and is expressed by the three-dimensional vector
c = R·r̂ + G·ĝ + B·b̂,
where r̂, ĝ and b̂ are unit vectors along the red, green and blue axes; the quantities gxx, gyy and gxy used to compute the vector gradient are
gxx = |∂R/∂x|² + |∂G/∂x|² + |∂B/∂x|²,  gyy = |∂R/∂y|² + |∂G/∂y|² + |∂B/∂y|²,
gxy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y);
the gradient direction angle θ is
θ = (1/2)·arctan[ 2gxy / (gxx - gyy) ],
and the gradient Grad(θ) is
Grad(θ) = { (1/2)·[ (gxx + gyy) + (gxx - gyy)·cos 2θ + 2gxy·sin 2θ ] }^(1/2).
CN202210574642.7A 2022-05-24 2022-05-24 A two-exposure image fusion method based on guided filtering multi-level decomposition Active CN115035009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210574642.7A CN115035009B (en) 2022-05-24 2022-05-24 A two-exposure image fusion method based on guided filtering multi-level decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210574642.7A CN115035009B (en) 2022-05-24 2022-05-24 A two-exposure image fusion method based on guided filtering multi-level decomposition

Publications (2)

Publication Number Publication Date
CN115035009A CN115035009A (en) 2022-09-09
CN115035009B (en) 2025-04-18

Family

ID=83120837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210574642.7A Active CN115035009B (en) 2022-05-24 2022-05-24 A two-exposure image fusion method based on guided filtering multi-level decomposition

Country Status (1)

Country Link
CN (1) CN115035009B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115511765A (en) * 2022-10-18 2022-12-23 七海测量技术(深圳)有限公司 Multi-exposure image rapid fusion method based on double-scale decomposition
CN116051440A (en) * 2022-12-30 2023-05-02 西安电子科技大学芜湖研究院 An image enhancement processing method and system
CN118735918B (en) * 2024-09-03 2024-10-29 江西财经职业学院南昌校区 A method and system for detecting damage of a stay cable based on cross-sectional images
CN120599713A (en) * 2025-08-07 2025-09-05 成都山莓科技有限公司 Attendance method and device based on face recognition

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763367A (en) * 2021-09-13 2021-12-07 中国空气动力研究与发展中心超高速空气动力研究所 Comprehensive interpretation method for infrared detection characteristics of large-size test piece

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763367A (en) * 2021-09-13 2021-12-07 中国空气动力研究与发展中心超高速空气动力研究所 Comprehensive interpretation method for infrared detection characteristics of large-size test piece

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Low-dose CT image processing based on an improved guided filtering algorithm; 龙邦媛 et al.; 电子学报 (Acta Electronica Sinica); 2019-07-31; full text *

Also Published As

Publication number Publication date
CN115035009A (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN115035009B (en) A two-exposure image fusion method based on guided filtering multi-level decomposition
Brooks et al. Unprocessing images for learned raw denoising
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
CN110175964B (en) Retinex image enhancement method based on Laplacian pyramid
US20240062530A1 (en) Deep perceptual image enhancement
CN115223004B (en) Method for generating image enhancement of countermeasure network based on improved multi-scale fusion
KR101831551B1 (en) High dynamic range image generation and rendering
CN104881854B (en) High dynamic range images fusion method based on gradient and monochrome information
CN112465727A (en) Low-illumination image enhancement method without normal illumination reference based on HSV color space and Retinex theory
CN107292830B (en) Low-illumination image enhancement and evaluation method
Lee et al. Image contrast enhancement using classified virtual exposure image fusion
CN113284061B (en) Underwater image enhancement method based on gradient network
CN114862698A (en) Method and device for correcting real overexposure image based on channel guidance
Shutova et al. NTIRE 2023 challenge on night photography rendering
CN116385298B (en) A reference-free enhancement method for images collected by UAV at night
CN117291851B (en) Multi-exposure image fusion method based on low-rank decomposition and sparse representation
CN110009574A (en) A kind of method that brightness, color adaptively inversely generate high dynamic range images with details low dynamic range echograms abundant
JP5765893B2 (en) Image processing apparatus, imaging apparatus, and image processing program
CN106375675B (en) A kind of more exposure image fusion methods of aerial camera
Singh et al. Weighted least squares based detail enhanced exposure fusion
Vanmali et al. Low complexity detail preserving multi-exposure image fusion for images with balanced exposure
CN117173041A (en) An underwater image enhancement method, device, equipment and medium
CN112927162A (en) Low-illumination image oriented enhancement method and system
Dar et al. A dynamic fuzzy histogram equalization for high dynamic range images by using multi-scale Retinex algorithm
Siddiqui et al. Hierarchical color correction for camera cell phone images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant