Disclosure of Invention
To address the above defects, the invention provides an underwater image enhancement method based on the anisotropic color channel attenuation difference. By taking into account the differing attenuation of light under water, the method effectively improves the brightness and contrast of the enhanced underwater image and has good robustness.
To solve these problems, the invention adopts an underwater image enhancement method based on the attenuation difference of the anisotropic color channels, which specifically comprises the following steps:
(1) Normalize the original image, extract its R, G, and B color channels, and calculate the total pixel value, the mean, and the variance of each color channel;
(2) Judge from the mean and variance of each color channel whether any channel satisfies the following conditions:

F = diff((μ_λ − 2σ_λ), (μ_λ + 2σ_λ))

F ≥ 3/4

μ_λ > 1/2

where μ_λ denotes the mean of color channel λ, σ_λ denotes the variance of color channel λ, and λ ∈ {R, G, B};
If exactly one of the three color channels satisfies the conditions, take that channel as the reference image, correct every pixel value of each of the remaining two color channels with an adaptive gamma transformation function, and obtain the enhanced images of those two channels after correction;
If two or more of the three color channels satisfy the conditions, select any one of them as the reference image, correct every pixel value of each of the remaining two color channels with the adaptive gamma transformation function, and obtain the enhanced images of those two channels after correction;
If none of the three color channels satisfies the conditions, take the channel with the largest total pixel value as the reference image, set a changed gamma correction formula, correct every pixel value of each of the remaining two color channels with it, and obtain the enhanced images of those two channels after correction;
(3) Combine the reference image with the enhanced images of the remaining two color channels to form the final enhanced image.
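The branch logic of steps (1)-(3) can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the channels are already normalized to [0, 1], reads diff((μ_λ − 2σ_λ), (μ_λ + 2σ_λ)) as the width of that interval (i.e., 4σ_λ), and treats σ_λ as the standard deviation; the text does not spell out either choice.

```python
import numpy as np

def select_reference(channels):
    """Pick the reference channel per the three cases above.

    `channels` maps a name ('R', 'G', 'B') to a normalized 2-D array.
    F is computed here as the width of [mu - 2*sigma, mu + 2*sigma],
    i.e. 4*sigma -- one reading of diff(...); sigma is the standard
    deviation. Both are assumptions about the patent's notation.
    """
    stats = {c: (float(a.sum()), float(a.mean()), float(a.std()))
             for c, a in channels.items()}
    qualifying = [c for c, (s, mu, sd) in stats.items()
                  if 4 * sd >= 3 / 4 and mu > 1 / 2]
    if qualifying:            # cases 1 and 2: one or more channels qualify
        return qualifying[0]  # any qualifying channel may serve as reference
    # case 3: fall back to the channel with the largest total pixel value
    return max(stats, key=lambda c: stats[c][0])
```

The two non-reference channels are then corrected with the adaptive gamma transformation (or the changed gamma correction formula in case 3) and merged with the reference channel.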
Further, the adaptive gamma transformation function in step (2) is:
if the average value of the color channels is less than or equal to 1/2, the following formula is selected for correction:
I_enh(x, y) = α·I^γ(x, y) + β·I(x, y)
wherein I_enh denotes the enhanced image, I the original image, x and y the position of a pixel in the channel, and α and β weighting coefficients; γ is the exponent of the gamma transformation, and its value is the difference between the means of the reference image and the color channel to be corrected;
if the average value of the color channels is greater than 1/2, the following formula is selected for correction:
In the formula, s_λh represents the total pixel value of the reference image and s_λ′ represents the total pixel value of the color channel to be corrected.
Further, the changed gamma correction formula in step (2) is:
I_enh(x, y) = I^γ′(x, y)
wherein I_enh denotes the enhanced image, I the original image, x and y the position of an image pixel, and γ′ the exponent of the gamma transformation, whose value is given as follows:
Further, the normalization in step (1) specifically refers to normalizing the image pixel values of each color channel from the interval [0, 255] to the interval [0, 1].
Further, the formulas in step (1) for the total pixel value, the mean, and the variance of each color channel are, respectively:

s_λ = Σ_{i=1}^{m} Σ_{j=1}^{n} I_nor^λ(i, j)

μ_λ = s_λ / (m·n)

σ_λ² = (1/(m·n)) · Σ_{i=1}^{m} Σ_{j=1}^{n} (I_nor^λ(i, j) − μ_λ)²

where s_λ denotes the total pixel value of channel λ, μ_λ its mean, σ_λ² its variance, m and n the numbers of rows and columns of the original image I, i and j the row and column indices, and I_nor^λ(i, j) the normalized pixel value at row i, column j of channel λ.
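The per-channel statistics can be computed as in this sketch; σ_λ is taken here as the standard deviation, which is what the μ_λ ± 2σ_λ interval in the discrimination condition uses (an assumption, since the text calls σ_λ the variance).

```python
import numpy as np

def channel_stats(ch):
    """Total pixel value, mean, and spread of one normalized channel.

    `ch` is an m-by-n array of values in [0, 1]. Returns
    (s_lambda, mu_lambda, sigma_lambda) per the formulas above.
    """
    s = float(ch.sum())                 # s_lambda: total pixel value
    mu = s / ch.size                    # mu_lambda: mean over all m*n pixels
    sigma = float(np.sqrt(((ch - mu) ** 2).mean()))  # sigma_lambda (std)
    return s, mu, sigma
```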
Further, in step (3), the reference image and the enhanced images of the remaining two color channels must be inverse-normalized before they are combined into the final enhanced image.
The invention further provides a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the program. The invention also provides a computer-readable storage medium on which a computer program is stored, the program performing the steps of the above method when executed by a processor.
Compared with existing underwater image enhancement algorithms, the method of the invention based on the anisotropic color channel attenuation difference has notable advantages: it adjusts the underwater image adaptively, enhances it accurately by accounting for the differing attenuation of each channel, and improves the brightness and contrast of the image.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the method for enhancing an underwater image based on the attenuation difference of an anisotropic color channel disclosed by the invention specifically comprises the following steps:
First, normalize the pixel values of the original image, extract its R, G, and B color channels, and calculate the total pixel value, mean, and variance of each channel.
(1) Let the number of rows and columns of the original image I be m, n, respectively, i.e.:
I={(i,j)|1≤i≤m,1≤j≤n}
(2) Normalize the original image I, i.e., map its pixel values into the interval [0, 1]:

I_nor(i, j) = (I(i, j) − a) / (b − a)

where a represents the smallest pixel value in the original image and b represents the largest pixel value in the original image.
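A minimal sketch of this min-max normalization; the zero-division guard for a constant image is an added assumption, not from the text.

```python
import numpy as np

def normalize(img):
    """Min-max normalize pixel values to [0, 1] (step one, (2)).

    a and b are the smallest and largest pixel values in the image; a
    constant image is mapped to zeros to avoid division by zero (an
    assumption -- the text does not cover this edge case).
    """
    img = img.astype(np.float64)
    a, b = img.min(), img.max()
    if b == a:
        return np.zeros_like(img)
    return (img - a) / (b - a)
```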
(3) Calculate the total pixel values of the R, G, and B channels of the original image with the formula:

s_λ = Σ_{i=1}^{m} Σ_{j=1}^{n} I_nor^λ(i, j), λ ∈ {R, G, B}

where i denotes the i-th row of the original image I, j denotes the j-th column of the original image I, and I_nor^λ(i, j) denotes the normalized pixel value at row i, column j of channel λ.
Record the color channel λ_mid, λ_mid ∈ {R, G, B}, at which s_λ is largest:

λ_mid = argmax_{λ ∈ {R,G,B}} s_λ
(4) Calculate the mean and variance of the R, G, and B channels of the original image with the formulas:

μ_λ = s_λ / (m·n)

σ_λ² = (1/(m·n)) · Σ_{i=1}^{m} Σ_{j=1}^{n} (I_nor^λ(i, j) − μ_λ)²
Second, judge the contrast and brightness of each channel image from its mean and variance, set the corresponding discrimination conditions, select the reference image according to the result of the judgment, and enhance the non-reference channels.
(1) The following discrimination conditions are set by combining the Chebyshev inequality with the histogram characteristics of high-contrast images:

F = diff((μ_λ − 2σ_λ), (μ_λ + 2σ_λ))  (2)

where l, h, and o respectively denote the channel images satisfying the corresponding conditions;
As can be seen from equation (2), the smaller the value of F, the more concentrated the pixel distribution of the channel image and the lower its contrast; the smaller σ_λ, the less pronounced the contrast; and the smaller μ_λ, the more the pixels concentrate in the interval [0, 0.5], i.e., the lower the brightness of the channel image. Assuming that 1/3 of the pixels lie within two standard deviations of the mean, the channel image has lower contrast, and assuming that μ_λ lies in [0, 0.5], its brightness is low. Therefore, channel image l has low contrast and low brightness, while channel image h has high contrast and high brightness.
(2) According to the above discrimination conditions, select as the reference image the color channel λ_h, λ_h ∈ {R, G, B}, corresponding to the channel image h with high contrast and high brightness.
(2.1) If exactly one color channel satisfies F ≥ 3/4 and μ_λ > 0.5, i.e., λ_h is a single color channel, denote the remaining two color channels λ′_1 and λ′_2, with λ′_1, λ′_2 ∈ {R, G, B} − {λ_h}. Then judge, for each of λ′_1 and λ′_2, whether its mean is less than or equal to 1/2.
If the mean of the color channel is less than or equal to 1/2 (the channel image is dark, whether or not its contrast is also low), correct the remaining two color channels λ′_1 and λ′_2 with the following formula (3):
I_enh(x, y) = α·I^γ(x, y) + β·I(x, y)  (3)
where λ′ stands for the selected channel λ′_1 or λ′_2, α and β are weighting coefficients with α = β = 0.5, I_enh denotes the enhanced image, I the original image, and x and y the position of a pixel in the channel;
if the mean of the color channel is greater than 1/2, the channel image is not dark but its contrast is low, and the following formula (4) is selected for correction:
In the formula, s_λh represents the total pixel value of the reference image and s_λ′ represents the total pixel value of the selected color channel λ′_1 or λ′_2.
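Formula (3) can be sketched as follows. The sign convention for γ is an assumption: the text says γ is the difference between the means of the reference image and the channel to be corrected, and the absolute difference is used here.

```python
import numpy as np

def adaptive_gamma(ch, mu_ref):
    """Correct one non-reference channel with formula (3).

    I_enh = alpha * I**gamma + beta * I, with alpha = beta = 0.5.
    gamma is taken as |mu_ref - mean(ch)|, i.e. the absolute mean
    difference -- an assumption about the sign convention.
    `ch` is a normalized channel in [0, 1].
    """
    alpha = beta = 0.5
    gamma = abs(mu_ref - float(ch.mean()))
    return alpha * ch ** gamma + beta * ch
```

Because the result is a convex combination of I^γ and I, a normalized input stays within [0, 1].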
(2.2) If two or more of the color channels satisfy F ≥ 3/4 and μ_λ > 0.5, i.e., λ_h comprises two or more color channels, select any one of them as the reference image and denote the remaining two color channels λ′_1 and λ′_2;
Then judge whether the mean of each remaining color channel is less than or equal to 1/2: if so, correct that channel with formula (3); if the mean is greater than 1/2, correct it with formula (4).
(3) If, under the above discrimination conditions, no channel image h with high contrast and high brightness is selected, i.e., none of the three color channels satisfies F ≥ 3/4 and μ_λ > 0.5, take as the reference image the color channel λ_mid with the largest total pixel value s_λ obtained in step one, denote the remaining two color channels λ′_1 and λ′_2, with λ′_1, λ′_2 ∈ {R, G, B} − {λ_mid}, and correct λ′_1 and λ′_2 with the following formulas (5) and (6), respectively:
I_enh(x, y) = I^γ′(x, y)  (5)
where λ′ stands for the selected channel λ′_1 or λ′_2.
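Formula (5) can be sketched as follows; γ′ is taken as a given parameter because formula (6), which defines it, is not reproduced in this text.

```python
import numpy as np

def changed_gamma(ch, gamma_prime):
    """Correct one non-reference channel with formula (5):
    I_enh = I**gamma'.

    `ch` is a normalized channel in [0, 1]; gamma_prime comes from
    formula (6), not reproduced here, so it is passed in directly.
    On a normalized channel, gamma' < 1 brightens and gamma' > 1 darkens.
    """
    return ch ** gamma_prime
```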
Third, inverse-normalize the reference image and the enhanced images of the two remaining color channels, i.e., multiply each pixel value by 255 and round down, and then combine the reference image with the two enhanced channel images to form the final enhanced image.
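The inverse normalization and channel merge of step three can be sketched as follows; the output channel order is an illustrative convention, not specified by the text.

```python
import numpy as np

def denormalize_and_merge(ref, enh1, enh2, order=(0, 1, 2)):
    """Step three: map [0, 1] values back to [0, 255] by multiplying
    by 255 and rounding down, then stack the three channels into one
    image.

    `order` says where the reference channel and the two corrected
    channels sit in the output (an assumed convention).
    """
    planes = [ref, enh1, enh2]
    out = np.empty(ref.shape + (3,), dtype=np.uint8)
    for pos, plane in zip(order, planes):
        out[..., pos] = np.floor(plane * 255).astype(np.uint8)
    return out
```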
As shown in fig. 2, fig. 2 (a) is the original image and fig. 2 (b) is the enhanced image obtained by processing the original image with the image enhancement method of the invention; comparing the two, the brightness and contrast of the enhanced image are clearly improved relative to the original.