CN119722434A - Image processing chip, method, device and storage medium - Google Patents
- Publication number: CN119722434A (application no. CN202411911145.7A)
- Authority: CN (China)
- Prior art keywords: image, gray, pixel value, gray level, gray scale
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Processing (AREA)
Abstract
The application discloses an image processing chip, an image processing method, image processing equipment, and a storage medium, belonging to the technical field of image display. The chip comprises an image processor configured to: acquire a first gray level image based on a first image, the first gray level image indicating the brightness levels of the first image; enhance the local contrast of the first gray level image according to the brightness range of the first gray level image to obtain a second gray level image, where the first gray level image comprises a plurality of gray level blocks and the local contrast indicates the brightness differences among the gray level blocks; and perform color restoration on the second gray level image based on the colors of the first image to obtain a second image. Because the local contrast is enhanced with reference to the actual brightness range, the local tone effect of the resulting second image is better and the accuracy is higher.
Description
Technical Field
Embodiments of the present application relate to the technical field of image display, and in particular to an image processing chip, an image processing method, image processing equipment, and a storage medium.
Background
In the technical field of image display, the image captured by an image acquisition device differs from the scene observed by human eyes. The image is therefore corrected during the display process, so as to reduce the difference between the displayed image and what human eyes would observe.
Disclosure of Invention
The embodiment of the application provides an image processing chip, an image processing method, image processing equipment, and a storage medium, which can be used to correct the contrast of an image. The technical solutions are as follows:
In one aspect, there is provided an image processing chip comprising an image processor configured to:
Acquiring a first gray image based on a first image, the first gray image indicating a brightness level of the first image;
According to the brightness range of the first gray level image, enhancing the local contrast of the first gray level image to obtain a second gray level image, wherein the first gray level image comprises a plurality of gray level blocks, and the local contrast indicates the brightness difference of the gray level blocks in the first gray level image;
and performing color restoration on the second gray level image based on the color of the first image to obtain a second image.
In one possible implementation, the image processor, when enhancing the local contrast of the first gray scale image according to the brightness range of the first gray scale image, is configured to:
Acquiring a base layer pixel value and a detail layer pixel value corresponding to any gray scale block, wherein the base layer pixel value is base layer data of the gray scale value of the central pixel point of the any gray scale block, and the detail layer pixel value is detail layer data of the gray scale value of the central pixel point of the any gray scale block;
according to the brightness range, adjusting the pixel value of the base layer to obtain a first pixel value;
and determining a second pixel value corresponding to the central pixel point of any gray scale block according to the first pixel value and the detail layer pixel value, and determining a second gray scale image based on the second pixel value of each gray scale block.
In a possible implementation manner, the image processor, when determining a second pixel value corresponding to a central pixel point of the any gray scale block according to the first pixel value and the detail layer pixel value, is configured to:
And increasing the detail layer pixel value, decreasing the first pixel value, and determining the second pixel value based on the increased detail layer pixel value and the decreased first pixel value.
In one possible implementation, the luminance range is at least one of a local luminance range including a maximum value and a minimum value of gray values of respective pixels in the arbitrary gray scale block or a global luminance range including a maximum value and a minimum value of gray values of respective pixels in the first gray scale image.
In one possible implementation manner, the global brightness range of the first gray scale image is the same as the global brightness range of an adjacent gray scale image, and the adjacent gray scale image is a gray scale image corresponding to an adjacent frame image of the first image.
In one possible implementation, the image processor, when performing color restoration on the second gray level image based on the color of the first image, is configured to:
Determining the number of pixel points with each gray value according to the gray value of each pixel point in the second gray image to obtain the distribution information of the second gray image;
determining a contrast enhancement curve of the second gray level image according to the distribution information;
Adjusting the global contrast of the second gray level image according to the contrast enhancement curve to obtain a third gray level image, wherein the global contrast indicates the brightness difference in the second gray level image;
And performing color restoration on the third gray level image based on the color of the first image to obtain the second image.
In one possible implementation, the image processor, when determining the contrast enhancement curve of the second gray scale image from the distribution information, is configured to:
acquiring distribution information of a fourth gray level image, wherein the fourth gray level image indicates brightness of a third image, and the third image is a previous frame image of the first image;
judging whether the first image is subjected to scene change relative to the third image according to the distribution information of the second gray level image and the distribution information of the fourth gray level image, and determining a contrast enhancement curve of the second gray level image according to a judgment result.
In one possible implementation, the image processor, when determining the contrast enhancement curve of the second gray level image according to the determination result, is configured to:
determining a contrast enhancement curve of the fourth gray level image as a contrast enhancement curve of the second gray level image under the condition that the judging result indicates that the first image does not have scene change relative to the third image;
Or calculating a contrast enhancement curve of the second gray level image according to the distribution information of the second gray level image under the condition that the judging result indicates that the first image is subjected to scene change relative to the third image.
In one possible implementation, the image processor, when determining the contrast enhancement curve of the second gray scale image from the distribution information, is configured to:
Equalizing the number of the pixel points of each gray value based on a number threshold, wherein the number difference of the pixel points of each gray value after equalization is smaller than the number difference of the pixel points of each gray value in the distribution information;
and determining a first enhancement curve of the second gray level image according to the number of the equalized pixel points, and determining the contrast enhancement curve based on the first enhancement curve.
In one possible implementation, the image processor, when equalizing the number of pixels of each gray value based on a number threshold, is configured to:
For a first gray value with the number of the pixel points being larger than a number threshold value, adjusting the number of the pixel points of the first gray value from the first number to the number threshold value;
the number of pixels of a second gray value is adjusted according to a second number, which is obtained by adding the difference between the first number of the respective first gray values and the number threshold, the second gray value being different from the first gray value.
In one possible implementation, the image processor, when determining the contrast enhancement curve based on the first enhancement curve, is configured to:
When the first image is a dim light scene, adjusting the first enhancement curve by using a second enhancement curve to obtain the contrast enhancement curve, wherein the second enhancement curve is used for enhancing the global contrast of the image in the dim light scene.
In one possible implementation, the image processor, when determining the contrast enhancement curve based on the first enhancement curve, is configured to:
And adjusting the first enhancement curve by using a smoothing coefficient to obtain the contrast enhancement curve, wherein the smoothing coefficient is used for reducing abrupt changes between the first image and adjacent frame images of the first image.
In another aspect, the present application provides an image processing method, the method including:
Acquiring a first gray image based on a first image, the first gray image indicating a brightness level of the first image;
According to the brightness range of the first gray level image, enhancing the local contrast of the first gray level image to obtain a second gray level image, wherein the first gray level image comprises a plurality of gray level blocks, and the local contrast indicates the brightness difference of the gray level blocks in the first gray level image;
and performing color restoration on the second gray level image based on the color of the first image to obtain a second image.
In one possible implementation manner, the enhancing the local contrast of the first gray scale image according to the brightness range of the first gray scale image to obtain a second gray scale image includes:
Acquiring a base layer pixel value and a detail layer pixel value corresponding to any gray scale block, wherein the base layer pixel value is base layer data of the gray scale value of the central pixel point of the any gray scale block, and the detail layer pixel value is detail layer data of the gray scale value of the central pixel point of the any gray scale block;
according to the brightness range, adjusting the pixel value of the base layer to obtain a first pixel value;
and determining a second pixel value corresponding to the central pixel point of any gray scale block according to the first pixel value and the detail layer pixel value, and determining a second gray scale image based on the second pixel value of each gray scale block.
In a possible implementation manner, the determining, according to the first pixel value and the detail layer pixel value, a second pixel value corresponding to a center pixel point of the any gray scale block includes:
And increasing the detail layer pixel value, decreasing the first pixel value, and determining the second pixel value based on the increased detail layer pixel value and the decreased first pixel value.
In one possible implementation, the luminance range includes at least one of a local luminance range including a maximum value and a minimum value of gray values of respective pixels in the arbitrary gray scale block or a global luminance range including a maximum value and a minimum value of gray values of respective pixels in the first gray scale image.
In one possible implementation manner, the global brightness range of the first gray scale image is the same as the global brightness range of an adjacent gray scale image, and the adjacent gray scale image is a gray scale image corresponding to an adjacent frame image of the first image.
In one possible implementation manner, the performing color restoration on the second gray level image based on the color of the first image to obtain a second image includes:
Determining the number of pixel points with each gray value according to the gray value of each pixel point in the second gray image to obtain the distribution information of the second gray image;
determining a contrast enhancement curve of the second gray level image according to the distribution information;
The global contrast of the second gray level image is enhanced according to the contrast enhancement curve, so that a third gray level image is obtained, and the global contrast indicates the brightness difference in the second gray level image;
And performing color restoration on the third gray level image based on the color of the first image to obtain the second image.
In a possible implementation manner, the determining the contrast enhancement curve of the second gray level image according to the distribution information includes:
acquiring distribution information of a fourth gray level image, wherein the fourth gray level image indicates brightness of a third image, and the third image is a previous frame image of the first image;
judging whether the first image is subjected to scene change relative to the third image according to the distribution information of the second gray level image and the distribution information of the fourth gray level image, and determining a contrast enhancement curve of the second gray level image according to a judgment result.
In one possible implementation manner, the determining the contrast enhancement curve of the second gray level image according to the determination result includes:
determining a contrast enhancement curve of the fourth gray level image as a contrast enhancement curve of the second gray level image under the condition that the judging result indicates that the first image does not have scene change relative to the third image;
Or calculating a contrast enhancement curve of the second gray level image according to the distribution information of the second gray level image under the condition that the judging result indicates that the first image is subjected to scene change relative to the third image.
In a possible implementation manner, the determining the contrast enhancement curve of the second gray level image according to the distribution information includes:
Equalizing the number of the pixel points of each gray value based on a number threshold, wherein the number difference of the pixel points of each gray value after equalization is smaller than the number difference of the pixel points of each gray value in the distribution information;
and determining a first enhancement curve of the second gray level image according to the number of the equalized pixel points, and determining the contrast enhancement curve based on the first enhancement curve.
In one possible implementation manner, the equalizing the number of pixels of each gray value based on the number threshold includes:
For a first gray value with the number of the pixel points being larger than a number threshold value, adjusting the number of the pixel points of the first gray value from the first number to the number threshold value;
Adjusting the number of pixels of a second gray value according to a second number, the second number being obtained by adding the difference between the first number of each first gray value and the number threshold, the second gray value being different from the first gray value.
In one possible implementation, the determining the contrast enhancement curve based on the first enhancement curve includes:
And when the first image is a dark scene, adjusting the first enhancement curve by using a second enhancement curve to obtain the contrast enhancement curve, wherein the second enhancement curve is used for enhancing the global contrast of the image in the dark scene.
In one possible implementation, the determining the contrast enhancement curve based on the first enhancement curve includes:
And adjusting the first enhancement curve by using a smoothing coefficient to obtain the contrast enhancement curve, wherein the smoothing coefficient is used for reducing abrupt changes between the first image and adjacent frame images of the first image.
In another aspect, an electronic device is provided, where the electronic device includes any one of the image processing chips described above.
In another aspect, there is provided a computer-readable storage medium having stored therein at least one computer program, which is loaded and executed by an image processor to implement any one of the image processing methods described above.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. An image processor of an electronic device reads the computer instructions from the computer-readable storage medium, the image processor executing the computer instructions to cause the electronic device to perform any one of the image processing methods described above.
The technical scheme provided by the application has at least the following beneficial effects:
in the process of enhancing the local contrast, the brightness range of the first gray level image is referenced, so that the local contrast can be enhanced accurately for that brightness range. As a result, the local tone effect of the second image obtained by the adjustment is better, and the accuracy is higher.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating an adjustment of a mapping curve according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a distance mapping provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a first enhancement curve determination process according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a histogram provided by an embodiment of the present application;
FIG. 6 is a schematic view of glare provided by an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating gray scale image adjustment according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an image processing chip according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of another image processing chip according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
In the field of image display technology, there are high-dynamic scenes with high brightness, high color saturation, and high contrast. When an ordinary image sensor shoots a high-dynamic scene, it can either expose the low-light areas properly, which overexposes the highlight areas and loses their detail, or expose the highlight areas properly, which underexposes the low-light areas and makes their detail hard to recognize. For this reason, a high-dynamic-range image sensor or HDR (High Dynamic Range Imaging) synthesis techniques may be used to acquire high dynamic range images that record the details of the high-dynamic scene.
However, a high dynamic range image displayed on some displays, such as 8-bit or 10-bit displays, does not match the real scene perceived by the human eye. Therefore, the brightness of the high dynamic range image is modified using a tone mapping technique, so that the modified image matches the perceived real scene, reducing the difference between the image shown on the display and the scene observed by human eyes.
Tone mapping is an image processing technique that approximately displays a high dynamic range image on a medium with a limited dynamic range. It transforms the scene brightness into a displayable range through a large-scale contrast attenuation while preserving image details, colors, and other information, so that the tone-mapped scene matches the perception of the real scene.
In some cases, tone mapping techniques are divided into GTM (Global Tone Mapping) and LTM (Local Tone Mapping). Global tone mapping maps the same gray value in the image to the same mapped value through a mapping curve, so as to improve global brightness and contrast. Local tone mapping is based on the local feature information of the image and maps the same gray value to different mapped values depending on the position of the original pixel in the image.
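A minimal sketch of the global-mapping property described above — pixels sharing a gray value receive the same mapped value regardless of position — using a lookup table; the gamma-style curve is a hypothetical stand-in for illustration, not a curve from this application:

```python
import numpy as np

def apply_gtm(gray: np.ndarray, curve: np.ndarray) -> np.ndarray:
    """Apply a global tone mapping curve stored as a 256-entry lookup table.

    Every pixel with the same gray value maps to the same output value,
    regardless of its position in the image.
    """
    assert curve.shape == (256,)
    return curve[gray]  # vectorized LUT indexing

# Hypothetical gamma-style brightening curve for illustration.
levels = np.arange(256)
curve = np.clip(255.0 * (levels / 255.0) ** 0.5, 0, 255).astype(np.uint8)

gray = np.array([[0, 64],
                 [64, 255]], dtype=np.uint8)
mapped = apply_gtm(gray, curve)
# Both pixels with gray value 64 receive the same mapped value.
```

Local tone mapping, by contrast, would consult the neighborhood of each pixel, so the two pixels with value 64 could map to different outputs.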
In the related art, during local tone mapping an image is decomposed into a base layer and a detail layer, the base-layer brightness and detail-layer brightness are mapped using a fixed contrast curve, a new gray image is determined from the mapped brightness values, and color restoration is performed on the new gray image to obtain an enhanced image. However, because the contrast curve referenced by local tone mapping is fixed, it cannot adapt to the real dynamic range of the image, so the result of local tone mapping is not ideal.
The embodiment of the application provides an image processing method, and a flow chart of the method is shown in fig. 1, and the method comprises steps 101-103.
In step 101, a first gray image is acquired based on the first image, the first gray image indicating a brightness level of the first image.
In a possible implementation, the image processing method provided by the embodiment of the present application is applied to a processing device, which may be any device with processing capability, for example a terminal or a server. Optionally, an image processing chip is configured in the processing device, and the processing device processes the first image through the image processing chip to obtain the second image. Alternatively, the processing device loads and executes a computer program through a processor to implement the image processing method provided by the embodiment of the present application; that is, the method may be executed on the software side or on the hardware side.
Illustratively, the processing device acquires a first image to be displayed, extracts a gray-scale image indicating a brightness level from the first image, and obtains the first gray-scale image. The first image may be an acquired image, for example, an image acquisition device is configured in the processing device, and the image acquisition device performs image acquisition to obtain the first image, where the image acquisition device may be referred to as an image sensor in some cases. Taking the processing equipment as an example of a mobile phone, the image acquisition device can be a camera on the mobile phone.
Alternatively, the first image may also be an image obtained by accessing a storage space, where the accessed storage space may be a storage space of the processing device or a storage space of another device. The first image may also be an image obtained by searching the internet.
The embodiment of the present application does not limit the image type of the first image. It may be a still image, for example in the JPG (Joint Photographic Experts Group) format. The first image may also be one of a plurality of images, for example a frame of a moving image in the GIF (Graphics Interchange Format) format, or a frame of a video.
The gray-scale image of the first image can be extracted as the first gray-scale image no matter what type of the first image is acquired based on what mode. For the case where the first image includes a plurality of pixel points, the gray value of each pixel point may be determined, resulting in a gray image including a plurality of gray values. The gray value is a luminance value in a luminance channel after three channels of RGB (Red Green Blue) normalization. Alternatively, the maximum value on the RGB channel of each pixel point may be used as the gray value of each pixel point, that is, the gray value of the pixel point is determined using equation 1.
I(x, y) = MAX(HDR_r(x, y), HDR_g(x, y), HDR_b(x, y))    (Equation 1)
In Equation 1, I indicates the gray value, (x, y) are the coordinates identifying a pixel point on the first image, HDR_r(x, y), HDR_g(x, y), and HDR_b(x, y) indicate the brightness of the pixel point on the R, G, and B channels respectively, and MAX selects the maximum of these three brightness values.
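Equation 1 amounts to a per-pixel maximum over the three color channels; a minimal sketch (the small HDR array is illustrative):

```python
import numpy as np

def gray_from_rgb_max(hdr: np.ndarray) -> np.ndarray:
    """Equation 1: I(x, y) = MAX over the R, G, B channel brightnesses.
    `hdr` has shape (H, W, 3); the result has shape (H, W)."""
    return hdr.max(axis=-1)

# One row of two pixels, channels ordered (R, G, B).
hdr = np.array([[[0.2, 0.5, 0.1],
                 [0.9, 0.3, 0.7]]])
I = gray_from_rgb_max(hdr)
# I == [[0.5, 0.9]]
```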
The gray value of the pixel point may be calculated by other methods, for example, the brightness value of any pixel point on the RGB channel is weighted to obtain the gray value of any pixel point. The processing device may determine the first gray image directly from the calculated gray values, generate a gray image from the gray values of the respective pixels, and use the generated gray image as the first gray image. The calculated gray values may also be processed to obtain a first gray image.
In one possible implementation, after the gray value of each pixel is calculated, the gray value may be further compressed, for example using Formula 2:

L_in(x, y) = log(t · I(x, y) + 1) / log(t + 1)    (Formula 2)

In Formula 2, I(x, y) indicates the gray value of a pixel point and t indicates the compression degree of the logarithmic curve: the greater t is, the stronger the compression and the more obvious the brightness improvement in dark areas of the compressed gray image; conversely, the smaller t is, the weaker the compression, the less obvious the brightness improvement in dark areas, and the darker the result. log(t + 1) is a normalization parameter, and L_in indicates the compressed gray value.
After compressing the gray value of each pixel point by using the formula 2, the processing device may determine a first gray image by using the compressed gray value, where the brightness value of any pixel point on the first gray image is the compressed gray value.
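A sketch of the logarithmic compression described above; the exact form used here, L_in = log(t·I + 1) / log(t + 1), is an assumption consistent with the described role of t and of the log(t + 1) normalization parameter:

```python
import numpy as np

def log_compress(I: np.ndarray, t: float) -> np.ndarray:
    """Compress gray values with a logarithmic curve. A larger t lifts
    dark regions more strongly; log(t + 1) normalizes the output to
    [0, 1] when I is in [0, 1]."""
    return np.log(t * I + 1.0) / np.log(t + 1.0)

I = np.array([0.0, 0.1, 1.0])
L_in = log_compress(I, t=10.0)
# Endpoints are preserved (0 -> 0, 1 -> 1) while dark values are lifted.
```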
In step 102, local contrast of the first gray scale image is enhanced according to the brightness range of the first gray scale image, so as to obtain a second gray scale image, wherein the first gray scale image comprises a plurality of gray scale blocks, and the local contrast indicates brightness difference of the gray scale blocks in the first gray scale image.
In one possible implementation, if the first image is a high dynamic range image, that is, an image captured from a high-dynamic scene, directly displaying it may produce a brightness difference between the displayed first image and the high-dynamic scene as observed by the human eye. For example, when the image acquisition device is a limited-dynamic-range medium that can only approximately display a high dynamic range image: if exposure is chosen so that the dark areas are properly exposed when capturing the first image, the highlight areas are easily overexposed and their details lost; if exposure is chosen so that the highlight areas are properly exposed, the low-light areas are underexposed and their details are hard to recognize.
Therefore, the dynamic range of the image needs to be compressed while details are retained, using a tone mapping technique, so as to transform the scene brightness into a displayable range through a large-scale contrast attenuation while preserving image details, colors, and other information, so that the tone-mapped scene matches the perception of the real scene. The dynamic range DR of the image, that is, the brightness range of the image, is determined from the maximum and minimum brightness values, for example as the ratio of the maximum brightness value to the minimum brightness value.
Alternatively, the tone adjustment of the first image may be achieved by enhancing the local contrast of the first gray scale image. The process of enhancing local contrast may in some cases be referred to as LTM (Local Tone Mapping), which adjusts gray values based on the local feature information of the image, that is, the position of the pixel point in the first image. In the mapping process using LTM, even if the gray values of two pixels in the first gray image are the same, the mapped gray values differ when the positions of the two pixels differ.
In one possible implementation, the process of enhancing the local contrast of the first gray image according to the positions of the pixel points to obtain the second gray image comprises: obtaining a base layer pixel value and a detail layer pixel value corresponding to any gray scale block, where the base layer pixel value is the base layer data of the gray value of the central pixel point of the gray scale block and the detail layer pixel value is the detail layer data of that gray value; adjusting the base layer pixel value according to the brightness range to obtain a first pixel value; determining a second pixel value corresponding to the central pixel point of the gray scale block according to the first pixel value and the detail layer pixel value; and determining the second gray image according to the second pixel values of the respective gray scale blocks.
Optionally, after the first gray scale image is acquired, a sliding window may be used to divide the first gray scale image into gray scale blocks, where the number of gray scale blocks is the same as the number of pixels in the first gray scale image. The size of the sliding window is not limited in the embodiment of the present application, and may be any size set based on experience, for example, 5×5, 7×7, or other sizes, and in addition, the sliding window may be other shapes. The processing device scans each pixel point position in the first gray scale image in turn using a sliding window, for example, in a top-to-bottom, left-to-right order.
Next, the process of dividing into a plurality of gray scale blocks is illustrated with a sliding window of size 7×7. For each pixel point in the first gray level image, a 7×7 window area centered on that pixel point is cut out to obtain a gray scale block. For a pixel located at the image boundary, the window centered on it extends beyond the image area, so the window area is filled, for example by mirroring.
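The window cutting with mirror filling at the borders can be sketched as follows; the 7×7 size matches the example above, and `np.pad(..., mode="reflect")` stands in for the mirror filling:

```python
import numpy as np

def window_at(gray: np.ndarray, y: int, x: int, size: int = 7) -> np.ndarray:
    """Cut the size x size gray scale block centered on pixel (y, x).
    Border pixels get full windows thanks to mirror padding."""
    r = size // 2
    padded = np.pad(gray, r, mode="reflect")
    # After padding by r, the original pixel (y, x) sits at (y + r, x + r),
    # which is the center of the slice below.
    return padded[y:y + size, x:x + size]

gray = np.arange(100, dtype=np.float64).reshape(10, 10)
block = window_at(gray, 0, 0)  # corner pixel: relies entirely on mirroring
# One gray scale block per pixel, so a 10x10 image yields 100 blocks.
```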
Optionally, for the case that the processing device performs layering processing into the base layer and the detail layer in the process of enhancing the local contrast, the processing device further performs layering by using a filter after acquiring the gray scale block, so as to obtain a base layer image at low frequency and a detail layer image at high frequency. In the above embodiment of taking a 7×7 window area centered on each pixel point, after determining the window area, filtering processing is further performed inside the window area; the filter used may be an edge-preserving filter, for example a guided filter.
In the case where the guided filter includes two image input ports, gray-scale blocks may be fed to the two input ports, one serving as the input image L_in and the other as the guide image. The guided filter references the guide image and filters the input image to obtain a base layer image B at low frequency; subtracting the base layer image from the input image yields the detail layer image at high frequency, D = L_in − B.
The processing device may also filter the first gray-scale image first, and then cut the base layer image and the detail layer image obtained by filtering with the sliding window to obtain the detail layer block and base layer block corresponding to each gray-scale block. Whatever order the layering and cutting operations are executed in, the gray value of the central pixel point of any gray-scale block at the base layer can be used as the base layer pixel value, and its gray value at the detail layer as the detail layer pixel value. The base layer contains the basic structure and main information of the image, and the detail layer contains the detail information of the image, such as texture and edges.
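As a sketch of the base/detail layering, the following implements a minimal self-guided filter in which the same image serves as both input and guide, as the text describes; the helper names, radius r, and regularization eps are illustrative assumptions rather than the patent's parameters.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window, borders padded by replication."""
    k = 2 * r + 1
    pad = np.pad(a, r, mode="edge")
    H, W = a.shape
    out = np.zeros((H, W), dtype=np.float64)
    for dy in range(k):           # sum the k*k shifted copies, then average
        for dx in range(k):
            out += pad[dy:dy + H, dx:dx + W]
    return out / (k * k)

def guided_filter(I, p, r=3, eps=1e-3):
    """Guided filter (He et al. style): local linear model a*I + b."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mI * mI
    cov_Ip = box_mean(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

L_in = np.full((16, 16), 0.5)     # a flat gray block as a toy input
B = guided_filter(L_in, L_in)     # low-frequency base layer
D = L_in - B                      # high-frequency detail layer, D = L_in - B
```

By construction B + D reproduces the input exactly, and on a flat region the detail layer is zero, which is the behavior the layering step relies on.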
After the base layer pixel value and the detail layer pixel value corresponding to the gray scale block are determined, the base layer pixel value can be adjusted according to the brightness range to obtain a first pixel value. Wherein the luminance range includes at least one of a local luminance range including a maximum value and a minimum value of gray values of respective pixels in any one gray scale block and a global luminance range including a maximum value and a minimum value of gray values of respective pixels in the first gray scale image.
The maximum value of a gray-scale block in the local luminance range may in some cases be referred to as the local maximum Max_local, and the minimum value as the local minimum Min_local. After dividing the gray-scale blocks, the processing apparatus may take the maximum of the gray values in a gray-scale block as the local maximum Max_local and the minimum of those gray values as the local minimum Min_local.
The maximum value in the global luminance range may in some cases be referred to as the global maximum Max_global, and the minimum value as the global minimum Min_global. After the first gray-scale image is acquired, the processing apparatus may take the maximum of the gray values in the first gray-scale image as the global maximum Max_global and the minimum as the global minimum Min_global.
Alternatively, the global maximum and the global minimum may be determined according to adjacent frame images, for example, the global brightness range of the first gray scale image is the same as the global brightness range of the adjacent gray scale image, where the adjacent gray scale image is the gray scale image corresponding to the adjacent frame image of the first image. In this case, the global maximum and the global minimum of each gray-scale image may be determined according to an image dynamic range actually generated by an automatic exposure module of the image capturing apparatus during capturing the image. The base layer images of the adjacent frame images are ensured to be displayed in the same brightness range by controlling the values of the global maximum value and the global minimum value of the adjacent frame images to be the same, so that image flickering is avoided.
The first pixel value may be obtained by adjusting the base layer pixel value for different luminance ranges, including but not limited to the following two ways.
According to the first adjustment mode, the base layer pixel value is adjusted according to the local brightness range to obtain the first pixel value; the adjustment process is shown in formula 3.
B_local in formula 3 indicates the locally adjusted base layer pixel value, Min_local indicates the local minimum of the gray block, Max_local indicates the local maximum of the gray block, and f_b(x) is a mapping function for mapping the base layer; it belongs to a global mapping curve and is used to map the local maximum and local minimum into the 0-1 interval so as to improve the brightness and contrast of the gray block. Referring to formula 3, when mapping the base layer pixel value, the local maximum and minimum of the gray-scale block are also referenced, so that local image information is taken into account on top of the global mapping curve, ensuring the local contrast of the gray-scale block after mapping.
According to the second adjustment mode, the base layer pixel value is adjusted according to the global brightness range to obtain the first pixel value; the adjustment process is shown in formula 4.
B_global in formula 4 indicates the globally adjusted base layer pixel value, Min_global indicates the global minimum of the first gray image, Max_global indicates the global maximum of the first gray image, and f_b(x) is the mapping function for mapping the base layer. The gray value is corrected in the process of mapping the base layer pixel value by using the global maximum and minimum of the first gray image. Even if the first image is one frame in a video and the global brightness range changes as the scene changes from frame to frame, the global mapping curve can be adjusted so that it meets the mapping requirement of each frame and the mapping accuracy is guaranteed.
Alternatively, only the first adjustment mode may be selected and executed, with B_local as the first pixel value; only the second adjustment mode may be selected and executed, with B_global as the first pixel value; or the first and second adjustment modes may be executed in combination, that is, after formulas 3 and 4 are executed, formula 5 is further executed to obtain the first pixel value.
B_fusion = fusion_w × B_local + (1 − fusion_w) × B_global (formula 5)
In formula 5, B_fusion is the gray value obtained by fusing B_local and B_global, and fusion_w indicates the weight referenced in the fusion; fusion_w may be a fixed value set empirically, or a weight calculated from luminance values or from local statistics.
Formula 5 obtains the new gray value by a weighted sum of B_local and B_global. Because the local operation uses little statistical information, the mapping of B_local is strongly influenced by local extremes, yet it effectively improves local contrast and brightness; the global operation has abundant statistical information but cannot account well for local contrast and brightness. Weighted summation via formula 5 therefore balances the two, yielding a first pixel value that improves local contrast and brightness as well as overall contrast and brightness.
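Formula 5's weighted fusion can be sketched directly; the fixed weight of 0.25 and the toy input values are illustrative assumptions.

```python
import numpy as np

def fuse_base(b_local, b_global, fusion_w=0.5):
    """Formula 5: blend the locally and globally adjusted base layers."""
    return fusion_w * b_local + (1.0 - fusion_w) * b_global

b_local = np.array([0.8, 0.6])    # locally adjusted base layer values
b_global = np.array([0.4, 0.2])   # globally adjusted base layer values
b_fusion = fuse_base(b_local, b_global, fusion_w=0.25)
```

With fusion_w = 0.25 the result leans toward the global mapping; fusion_w = 1 recovers the purely local result, matching the "first mode only" option.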
Since the mapping curve varies with the local and global maxima and minima, the above adjustment process can be understood as dynamically adjusting f_b(x) according to the image content. A corrected f_b mapping curve is shown in fig. 2. Fig. 2 represents gray values as double (double-precision floating point) values; the abscissa shows gray values before mapping and the ordinate gray values after mapping. Fig. 2 (1) shows the original f_b(x) mapping curve, fig. 2 (2) shows the f_b(x) curve locally corrected by the first adjustment mode, and fig. 2 (3) shows the f_b(x) curve globally corrected by the second adjustment mode.
In addition, because the noise of the first gray image resides in the detail layer while the above adjustment modes act only on the base layer pixel values, the mapping stretch does not amplify the noise of the first gray image; the noise is thus effectively controlled and the accuracy of the mapped first pixel value is ensured.
After the first pixel value of each gray-scale block is determined, the second pixel value corresponding to the central pixel point of any gray-scale block is determined according to the first pixel value and the detail layer pixel value, and the second gray-scale image is obtained based on the second pixel values of the gray-scale blocks. Illustratively, the detail layer pixel value is increased, the first pixel value is decreased, and the second pixel value is determined from the increased detail layer pixel value and the decreased first pixel value; the adjustment process can be seen in formula 6.
T=k 1×B+k2×fd (D) (formula 6)
T in formula 6 indicates the second pixel value obtained by the adjustment, and f_d is a mapping function for mapping the detail layer pixel value located at the detail layer. B is the first pixel value from the above embodiment: B corresponds to B_local if the first pixel value was obtained by the first adjustment mode, to B_global if obtained by the second adjustment mode, and to B_fusion if the first pixel value was obtained by combining the first and second adjustment modes. D corresponds to the detail layer pixel value in the above embodiment. k_1 ∈ (0, 1) and k_2 ≥ 1.
Through formula 6, the local detail of the central pixel point of any gray-scale block is enhanced by a factor of k_2 and then superimposed on the base layer B weakened by a factor of k_1, highlighting the contrast and variation of local detail and increasing visual impact. After local tone mapping with layer-decomposition recombination enhancement, the local differences of T are amplified, i.e., the local contrast is increased.
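Formula 6's recombination can be sketched as follows, under the assumption that the detail mapping f_d is the identity (the patent does not reproduce f_d in this text); the k_1 and k_2 values are illustrative.

```python
import numpy as np

def recompose(base, detail, k1=0.8, k2=1.5):
    """Formula 6 with f_d taken as identity (an assumption): weaken the
    base by k1 in (0,1), boost the detail by k2 >= 1, then recombine."""
    return k1 * base + k2 * detail

B = np.array([0.5, 0.5, 0.5])      # first pixel values (base layer)
D = np.array([-0.1, 0.0, 0.1])     # detail layer pixel values
T = recompose(B, D)
# the spread of T exceeds the spread of the plain sum B + D,
# i.e., the local contrast has been increased
```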
The processing device may perform the operation of step 102 on each gray scale block in the first gray scale image to obtain second pixel values for each gray scale block, thereby obtaining a second gray scale image composed of a plurality of second pixel values.
In step 103, color reproduction is performed on the second gray scale image based on the color of the first image, resulting in a second image.
In one possible scenario, the second gray image may have poor perceptual quality over the global luminance range of the image. For example, in the process of increasing the detail layer pixel value, step 102 does not reference the global brightness range of the image, so the resulting second gray image perceives the global brightness range poorly and may appear grayish or hazy.
In addition, if the first image is a frame of a video, then when the actual dynamic range of each frame cannot be perceived during video processing, the brightness ranges of the mapping results of adjacent frames may differ and video flickering may occur, affecting the video processing effect. Secondly, after recombination enhancement by layer decomposition, although the local contrast is increased, the global contrast of the image is low and the displayed image looks hazy. Therefore, after the local contrast enhancement of the first gray image, the global contrast is further enhanced based on the dynamic range of the image, that is, the global brightness range; for example, the gray values of the pixel points in the second gray image are adjusted.
The method includes: determining the number of pixel points at each gray value according to the gray values of the pixel points in the second gray image to obtain the distribution information of the second gray image; determining a contrast enhancement curve of the second gray image according to the distribution information; and enhancing the global contrast of the second gray image according to the contrast enhancement curve to obtain a third gray image, where the global contrast indicates the brightness difference in the second gray image.
In one possible implementation, a histogram may be used to count the distribution information of the second gray image. When a histogram is used, the numerical intervals correspond to gray values and the data points correspond to pixel points of the second gray image; the distribution information can be counted using formula 7.
Hist = CalcHist(T(x, y)) (formula 7)
Wherein Hist indicates the distribution information, CalcHist(·) indicates computing the image histogram, and T(x, y) indicates the gray value of a pixel point of the second gray image. Using formula 7, the number of pixel points at any gray value can be determined from the gray values of the pixel points, thereby obtaining the distribution information of the second gray image.
For the case where the gray values of the second gray image lie in the range 0 to 255, there are 256 gray values in total, and the number of bins of the image histogram calculated by formula 7 may be 256 or another number, such as 64 or another divisor of 256. Reducing the bin number of the image histogram reduces its computational overhead and improves the efficiency of the subsequent global enhancement that uses the histogram.
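The reduced-bin histogram of formula 7 can be sketched with NumPy; the choice of 64 bins and the integer-division binning (4 gray levels per bin) are illustrative assumptions.

```python
import numpy as np

def calc_hist(gray, bins=64):
    """Histogram of an 8-bit gray image with a reduced bin count:
    64 bins groups every 4 consecutive gray levels into one bin."""
    return np.bincount(gray.ravel() // (256 // bins), minlength=bins)

img = np.array([[0, 3, 4], [255, 252, 128]], dtype=np.uint8)
hist = calc_hist(img, bins=64)
# gray 0 and 3 fall in bin 0; 4 in bin 1; 252 and 255 in bin 63; 128 in bin 32
```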
Alternatively, the distribution information of the second gray level image may be counted in other manners, for example, the number of pixels corresponding to each gray level value is counted in a table form. Regardless of the manner in which the distribution information of the second gray scale image is obtained, a contrast enhancement curve for global enhancement may be determined based on the distribution information, including but not limited to the following two.
The first determination process: obtaining the distribution information of a fourth gray image, where the fourth gray image indicates the brightness of a third image and the third image is the previous frame of the first image; judging whether a scene change has occurred in the first image relative to the third image according to the distribution information of the second gray image and the distribution information of the fourth gray image; and determining the contrast enhancement curve of the second gray image according to the judgment result.
In one possible implementation, the first image has a third image located in the previous frame, e.g., the first image is an image in the video, and the first image and the third image are adjacent frame images in the video. The processing device may determine a contrast enhancement curve for the second gray scale image of the first image based on a fourth gray scale image of the third image, the fourth gray scale image being a gray scale image that has not been globally adjusted. The process of acquiring the distribution information of the fourth gray scale image of the third image is similar to that of the first image, and reference is made to the related description, and the description thereof will not be repeated.
Alternatively, an error between the second gray image and the fourth gray image may be calculated using their distribution information to reflect the similarity of the two images. The calculated error is, for example, a covariance, which represents the joint variation of two variables: if the two variables tend to vary consistently, i.e., when one exceeds its expected value the other also exceeds its expected value, the covariance between them is positive; if they tend to vary inconsistently, i.e., when one exceeds its expected value the other does not, the covariance between them is negative.
In one possible case, a first covariance between the distribution information of the second gray image and the distribution information of the fourth gray image may be calculated, together with a second covariance between the distribution information of the second gray image and itself (that is, its variance). For example, the frequencies are calculated from the distribution information using formula 8, and the first covariance and the second covariance are then calculated from the frequencies.
Wherein p_i indicates the frequency, Hist_i indicates the number of pixel points in any bin of the distribution information, i is any integer indexing the bins, and NUM indicates the total number of pixel points in the gray image, i.e., p_i = Hist_i / NUM. The proportion of each bin's pixel count to the total number of pixels in the gray image, that is, the frequency, can be determined by formula 8.
After the frequencies of the second gray image and the fourth gray image are calculated from their distribution information, the first covariance may be calculated from these frequencies using formulas 9 and 10.
In formula 9, k indicates the number of bins in the distribution information and may be 256 or 64; p_n[i] indicates the frequency of the second gray image, p_p[i] indicates the frequency of the fourth gray image, and corr_pn_pp indicates the calculated correlation coefficient. In formula 10, cov_pn_pp indicates the first covariance between the frequency of the fourth gray image and the frequency of the second gray image, p̄_n indicates the mean calculated from the frequency of the second gray image, and p̄_p indicates the mean calculated from the frequency of the fourth gray image.
Alternatively, the covariance between the frequency of the second gray image and itself may be calculated using formulas 11 and 12.
In formula 11, corr_pn denotes a correlation coefficient, k denotes the number of bins in the distribution information, and p_n[i] denotes the frequency of the second gray image. In formula 12, var_pn indicates the second covariance between the frequency of the second gray image and itself, and p̄_n indicates the mean calculated from the frequency of the second gray image.
After the first covariance and the second covariance are calculated using formulas 9 to 12, the distance between them may be calculated using formula 13, so as to determine the similarity between the second gray image and the fourth gray image based on that distance.
diff = ABS(cov_pn_pp − var_pn) (formula 13)
Wherein diff indicates the distance between the first covariance and the second covariance, cov_pn_pp indicates the first covariance, var_pn indicates the second covariance, and ABS is the absolute-value function.
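The scene-change distance of formulas 8-13 can be sketched as below; since the patent's exact covariance normalization is not reproduced in this text, the plain sample covariance and variance used here are an assumption.

```python
import numpy as np

def hist_freq(hist):
    """Formula 8: frequency of each bin = count / total pixel count."""
    return hist / hist.sum()

def scene_change_distance(hist_curr, hist_prev):
    """Formulas 10, 12, 13 (sketched): |cov(p_n, p_p) - var(p_n)|.
    When the two histograms match, cov equals var and the distance is 0."""
    pn = hist_freq(hist_curr)
    pp = hist_freq(hist_prev)
    cov = np.mean((pn - pn.mean()) * (pp - pp.mean()))   # first covariance
    var = np.mean((pn - pn.mean()) ** 2)                 # second covariance
    return abs(cov - var)

h1 = np.array([10, 20, 30, 40], dtype=float)
d_same = scene_change_distance(h1, h1.copy())                        # no change
d_diff = scene_change_distance(h1, np.array([40, 30, 20, 10.0]))     # changed
```

A distance of zero corresponds to the cov_pn_pp = var_pn case in the text, i.e., no scene change; a larger distance indicates lower similarity between the frames.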
Since the distance between the first covariance and the second covariance can reflect the similarity between the second gray level image and the fourth gray level image, the scene change can be judged based on the distance, and the distance and the similarity threshold can be compared. Wherein the similarity threshold is any value set empirically, such as 0, or other non-negative number.
For example, when the distance is not greater than the similarity threshold, the similarity between the second gray image and the fourth gray image is judged to be high. Since the second gray image is derived from the first image and the fourth gray image from the third image, a high similarity between them means the proportions of the gray values are close, which indicates that the image contents of the first image and the third image are similar, that is, no scene change has occurred in the first image relative to the third image.
For the case where the similarity threshold is 0, the above determination process can also be expressed as judging whether the first covariance equals the second covariance: if cov_pn_pp = var_pn, that is, the first covariance equals the second covariance, it can be determined that no scene change has occurred in the first image relative to the third image.
In addition, besides directly comparing the distance with the similarity threshold, the distance may first be mapped into the 0-1 interval and the mapped distance then compared with the similarity threshold. Fig. 3 is a schematic diagram of the distance mapping provided by the embodiment of the present application; the abscissa of fig. 3 indicates the distance before mapping and the ordinate the distance after mapping, which may in some cases be referred to as the weight of the distance. Because the values of the distance are not fixed and can differ greatly in magnitude, mapping the distance into 0-1 better reflects the magnitude relationship between the first covariance and the second covariance, and thereby the similarity between the second gray image and the fourth gray image, so that the result can be judged accurately.
However the determination result is obtained, the contrast enhancement curve of the second gray image may be determined based on it: when the result indicates that no scene change has occurred in the first image relative to the third image, the contrast enhancement curve of the fourth gray image is determined as the contrast enhancement curve of the second gray image; when the result indicates that a scene change has occurred, the contrast enhancement curve of the second gray image is calculated according to the distribution information of the second gray image.
If no scene change has occurred in the first image relative to the third image, the global brightness ranges of the second gray image and the fourth gray image are similar and the pixel counts at each gray value are similar. Global contrast enhancement with a contrast enhancement curve adjusts the number of pixels at each gray value so that the distribution of pixel counts over the gray values becomes more uniform; for example, the brightness distribution of the pixels is mathematically readjusted so that the adjusted histogram has the maximum dynamic range and the pixel counts of the bins are close.
Therefore, the contrast enhancement curve of the fourth gradation image can be multiplexed, and the contrast enhancement curve for adjusting the fourth gradation image can be taken as the contrast enhancement curve of the second gradation image. The process of calculating the contrast enhancement curve of the fourth gray image is similar to that of the second gray image, and reference may be made to the description of the first or second determination process, and the description thereof will not be repeated here.
For similar reasons, it may be determined that the similarity between the second gray scale image and the fourth gray scale image is low when the first image has a scene change relative to the third image, and in this case, since the contrast enhancement curve of the fourth gray scale image cannot be multiplexed, the contrast enhancement curve of the second gray scale image is recalculated based on the distribution information of the second gray scale image, for example, the second determination process is performed to calculate the contrast enhancement curve.
The second determination process: equalizing the pixel counts of the gray values based on a number threshold, where the pixel-count differences among the gray values after equalization are smaller than the pixel-count differences among the gray values in the distribution information; determining a first enhancement curve of the second gray image according to the equalized pixel counts; and determining the contrast enhancement curve based on the first enhancement curve.
Alternatively, the quantity threshold may be any value based on experience and implementation environment settings, such as the total number of pixels of the second gray level image, and the processing device may determine the quantity threshold by equation 14.
CLIPLIMIT = clip_ratio × NUM (formula 14)
Wherein CLIPLIMIT is the number threshold, which may in some cases be referred to as the clipping value; clip_ratio is a preset contrast parameter, any value in the 0-1 interval; and NUM is the total number of pixels of the second gray image. After the number threshold is determined, the pixel counts of the gray values of the second gray image may be equalized based on it. The equalization process includes, but is not limited to: for each first gray value whose pixel count exceeds the number threshold, reducing its pixel count from the first number to the number threshold; summing the differences between the first numbers of the respective first gray values and the number threshold to obtain a second number; and, based on the second number, adjusting the pixel counts of second gray values, the second gray values being different from the first gray values.
Optionally, the processing device determines the second number according to the number threshold and the number of pixels of the first gray value in the distribution information. Fig. 4 is a schematic diagram of a determination process of a first enhancement curve according to an embodiment of the present application, where the abscissa of (1) - (4) in fig. 4 is a gray level, the ordinate is the number of pixel points, the abscissa of (5) in fig. 4 is a gray level before mapping, and the ordinate is a gray level after mapping. Fig. 4 (1) indicates distribution information before adjustment, that is, a histogram determined based on gray values of respective pixels in the second gray level image, and the broken line in fig. 4 (2) indicates a number threshold value, and the second number indicates a sum of portions above the number threshold value in all bins in the histogram, which may be referred to as totalExcess in some cases.
After the second number is determined, it may be divided equally among the gray values; the gray values receiving the shares are the second gray values, that is, gray values whose pixel count is smaller than the number threshold, and the overall rise of the histogram is L = totalExcess / bin_num, where bin_num is the bin number of the histogram. The above procedure can be summarized as follows: for bins whose magnitude exceeds CLIPLIMIT, the bin value is set directly to CLIPLIMIT, that is, the histogram is clipped at CLIPLIMIT; if the magnitude lies between Upper and CLIPLIMIT, where Upper = CLIPLIMIT − L, the bin value is padded up to CLIPLIMIT; if the magnitude is below Upper, L pixel points are added directly.
After the above operation, the number of pixels used for filling is less than totalExcess, that is, some pixels remain unallocated; the remaining number is shown in (3) of fig. 4, which indicates the distribution of the unallocated pixel counts in the second gray image. In this case, the processing apparatus may continue to divide the remaining number equally among the gray values whose magnitudes are still smaller than CLIPLIMIT, yielding the histogram shown in (4) of fig. 4, which indicates the pixel count of each gray value after equalization. Alternatively, the processing device may simply add L to all bins without performing the allocation again one by one. The above process may be described as taking from the surplus to supplement the deficit, or as peak clipping and valley filling: the clipped excess is averaged over the other bins so that the brightness gain is spread relatively evenly across all pixels.
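The clipping-and-redistribution step (formula 14 plus the simpler variant that adds L directly to all bins) can be sketched as follows; the clip_ratio value and the tiny 4-bin histogram are illustrative.

```python
import numpy as np

def clip_redistribute(hist, clip_ratio=0.01):
    """Formula 14 plus peak clipping and valley filling (sketched):
    clip every bin to clip_limit and spread the clipped excess evenly."""
    num = hist.sum()
    clip_limit = clip_ratio * num                       # formula 14
    excess = np.maximum(hist - clip_limit, 0).sum()     # totalExcess
    clipped = np.minimum(hist, clip_limit)              # clip the peaks
    return clipped + excess / hist.size                 # add L to every bin

hist = np.array([100.0, 10.0, 10.0, 80.0])
out = clip_redistribute(hist, clip_ratio=0.25)
# clip_limit = 50, excess = 80, L = 20: peaks drop, valleys rise,
# and the total pixel count is preserved
```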
Regardless of the manner in which the number of pixels of each gray value is equalized, a contrast enhancement curve may be determined based on the number of equalized pixels, for example, using equation 15, by accumulating the proportion of the number of pixels in each bin in the histogram in the image, a cumulative distribution function is obtained as the contrast enhancement curve.
Wherein C1_k is the contrast enhancement curve, k indexes the bins and in formula 15 ranges over 0-255, j is the bin identifier, and n_j is the number of pixel points in bin j. Applying formula 15 to the histogram of fig. 4 (4) yields the first enhancement curve shown in fig. 4 (5).
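The cumulative-distribution enhancement curve of formula 15, as described in the text (accumulate the proportion of pixels in each bin), can be sketched as:

```python
import numpy as np

def enhancement_curve(hist):
    """Formula 15 (as described): cumulative proportion of pixels up to
    each bin, C1_k = sum over j <= k of n_j / NUM."""
    return np.cumsum(hist) / hist.sum()

hist = np.array([10.0, 30.0, 40.0, 20.0])  # an equalized toy histogram
curve = enhancement_curve(hist)
# the curve is nondecreasing and ends at 1, as a CDF must
```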
Optionally, the processing device may select to directly execute the second determining process, calculate to obtain the first enhancement curve based on the distribution information of the second gray level image, or execute the first determining process first, and selectively execute the second determining process based on the determination result in the first determining process, so as to avoid the need of executing the calculation operation of the enhancement curve on each frame of gray level image, and further reduce the calculation cost.
In addition, after the first enhancement curve is calculated, the first enhancement curve can be directly used as a contrast enhancement curve, and the contrast enhancement curve can be obtained by adjusting the first enhancement curve. For example, the contrast enhancement curve is adjusted by an adjustment process including, but not limited to, the following.
And in the first adjustment process, when the first image is a dark scene, the first enhancement curve is adjusted by using the second enhancement curve, and the second enhancement curve is used for enhancing the global contrast of the image in the dark scene.
Wherein dim-light scenes include, but are not limited to, backlight or night scenes, whose histograms are special. An image of a backlight scene has a highlighted region and a darker backlit region, so its histogram tends to be high at both ends and low in the middle, as shown in fig. 5; the abscissa in fig. 5 indicates the gray value (represented as a double-type value) and the ordinate the number of pixel points. In this case, although the enhancement adapts to the histogram of the image, the contrast parameter used for clipping is uniform, so a contrast parameter suited to a normal scene may not suit a backlight scene.
For an image of a night scene, the pixels in the histogram are concentrated in the dark region, and clipping spreads these concentrated pixels into bins of larger pixel values. This can cause some highlights in the picture, or enlarge the glare and highlight overflow of the light-source region in special processing scenes such as an in-vehicle ISP (Image Signal Processor), which may affect image quality during driving, as shown in fig. 6. The left image in fig. 6 is the first gray image before adjustment, the right image in fig. 6 is the gray image adjusted directly using the first enhancement curve, and the highlight overflow in the right image is larger than in the left image.
Therefore, whether the first image is located in a dim light scene can be judged according to the distribution information of the second gray level image, and the second enhancement curve is used to adjust the first enhancement curve in the case that the first image is located in the dim light scene. For example, formulas 16-18 may be used to determine whether the first image is in a dim light scene.
dark_pixel = a × NUM (formula 16)
num_tmp = Σ_{i=0}^{dark_bin_num} hist[i] (formula 17)
ratio = min(num_tmp / dark_pixel, 1) (formula 18)
Wherein dark_pixel is the black-pixel threshold, a is a pixel proportion threshold set based on experience, indicating what proportion of pixels in the image must be black pixels for the image to be judged a dark scene, and NUM is the total number of pixels in the second gray level image. num_tmp indicates the number of black pixels in the second gray level image, hist[i] is the number of pixels with the gray value i in the histogram, and dark_bin_num is the maximum gray value of a black pixel; for gray values of 0-255, dark_bin_num can be set to 10 based on experience, that is, pixels with a gray value not greater than 10 are all judged as black pixels. ratio is the proportion of black pixels, and min outputs the minimum value, i.e. ratio = num_tmp / dark_pixel in the case that num_tmp / dark_pixel is less than 1, and ratio = 1 in the case that it is not less than 1.
After the ratio is calculated, it may be determined that the first image is not located in the dark scene in the case that the ratio is less than 1, and that the first image is located in the dark scene in the case that the ratio is not less than 1. For the first image in the dark scene, a second enhancement curve for the dark scene may also be obtained; the second enhancement curve is determined in a similar manner to the first enhancement curve, using a preset contrast parameter clip_ratio_dark for dark conditions, which is described in detail in the above embodiments and will not be repeated here. Thereafter, the first enhancement curve is adjusted using formula 19.
C_2 = C_dark × ratio + (1 - ratio) × C_1 (formula 19)
In formula 19, C_2 is the adjusted enhancement curve, C_dark is the second enhancement curve, and C_1 is the first enhancement curve. The ratio is referenced again when adjusting the first enhancement curve because a certain number of black pixels also exist in images of normal scenes; using the ratio protects a normal scene from being misjudged as a dark scene and having its first enhancement curve erroneously adjusted.
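A minimal sketch of the dark-scene detection and curve blending of formulas 16-19, assuming a value for the empirically set threshold a (the description does not fix one):

```python
import numpy as np

def blend_dark_scene_curve(hist, c1, c_dark, a=0.3, dark_bin_num=10):
    """Detect a dark scene (formulas 16-18) and blend curves (formula 19).

    a = 0.3 is an assumed empirical threshold: the proportion of black
    pixels above which the image counts as a dark scene.
    """
    num = hist.sum()                              # NUM: total pixels
    dark_pixel = a * num                          # formula 16
    num_tmp = hist[:dark_bin_num + 1].sum()       # formula 17: black pixels
    ratio = min(num_tmp / dark_pixel, 1.0)        # formula 18
    # formula 19: the darker the scene, the more weight on the dark curve
    return c_dark * ratio + (1.0 - ratio) * c1
```

For a normal scene the ratio stays well below 1, so C_1 dominates and the curve is barely changed, which is exactly the protection against misjudgment described above.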
In the second adjustment process, the first enhancement curve is adjusted using a smoothing coefficient, where the smoothing coefficient is used to reduce abrupt changes between the first image and the adjacent frame image of the first image.
For example, for adjacent frame images in a video, the enhancement curves are obtained by clipping with the same contrast parameter; in this case, even if the histograms of adjacent frames differ substantially, the clipped enhancement curves can basically ensure that the brightness of the front and rear frames does not change too much, avoiding video flicker and insufficient contrast. The smoothing coefficient additionally smooths the enhancement curves of adjacent frame images on this basis; by smoothing the curves, the brightness and contrast of the adjacent frames adjusted by them are smoothed as well, further avoiding abrupt changes of brightness and contrast between adjacent frames. The process of smoothing the enhancement curve using the smoothing coefficient can be seen in formula 20.
C_s = smooth_r × C_x + (1 - smooth_r) × pre_C_x (formula 20)
Wherein C_s is the smoothed enhancement curve, smooth_r is the smoothing coefficient, C_x is the enhancement curve of the second gray level image, and pre_C_x is the enhancement curve of the gray level image of the adjacent frame image of the first image. Further, in addition to smoothing the enhancement curve of the second gray level image, the processing apparatus may also store the smoothed enhancement curve as the enhancement curve of the fourth gray level image of the third image for use with the next frame, that is, pre_C_x = C_s. This simple temporal smoothing further avoids abrupt changes of brightness and contrast.
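The temporal smoothing of formula 20, including carrying the smoothed curve forward as pre_C_x for the next frame, may be sketched as follows; the default smooth_r value is an assumption:

```python
import numpy as np

class CurveSmoother:
    """Temporal smoothing of per-frame enhancement curves (formula 20).

    smooth_r = 0.5 is an assumed default; the description does not fix a value.
    """
    def __init__(self, smooth_r=0.5):
        self.smooth_r = smooth_r
        self.pre_cx = None                # pre_C_x: curve carried from the last frame

    def smooth(self, cx):
        if self.pre_cx is None:           # first frame: nothing to blend with yet
            self.pre_cx = cx
            return cx
        cs = self.smooth_r * cx + (1.0 - self.smooth_r) * self.pre_cx
        self.pre_cx = cs                  # pre_C_x = C_s: carry smoothed curve forward
        return cs
```

Calling smooth() once per frame yields curves whose frame-to-frame changes are damped, which damps the brightness and contrast changes of the adjusted frames in turn.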
Alternatively, only the first adjustment process may be performed, in which case C_2 is used as the contrast enhancement curve of the second gray level image; or only the second adjustment process may be performed, in which case C_x in formula 20 is the first enhancement curve C_1, and the smoothed enhancement curve C_s is used as the contrast enhancement curve of the second gray level image. Alternatively, the first and second adjustment processes may be performed together, that is, C_x in formula 20 is the C_2 calculated using formula 19, and the processing device uses the C_s calculated using formula 20 as the contrast enhancement curve.
In addition, the above description of the first and second adjustment processes takes the contrast enhancement curve obtained by the second determination process as an example, but in practical application, the adjustment processes may also be performed on the contrast enhancement curve obtained by the first determination process. For example, if the contrast enhancement curve of the fourth gray level image has not been adjusted, the processing device may adjust it and use the adjusted curve as the contrast enhancement curve of both the second gray level image and the fourth gray level image.
Regardless of the manner of calculating the contrast enhancement curve of the second gray level image, global contrast enhancement can be performed on the second gray level image based on the contrast enhancement curve to obtain a third gray level image. Referring to fig. 7, fig. 7 (1) is the histogram of the second gray image, where the abscissa indicates the number of pixel points and the ordinate indicates gray values (here represented as uint8 (unsigned 8-bit integer) numerical values); fig. 7 (2) is the contrast enhancement curve determined based on the histogram, where the abscissa indicates the gray value before mapping and the ordinate indicates the gray value after mapping; fig. 7 (3) is the histogram of the third gray image obtained by enhancing the second gray image based on the contrast enhancement curve, where the abscissa indicates gray values and the ordinate indicates the number of pixel points. The above process of enhancing the second gray image by the contrast enhancement curve may be referred to as Contrast Limited Adaptive Histogram Equalization (CLAHE).
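Applying the contrast enhancement curve globally amounts to a per-pixel look-up (gray value before mapping to gray value after mapping, as in fig. 7 (2)); a minimal sketch:

```python
import numpy as np

def apply_enhancement_curve(gray, curve):
    """Globally enhance a uint8 gray image by applying the contrast
    enhancement curve as a 256-entry look-up table, yielding the
    third gray image."""
    lut = np.clip(np.round(curve), 0, 255).astype(np.uint8)
    return lut[gray]   # fancy indexing maps every pixel through the curve
```

Because the curve has one entry per gray value, the mapping cost is independent of how the curve was computed or adjusted.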
After the third gray image is determined, color reproduction of the third gray image may be performed using the color of the first image to obtain the second image. For example, color reproduction is performed using equation 21.
Wherein T_e is the third gray level image, c is a color channel of the image, and r is a color correction coefficient with r ∈ (0, 1); the larger r is, the heavier the color correction, and conversely the closer the result is to the gray image. src_c is the color channel c of the first image, and I is the gray value of the first image, indicating the luminance of the first image, which may be the luminance value calculated using formula 1.
The third gray level image is subjected to color restoration through formula 21 to obtain a second image that is a color image, whose brightness display is closer to human-eye viewing, effectively improving the human-computer interaction experience. Formula 21 applies when the first image is a non-gray image; if the first image is itself a gray image, the process of performing color restoration on the third gray level image based on the color of the first image can be understood as directly using the third gray level image obtained by local and global enhancement as the second image to be displayed.
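Formula 21 itself is not reproduced in the text; the sketch below assumes the common tone-mapping form out_c = T_e × (src_c / I)^r, which matches the described behavior of r (larger r gives heavier color, smaller r approaches the gray image), so the exact formula should be treated as an assumption:

```python
import numpy as np

def color_restore(t_e, src, i, r=0.6):
    """Color reproduction of the third gray image T_e from the colors of
    the first image (formula-21 style). out_c = T_e * (src_c / I)^r is an
    assumed variant; r = 0.6 is an assumed default in (0, 1)."""
    i = np.maximum(i.astype(np.float64), 1e-6)          # avoid division by zero
    out = t_e[..., None] * (src / i[..., None]) ** r    # apply per color channel c
    return np.clip(out, 0, 255).astype(np.uint8)
```

With r = 0 the channel ratio vanishes and the output equals T_e (the gray image); with r close to 1 the original chromatic ratios are restored in full, consistent with the description of r above.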
In one possible scenario, the processing device may display the second image after processing to obtain the second image. If the processing device has a display function, for example, the processing device comprises a display screen, the processing device may display the second image via the display screen. If the processing device does not have a display function, the processing device may send the second image to a display device, such as a terminal that establishes a communication connection with the processing device, for example, for displaying the second image by the display device.
In summary, in the image processing method provided by the embodiment of the application, in the process of enhancing the local contrast, the local contrast is accurately enhanced with respect to the brightness range of the first gray level image, so that the local tone effect of the second image obtained by adjustment is better and the accuracy is higher. Secondly, an effective global contrast enhancement method is provided, which can adaptively generate a global contrast enhancement curve according to the global statistical information, namely the distribution information, of the image to enhance its global contrast. In addition, the application can also correct the adaptively generated global contrast enhancement curve according to the distribution information to enhance dim light scenes.
Referring to fig. 8, an embodiment of the present application provides an image processing chip including an image processor 901. The image processor 901 is, for example, an ISP (Image Signal Processor), which is hardware or a processing unit for processing an image signal. Optionally, the image processor 901 is configured to obtain a first gray image based on the first image, where the first gray image indicates the brightness level of the first image.
In one possible implementation, referring to fig. 9, an image processing chip is connected to the image sensor, and the image processing chip may receive a first image acquired by the image sensor. Or the image processing chip includes a memory in which an image is stored, and the image processor 901 can read the image stored in the memory as a first image to be processed.
After the image processing chip acquires the first image, the image processor 901 may process the first image to obtain a first gray image; for this process, reference may be made to the process in which the processing device processes the first image to obtain the first gray image in the embodiment shown in fig. 1, which is not repeated here.
In one possible scenario, the image processor 901 may further enhance a local contrast of the first gray scale image according to the brightness range of the first gray scale image, to obtain a second gray scale image, where the first gray scale image includes a plurality of gray scale blocks, and the local contrast indicates a brightness difference of the gray scale blocks in the first gray scale image. For example, a base layer pixel value and a detail layer pixel value corresponding to any gray scale block are obtained, the base layer pixel value is base layer data of the gray scale value of the central pixel point of any gray scale block, the detail layer pixel value is detail layer data of the gray scale value of the central pixel point of any gray scale block, the base layer pixel value is adjusted according to the brightness range to obtain a first pixel value, a second pixel value corresponding to the central pixel point of any gray scale block is determined according to the first pixel value and the detail layer pixel value, and a second gray scale image is determined based on the second pixel value of each gray scale block.
Regarding the process of acquiring the base layer pixel value and the detail layer pixel value, reference is made to the process of acquiring them in step 102 in the embodiment shown in fig. 1, and the description thereof will not be repeated. The embodiment of the present application does not limit the luminance range referred to when the image processor 901 adjusts the base layer pixel value; it includes at least one of a local luminance range including the maximum value and the minimum value among the gray values of the pixel points in any gray scale block, or a global luminance range including the maximum value and the minimum value among the gray values of the pixel points in the first gray scale image. Optionally, the global luminance range of the first gray image is the same as the global luminance range of the adjacent gray image, where the adjacent gray image refers to the gray image of the adjacent frame image of the first image.
The maximum value and the minimum value of any gray scale block may be referred to as a local maximum value and a local minimum value, and the process of the image processor 901 adjusting the pixel value of the base layer by using the local maximum value and the local minimum value to obtain the first pixel value may be described in relation to the first adjustment mode in step 102 of the embodiment shown in fig. 1. The maximum value and the minimum value in the first gray scale image may be referred to as a global maximum value and a global minimum value, and the process of adjusting the base layer pixel value by the image processor 901 to obtain the first pixel value by using the global maximum value and the global minimum value may be referred to as the related description of the second adjustment manner in step 102 of the embodiment shown in fig. 1.
Illustratively, the process of adjusting the base layer pixel values by the image processor 901 using the local maximum value, the local minimum value, the global maximum value, and the global minimum value to obtain the first pixel value is similar to the process of performing the first adjustment mode and the second adjustment mode to obtain the first pixel value in the step 102 of the embodiment shown in fig. 1, which are described above, and reference is made to the related description.
Regardless of the manner in which the image processor 901 adjusts the base layer pixel values to obtain the first pixel values, the second pixel values may be determined according to the first pixel values and the detail layer pixel values to obtain a second gray scale image including the second pixel values. For example, image processor 901 may increase the detail layer pixel value, decrease the first pixel value, and determine the second pixel value based on the increased detail layer pixel value and the decreased first pixel value. For detailed description, refer to the process of determining the second pixel value in step 102 in the embodiment shown in fig. 1, which is not described herein.
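The base/detail decomposition and recombination described above may be sketched as follows; the mean filter, the gain detail_gain, and the attenuation base_scale are assumptions, since the text defers the exact adjustment to step 102 of the embodiment shown in fig. 1:

```python
import numpy as np

def box_blur(g, k=3):
    """Mean filter with edge padding (stand-in for the base layer filter)."""
    pad = k // 2
    p = np.pad(g, pad, mode='edge')
    out = np.zeros(g.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + g.shape[0], dx:dx + g.shape[1]]
    return out / (k * k)

def local_contrast_enhance(gray, k=3, detail_gain=1.5, base_scale=0.8):
    """Sketch of the local contrast step: split the gray image into a base
    layer (local mean) and a detail layer (residual), attenuate the base
    layer (the decreased first pixel value), boost the detail layer, and
    recombine into the second pixel values. k, detail_gain and base_scale
    are assumed values."""
    g = gray.astype(np.float64)
    base = box_blur(g, k)                   # base layer pixel values
    detail = g - base                       # detail layer pixel values
    first = base * base_scale               # decreased first pixel value
    second = first + detail * detail_gain   # increased detail, recombined
    return np.clip(second, 0, 255).astype(np.uint8)
```

Attenuating the base layer compresses large-scale brightness while boosting the detail residual preserves local edges, which is the brightness-difference enhancement the gray scale blocks are meant to achieve.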
In one possible implementation, after acquiring the second gray level image, the image processor 901 may perform color reproduction on the second gray level image based on the color of the first image to obtain the second image. For example, the number of pixels with each gray value is determined according to the gray value of each pixel in the second gray image to obtain the distribution information of the second gray image, the contrast enhancement curve of the second gray image is determined according to the distribution information, the global contrast of the second gray image is enhanced according to the contrast enhancement curve to obtain the third gray image, the global contrast indicates the brightness difference in the second gray image, and the third gray image is subjected to color restoration based on the color of the first image to obtain the second image.
Alternatively, the process of the image processor 901 determining the distribution information of the second gray scale image is similar to the process of determining the distribution information of the second gray scale image in step 103 shown in fig. 1, and the process of the image processor 901 determining the contrast enhancement curve of the second gray scale image according to the distribution information is also similar to the process of determining the contrast enhancement curve in step 103 shown in fig. 1, including but not limited to the following two determination manners.
Determining a first mode, acquiring distribution information of a fourth gray level image, wherein the fourth gray level image indicates brightness degree of a third image, the third image is a previous frame image of the first image, judging whether scene change occurs to the first image relative to the third image according to the distribution information of the second gray level image and the distribution information of the fourth gray level image, and determining a contrast enhancement curve of the second gray level image according to a judgment result.
The second mode is determined, the number of the pixels of each gray value is balanced based on a number threshold, the number difference of the pixels of each gray value after balancing is smaller than the number difference of the pixels of each gray value in the distribution information, a first enhancement curve of the second gray image is determined according to the number of the pixels after balancing, and a contrast enhancement curve is determined based on the first enhancement curve.
Alternatively, the first determination method is similar to the first determination method in step 103 shown in fig. 1, and the second determination method is similar to the second determination method in step 103 shown in fig. 1, and reference is made to the related description, and the description is not repeated here.
In one possible implementation manner, after the image processor 901 acquires the first enhancement curve, the first enhancement curve may be directly used as a contrast enhancement curve of the second gray level image, or the first enhancement curve may be adjusted, and the adjusted first enhancement curve is used as the contrast enhancement curve. In this case, the image processor 901 may perform at least one of adjusting the first enhancement curve with a second enhancement curve for enhancing the global contrast of the image in the dark scene, or adjusting the first enhancement curve with a smoothing coefficient for reducing abrupt changes between the first image and neighboring frame images of the first image, in the case where the first image is a dark scene.
Illustratively, the process of adjusting the first enhancement curve by the image processor 901 is similar to the first and second adjustment processes in step 103 shown in fig. 1, which are described above, and reference is made to the related description, and the description thereof will not be repeated.
Regardless of the manner in which the image processor 901 obtains the contrast enhancement curve, the contrast enhancement curve can be used to adjust the second gray level image to obtain a third gray level image, after which the third gray level image is subjected to color reproduction to obtain a second image whose display is closer to human-eye viewing. The process of the image processor 901 adjusting the second gray scale image is similar to the process of adjusting the second gray scale image in step 103 of the embodiment shown in fig. 1, and the process of the image processor 901 performing color restoration on the third gray scale image is similar to the process of performing color restoration on the third gray scale image in step 103 of the embodiment shown in fig. 1, which will not be repeated here.
In the process of enhancing the local contrast, the chip can refer to the brightness range of the first gray level image and accurately enhance the local contrast with respect to that brightness range, so that the local tone effect of the second image obtained by adjustment is better and the accuracy is higher.
In an exemplary embodiment, there is also provided an electronic apparatus provided with the image processing chip shown in fig. 8 or 9, which is capable of implementing the image processing methods provided in the above-described respective method embodiments. The electronic device is, for example, a terminal or a server. By way of example, the terminal may be any electronic product that can interact with a user by one or more of a keyboard, a touch pad, a touch screen, a remote control, a voice interaction or handwriting device, such as a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC), a tablet, a notebook, a desktop, a smart car, a smart television, a smart speaker or player, etc. Terminals may also be referred to by other names such as user equipment, portable terminals, laptop terminals, desktop terminals, etc. The electronic device may be a server, may be a server cluster formed by a plurality of servers, or may be any other device capable of implementing the image processing method described above.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein at least one computer program loaded and executed by an image processor of a computer to cause the computer to implement any one of the image processing methods described above.
In one possible implementation, the computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and so on.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The image processor of the computer device reads the computer instructions from the computer-readable storage medium, and the image processor executes the computer instructions so that the computer device performs any of the image processing methods described above.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the first grayscale image referred to in the present application is acquired with sufficient authorization.
It should be understood that references herein to "a plurality" are to two or more. "and/or" describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate that there are three cases of a alone, a and B together, and B alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
The above embodiments are merely exemplary embodiments of the present application and are not intended to limit the present application, any modifications, equivalent substitutions, improvements, etc. that fall within the principles of the present application should be included in the scope of the present application.
Claims (16)
1. An image processing chip, the chip comprising an image processor configured to:
Acquiring a first gray image based on a first image, the first gray image indicating a brightness level of the first image;
According to the brightness range of the first gray level image, enhancing the local contrast of the first gray level image to obtain a second gray level image, wherein the first gray level image comprises a plurality of gray level blocks, and the local contrast indicates the brightness difference of the gray level blocks in the first gray level image;
and performing color reproduction on the second gray level image based on the color of the first image to obtain a second image.
2. The chip of claim 1, wherein the image processor, when enhancing the local contrast of the first gray scale image according to the brightness range of the first gray scale image, is configured to:
Acquiring a base layer pixel value and a detail layer pixel value corresponding to any gray scale block, wherein the base layer pixel value is base layer data of the gray scale value of the central pixel point of the any gray scale block, and the detail layer pixel value is detail layer data of the gray scale value of the central pixel point of the any gray scale block;
according to the brightness range, adjusting the pixel value of the base layer to obtain a first pixel value;
And determining a second pixel value corresponding to the central pixel point of any gray scale block according to the first pixel value and the detail layer pixel value, and determining a second gray scale image based on the second pixel value corresponding to the central pixel point of each gray scale block.
3. The chip of claim 2, wherein the image processor, when determining a second pixel value corresponding to a center pixel of the any gray scale block from the first pixel value and the detail layer pixel value, is configured to:
And increasing the detail layer pixel value, decreasing the first pixel value, and determining the second pixel value based on the increased detail layer pixel value and the decreased first pixel value.
4. A chip according to any one of claims 1-3, wherein the luminance range comprises at least one of a local luminance range comprising a maximum and a minimum of the gray values of the respective pixels in the any one gray scale block or a global luminance range comprising a maximum and a minimum of the gray values of the respective pixels in the first gray scale image.
5. The chip of claim 4, wherein a global luminance range of the first gray scale image is the same as a global luminance range of an adjacent gray scale image, the adjacent gray scale image being a gray scale image corresponding to an adjacent frame image of the first image.
6. A chip according to any one of claims 1-3, wherein the image processor, when performing color reproduction on the second gray scale image based on the color of the first image, is configured to:
Determining the number of pixel points with each gray value according to the gray value of each pixel point in the second gray image to obtain the distribution information of the second gray image;
determining a contrast enhancement curve of the second gray level image according to the distribution information;
The global contrast of the second gray level image is enhanced according to the contrast enhancement curve, so that a third gray level image is obtained, and the global contrast indicates the brightness difference in the second gray level image;
And performing color restoration on the third gray level image based on the color of the first image to obtain the second image.
7. The chip of claim 6, wherein the image processor, when determining the contrast enhancement curve for the second gray scale image from the distribution information, is configured to:
acquiring distribution information of a fourth gray level image, wherein the fourth gray level image indicates brightness of a third image, and the third image is a previous frame image of the first image;
judging whether the first image is subjected to scene change relative to the third image according to the distribution information of the second gray level image and the distribution information of the fourth gray level image, and determining a contrast enhancement curve of the second gray level image according to a judgment result.
8. An image processing method, the method comprising:
Acquiring a first gray image based on a first image, the first gray image indicating a brightness level of the first image;
According to the brightness range of the first gray level image, enhancing the local contrast of the first gray level image to obtain a second gray level image, wherein the first gray level image comprises a plurality of gray level blocks, and the local contrast indicates the brightness difference of the gray level blocks in the first gray level image;
and performing color reproduction on the second gray level image based on the color of the first image to obtain a second image.
9. The method of claim 8, wherein the enhancing the local contrast of the first gray scale image based on the brightness range of the first gray scale image to obtain the second gray scale image comprises:
Acquiring a base layer pixel value and a detail layer pixel value corresponding to any gray scale block, wherein the base layer pixel value is base layer data of the gray scale value of the central pixel point of the any gray scale block, and the detail layer pixel value is detail layer data of the gray scale value of the central pixel point of the any gray scale block;
according to the brightness range, adjusting the pixel value of the base layer to obtain a first pixel value;
And determining a second pixel value corresponding to the central pixel point of any gray scale block according to the first pixel value and the detail layer pixel value, and determining a second gray scale image based on the second pixel value corresponding to the central pixel point of each gray scale block.
10. The method according to claim 9, wherein determining a second pixel value corresponding to a center pixel point of the arbitrary gray scale block according to the first pixel value and the detail layer pixel value comprises:
And increasing the detail layer pixel value, decreasing the first pixel value, and determining the second pixel value based on the increased detail layer pixel value and the decreased first pixel value.
11. The method of any of claims 8-10, wherein the luminance range comprises at least one of a local luminance range comprising a maximum and a minimum of gray values for respective pixels in the any gray scale block or a global luminance range comprising a maximum and a minimum of gray values for respective pixels in the first gray scale image.
12. The method of claim 11, wherein the global luminance range of the first gray scale image is the same as the global luminance range of an adjacent gray scale image, the adjacent gray scale image being a gray scale image corresponding to an adjacent frame image of the first image.
13. The method according to any one of claims 8-10, wherein the performing color restoration on the second gray scale image based on the color of the first image to obtain a second image comprises:
determining the number of pixel points at each gray value according to the gray values of the pixel points in the second gray scale image, to obtain distribution information of the second gray scale image;
determining a contrast enhancement curve of the second gray scale image according to the distribution information;
enhancing the global contrast of the second gray scale image according to the contrast enhancement curve to obtain a third gray scale image, the global contrast indicating the brightness difference within the second gray scale image; and
performing color restoration on the third gray scale image based on the color of the first image to obtain the second image.
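The pipeline of claim 13 can be sketched end to end: build the distribution information (a histogram), derive a contrast enhancement curve from it, and restore color by rescaling the original channels. The patent does not fix the curve's shape, so histogram equalization is used here as one plausible choice, and the ratio-based color restoration is likewise an illustrative assumption.

```python
def gray_histogram(gray, levels=256):
    """Distribution information of claim 13: the number of pixel points
    at each gray value of the second gray scale image."""
    hist = [0] * levels
    for row in gray:
        for v in row:
            hist[v] += 1
    return hist

def contrast_curve(hist):
    """One plausible contrast enhancement curve derived from the
    distribution information: the histogram-equalization mapping
    (cumulative distribution rescaled to the gray range)."""
    total = sum(hist) or 1
    levels = len(hist)
    curve, acc = [], 0
    for h in hist:
        acc += h
        curve.append(round(acc / total * (levels - 1)))
    return curve  # curve[v] is the enhanced gray value for input gray v

def restore_color(rgb, gray_before, gray_after):
    """Color restoration sketch: scale the first image's RGB channels by
    the ratio of the enhanced gray value to the original gray value."""
    ratio = gray_after / max(gray_before, 1)
    return tuple(min(255, round(c * ratio)) for c in rgb)
```

Applying the curve as a per-pixel lookup table (`curve[v]`) produces the third gray scale image; `restore_color` then maps each pixel back to color.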
14. The method of claim 13, wherein the determining a contrast enhancement curve of the second gray scale image according to the distribution information comprises:
acquiring distribution information of a fourth gray scale image, wherein the fourth gray scale image indicates the brightness of a third image, and the third image is the previous frame image of the first image; and
judging, according to the distribution information of the second gray scale image and the distribution information of the fourth gray scale image, whether a scene change has occurred in the first image relative to the third image, and determining the contrast enhancement curve of the second gray scale image according to the judgment result.
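A minimal sketch of the scene-change judgment in claim 14, under stated assumptions: the distance metric (normalized sum of absolute histogram differences), the `threshold`, and the `blend` factor are illustrative choices, since the patent does not specify the comparison criterion or how the judgment result selects the curve.

```python
def scene_changed(hist_cur, hist_prev, threshold=0.25):
    """Hypothetical scene-change test (claim 14): compare the gray-value
    distributions of the current frame and the previous frame."""
    total = sum(hist_cur) or 1
    # Normalized sum of absolute differences, in [0, 2].
    diff = sum(abs(a - b) for a, b in zip(hist_cur, hist_prev)) / total
    return diff > threshold

def pick_curve(new_curve, prev_curve, changed, blend=0.3):
    """Determine the contrast enhancement curve from the judgment result:
    on a scene change adopt the new curve at once; otherwise blend it with
    the previous frame's curve to suppress temporal flicker."""
    if changed or prev_curve is None:
        return list(new_curve)
    return [round((1 - blend) * p + blend * n)
            for p, n in zip(prev_curve, new_curve)]
```

Blending across frames when no scene change is detected is a standard way to keep global tone mapping temporally stable.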
15. An electronic device comprising an image processing chip as claimed in any one of claims 1 to 7.
16. A computer-readable storage medium, wherein at least one computer program is stored in the storage medium, and the at least one computer program is loaded and executed by an image processor to implement the image processing method according to any one of claims 8 to 14.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411911145.7A CN119722434A (en) | 2024-12-23 | 2024-12-23 | Image processing chip, method, device and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119722434A true CN119722434A (en) | 2025-03-28 |
Family
ID=95078783
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411911145.7A Pending CN119722434A (en) | 2024-12-23 | 2024-12-23 | Image processing chip, method, device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119722434A (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |