
CN109348089B - Night scene image processing method, device, electronic device and storage medium - Google Patents

Night scene image processing method, device, electronic device and storage medium

Info

Publication number
CN109348089B
CN109348089B (granted publication of application CN201811399541.0A)
Authority
CN
China
Prior art keywords
image
target area
noise reduction
preset
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811399541.0A
Other languages
Chinese (zh)
Other versions
CN109348089A (en)
Inventor
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811399541.0A
Publication of CN109348089A
Priority to PCT/CN2019/101430 (WO2020103503A1)
Application granted
Publication of CN109348089B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a night scene image processing method and device, electronic equipment and a storage medium, belonging to the field of imaging technology. The method includes: sequentially capturing multiple frames of images according to a preset exposure compensation mode; recognizing a preset frame image among the multiple frames by using a preset image recognition model to determine a target area and a non-target area in the preset frame image; and performing noise reduction on the target area and the non-target area of the multiple frames with different noise reduction parameter values, respectively, to generate a target image, where the noise reduction parameter value corresponding to the target area is larger than that corresponding to the non-target area. The method therefore guarantees both the overall noise reduction effect of the image and the cleanness of the target area while preserving the detail of the non-target area, improving the quality of the captured image and the user experience.

Description

Night scene image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to a method and an apparatus for processing a night scene image, an electronic device, and a storage medium.
Background
With the development of science and technology, intelligent mobile terminals such as smartphones and tablet computers have become increasingly popular. Most of them are equipped with cameras, and as the processing power of mobile terminals grows and camera technology advances, built-in cameras have become more capable and the quality of captured images has risen. Because mobile terminals are easy to operate and convenient to carry, taking photographs with a smartphone or tablet has become part of everyday life.
While the intelligent mobile terminal brings convenience to daily photographing, users' expectations for image quality keep rising. To meet these expectations in night-scene shooting, a mobile terminal typically captures multiple frames with different exposure durations and fuses them into a night-scene image. With this approach, however, reducing noise in the night-sky area is a major difficulty.
Because the night sky is relatively dark, the sky area of the final fused image is usually taken from frames with longer exposure times, and such frames generally contain more noise, so the noise level of the night-sky area is high. The applicant has found that when a conventional noise reduction method is applied to a night-scene image, the noise reduction effect in the sky area and the preservation of detail in other areas cannot both be guaranteed, which degrades the user experience.
Disclosure of Invention
The night scene image processing method and device, electronic equipment and storage medium of the present application are intended to solve the problem in the related art that a conventional noise reduction method applied to a night-scene image cannot simultaneously guarantee the noise reduction effect in the sky area and the preservation of detail in other areas, which degrades the user experience.
An embodiment of the application provides a night scene image processing method, which includes: sequentially capturing multiple frames of images according to a preset exposure compensation mode; recognizing a preset frame image among the multiple frames by using a preset image recognition model, so as to determine a target area and a non-target area in the preset frame image; and performing noise reduction on the target area and the non-target area of the multiple frames with different noise reduction parameter values, respectively, to generate a target image, wherein the noise reduction parameter value corresponding to the target area is larger than that corresponding to the non-target area.
Another embodiment of the application provides a night scene image processing apparatus, including: an acquisition module configured to sequentially capture multiple frames of images according to a preset exposure compensation mode; a determining module configured to recognize a preset frame image among the multiple frames by using a preset image recognition model, so as to determine a target area and a non-target area in the preset frame image; and a noise reduction module configured to perform noise reduction on the target area and the non-target area of the multiple frames with different noise reduction parameter values, respectively, to generate a target image, wherein the noise reduction parameter value corresponding to the target area is larger than that corresponding to the non-target area.
A further embodiment of the application provides an electronic device, including a camera module, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the night scene image processing method described above.
Another embodiment of the application provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the night scene image processing method described above.
Another embodiment of the application provides a computer program which, when executed by a processor, implements the night scene image processing method of the embodiments of the application.
With the night scene image processing method and device, electronic device, computer-readable storage medium and computer program provided by the embodiments of the application, multiple frames of images are captured sequentially according to a preset exposure compensation mode; a preset frame image among them is recognized by a preset image recognition model to determine a target area and a non-target area; and the target area and the non-target area of the multiple frames are then denoised with different noise reduction parameter values to generate a target image, the value for the target area being larger than that for the non-target area. By segmenting the preset frame image with the recognition model and denoising the target and non-target areas separately according to their characteristics, the overall noise reduction effect of the image and the cleanness of the target area are guaranteed while the detail of the non-target area is preserved, which improves the quality of the captured image and the user experience.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a night scene image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another night-scene image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a night scene image processing apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, where like or similar reference numerals denote like or similar elements throughout. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present application and should not be construed as limiting it.
The embodiments of the application provide a night scene image processing method to address the problem in the related art that a conventional noise reduction method applied to a night-scene image cannot simultaneously guarantee the noise reduction effect in the sky area and the preservation of detail in other areas, which degrades the user experience.
With the night scene image processing method provided by the embodiments of the application, multiple frames of images are captured sequentially according to a preset exposure compensation mode; a preset frame image among them is recognized by a preset image recognition model to determine a target area and a non-target area; and the target area and the non-target area of the multiple frames are denoised with different noise reduction parameter values to generate a target image, the value for the target area being larger than that for the non-target area. By segmenting the preset frame image with the recognition model and denoising the two kinds of area separately according to their characteristics, the overall noise reduction effect of the image and the cleanness of the target area are guaranteed while the detail of the non-target area is preserved, which improves the quality of the captured image and the user experience.
The night-scene image processing method, apparatus, electronic device, storage medium, and computer program provided by the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a night scene image processing method according to an embodiment of the present application.
As shown in fig. 1, the night scene image processing method includes the following steps:
step 101, sequentially collecting multiple frames of images according to a preset exposure compensation mode.
The preset exposure compensation mode may include parameters such as an exposure value, sensitivity, and an exposure duration corresponding to each frame of image. In the embodiment of the application, multiple frames of images can be sequentially collected according to the parameters in the preset exposure compensation mode.
It should be noted that, because the overall brightness of a night scene is low, frames with different exposure values can be captured by adjusting the exposure duration of each frame, yielding images with different dynamic ranges; synthesizing them produces an image with a higher dynamic range and improves its overall brightness and quality.
Furthermore, a preset exposure compensation mode can be determined in real time according to the current shooting scene so as to obtain the best shooting effect. That is, in a possible implementation form of the embodiment of the present application, before the step 101, the method may include:
and determining the preset exposure compensation mode according to the illuminance of the current shooting scene and the current jitter degree of the camera module.
It is understood that the exposure value of each frame may differ with the illuminance of the current shooting scene. For example, when the illuminance of the current shooting scene is high, the exposure value of each frame can be reduced appropriately; when the illuminance is low, the exposure value of each frame can be increased appropriately.
In the embodiment of the application, the current shake degree of the camera module can be determined by acquiring the current gyroscope (Gyro-sensor) information of the electronic device.
The gyroscope, also called an angular velocity sensor, measures the angular velocity of rotation as the device deflects or tilts. In an electronic device, the gyroscope measures rotation and deflection well, so the actual motion of the user can be analyzed and judged accurately. The gyroscope information (gyro information) of the electronic device may include motion components along the three dimensions of three-dimensional space, which can be expressed as the X-axis, Y-axis and Z-axis directions, the three axes being mutually perpendicular.
It should be noted that, in a possible implementation form of the embodiment of the present application, the current shake degree of the camera module may be determined from the current gyro information of the electronic device: the larger the sum of the absolute values of the gyro motion in the three directions, the larger the shake degree of the camera module. Specifically, thresholds for this sum may be preset, and the current shake degree determined from the relationship between the acquired sum of absolute gyro values in the three directions and the preset thresholds.
For example, assume the preset thresholds are a first threshold A, a second threshold B and a third threshold C, where A < B < C, and the currently acquired sum of absolute gyro values in the three directions is S. If S < A, the current shake degree of the camera module is determined to be "no shake"; if A < S < B, "slight shake"; if B < S < C, "small shake"; and if S > C, "large shake".
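As a concrete illustration of the threshold comparison above, the sketch below classifies the shake degree from the summed absolute gyro readings. The threshold values, units and function name are assumptions chosen for illustration; the text only fixes the ordering A < B < C.

```python
# Illustrative sketch of the threshold comparison described above.
# The threshold values and category labels are assumptions, not values taken from the text.

def classify_shake(gyro_xyz, a=0.2, b=0.6, c=1.2):
    """Map the sum of absolute gyro readings (rad/s) to a shake level."""
    s = sum(abs(v) for v in gyro_xyz)   # |gyro_x| + |gyro_y| + |gyro_z|
    if s < a:
        return "no shake"
    elif s < b:
        return "slight shake"
    elif s < c:
        return "small shake"
    else:
        return "large shake"

# Example: a hand-held reading of (0.15, 0.20, 0.10) sums to 0.45 -> "slight shake"
print(classify_shake((0.15, 0.20, 0.10)))
```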
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, the number of thresholds and their specific values can be preset as needed, and the mapping between gyro information and the shake degree of the camera module can be preset according to the relationship between the gyro information and the thresholds.
It can be understood that the number of captured images and their sensitivity affect the total shooting duration; if shooting takes too long, the shake of a hand-held camera module worsens and image quality suffers. Therefore, the number of images to be captured, the sensitivity of each frame and the exposure duration of each frame can be determined according to the current shake degree of the camera module, so that the shooting duration is kept within a suitable range and ghosting caused by increased shake is avoided.
Specifically, in a possible implementation form of the embodiment of the present application, the determining the preset exposure compensation mode according to the illuminance of the current shooting scene and the current shake degree of the camera module includes:
determining a target exposure value of each frame of image in the multi-frame images according to the illuminance of the current shooting scene;
determining the sensitivity of each frame of image according to the current shake degree of the camera module;
and determining the exposure time of each frame of image according to the sensitivity of each frame of image and the target exposure value of each frame of image.
The exposure value reflects the amount of light passing through the lens during the exposure time.
The sensitivity, also called the ISO value, is an index of how sensitive film is to light. Film with lower sensitivity needs a longer exposure time to produce the same image as film with higher sensitivity. The sensitivity of a digital camera is an analogous index; its ISO can be raised by increasing the light sensitivity of the sensor or by combining adjacent photosites. Note that, in both digital and film photography, using a relatively high sensitivity to shorten the exposure time generally introduces more noise and therefore reduces image quality.
The exposure duration refers to the time during which light passes through the lens.
The exposure value is related to the aperture size, the exposure duration and the sensitivity. The aperture, i.e. the clear aperture of the lens, determines how much light passes per unit time. With the same sensitivity and the same aperture for every frame, the larger the exposure required by the illuminance of the current scene, the longer the exposure duration of each frame.
In this embodiment of the application, a photometric module in the camera module may be used to obtain the illuminance of the current shooting scene, and an Automatic Exposure Control (AEC) algorithm is used to determine the exposure value corresponding to that illuminance. In a shooting mode that captures multiple frames, the exposure value of each frame may differ so as to obtain images with different dynamic ranges; the synthesized image then has a higher dynamic range and better overall brightness and quality. When the frames are captured, different exposure compensation strategies are applied, and the target exposure value of each frame is determined from the exposure compensation strategy and the current illuminance.
In the embodiment of the application, a preset exposure compensation strategy can apply a different compensation to each frame, so that each frame corresponds to a different exposure value and images with different dynamic ranges are obtained.
In the embodiment of the application, the preset exposure compensation strategy refers to a combination of exposure values (EV) preset for the frames. In its original definition, an exposure value does not denote an exact number but rather "all combinations of camera aperture and exposure duration that give the same exposure amount". Sensitivity, aperture and exposure duration together determine the exposure, and different parameter combinations can produce equal exposures, i.e. the same EV: for example, at the same sensitivity, an exposure time of 1/125 second with an F/11 aperture gives the same exposure, and hence the same EV, as 1/250 second with F/8.0. EV 0 corresponds to the exposure obtained with sensitivity 100, an aperture of F/1 and an exposure time of 1 second; increasing the exposure by one stop (doubling the exposure time, doubling the sensitivity, or opening the aperture by one stop) increases the EV by 1, so the exposure at 1 EV is twice that at 0 EV. Table 1 shows the EV values corresponding to individual changes of exposure time, aperture and sensitivity.
TABLE 1
(Table 1 is provided as an image in the original publication; it tabulates EV values against individually varied exposure time, aperture and sensitivity.)
Since photography entered the digital era, in-camera metering has become very powerful. EV is often used to denote one step on the exposure scale, and many cameras allow exposure compensation that is usually expressed in EV. In that usage, EV refers to the difference between the exposure indicated by the camera's metering and the actual exposure: for example, an exposure compensation of +1 EV means one stop more than the metered exposure, i.e. the actual exposure is twice the metered one.
In the embodiment of the application, when the exposure compensation strategy is preset, the EV corresponding to the determined reference exposure may be set to 0; +1 EV then means one stop more exposure, i.e. twice the reference exposure, +2 EV means two stops more, i.e. four times the reference exposure, and -1 EV means one stop less, i.e. half the reference exposure.
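Restating this relationship in formula form (a restatement of the paragraph above, not an equation given in the text): with E_0 the reference exposure amount at 0 EV and ΔEV the compensation offset,

```latex
E(\Delta\mathrm{EV}) = E_{0}\cdot 2^{\Delta\mathrm{EV}},
\qquad E(+1) = 2E_{0},\quad E(+2) = 4E_{0},\quad E(-1) = \tfrac{1}{2}E_{0},\quad E(-6) = \tfrac{1}{64}E_{0}.
```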
For example, if seven frames are captured, the EV values of the preset exposure compensation strategy may be [+1, +1, +1, +1, 0, -3, -6]. The +1 EV frames address noise: temporal noise reduction across the brighter frames suppresses noise while lifting shadow detail. The -6 EV frame addresses highlight overexposure and preserves detail in highlight areas. The 0 EV and -3 EV frames maintain the transition between highlights and shadows so that the bright-to-dark transition stays smooth.
It should be noted that the EV values of the preset exposure compensation strategy may be set individually according to actual needs, or derived from a chosen EV range with equal spacing between values; the embodiments of the application do not limit this.
In a possible implementation form of the embodiment of the application, after an exposure value has been determined by the AEC algorithm from the illuminance of the current shooting scene, the target exposure value of each frame can be determined from that exposure value and the preset exposure compensation strategy.
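A minimal sketch of this step, assuming the target exposure value of a frame is the AEC baseline plus the frame's preset compensation offset; the combination rule and the numeric baseline below are assumptions for illustration.

```python
# Sketch of combining the AEC baseline with a preset compensation strategy.
# The offset list is the example strategy quoted above; the baseline value is made up.

EV_OFFSETS_7_FRAMES = [+1, +1, +1, +1, 0, -3, -6]

def target_exposure_values(base_ev, ev_offsets=EV_OFFSETS_7_FRAMES):
    """Return one target EV per frame: AEC baseline plus the preset offset."""
    return [base_ev + offset for offset in ev_offsets]

# base_ev would come from the camera's auto-exposure metering; 8.0 is illustrative.
print(target_exposure_values(8.0))  # [9.0, 9.0, 9.0, 9.0, 8.0, 5.0, 2.0]
```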
In the embodiment of the present application, the sensitivity of each frame refers to the lowest sensitivity, determined from the current shake degree of the camera module, that is suitable for that shake degree.
It should be noted that, by capturing multiple frames at a relatively low sensitivity and synthesizing them into the target image, the embodiment of the application not only improves the dynamic range and overall brightness of the night-scene photograph but also, by keeping the sensitivity low, effectively suppresses noise in the image and improves its quality.
In the embodiment of the application, the sensitivity of each frame can be determined from the current shake degree of the camera module so that the shooting duration stays within a suitable range. Specifically, if the current shake degree is small, the sensitivity of each frame can be compressed to a small value, which effectively suppresses noise and improves image quality; if the current shake degree is large, the sensitivity of each frame can be raised to shorten the shooting duration and avoid ghosting caused by increased shake.
For example, if the current shake degree of the camera module is determined to be "no shake", the module is probably mounted on a tripod, and the sensitivity can be set to a small value to obtain the highest possible quality, e.g. 100; if it is "slight shake", the device is probably hand-held, and the sensitivity can be set to a larger value to shorten the shooting duration, e.g. 200; if it is "small shake", the number of images to be captured can be reduced further and the sensitivity raised further, e.g. to 220; and if it is "large shake", the shake is considered too large, and the sensitivity can be raised again to shorten the shooting duration, e.g. to 250.
It can be understood that the number of captured images also affects the total shooting duration: the more images are captured, the longer the shooting takes, which can worsen the shake of the camera module during shooting and degrade image quality. Therefore, in another possible implementation form of the embodiment of the present application, the number of frames and the sensitivity of each frame can both be adjusted according to the current shake degree of the camera module, so that the shooting duration stays within a suitable range.
Specifically, if the current shake degree of the camera module is small, more frames can be captured and the sensitivity of each frame compressed to a small value, which effectively suppresses noise and improves the quality of the captured image; if the current shake degree is large, fewer frames can be captured and the sensitivity of each frame raised to shorten the shooting duration.
For example, if the current shake degree is determined to be "no shake", the device is probably in a tripod shooting mode; many frames can then be captured and the sensitivity set to a small value to obtain the highest possible quality, e.g. 17 frames at sensitivity 100. If it is "slight shake", the device is probably hand-held; fewer frames can be captured and the sensitivity set to a larger value to shorten the shooting duration, e.g. 7 frames at sensitivity 200. If it is "small shake", the frame count can be reduced further and the sensitivity raised further, e.g. 5 frames at sensitivity 220. If it is "large shake", the shake is considered too large; the frame count can be reduced again and the sensitivity raised again, e.g. 3 frames at sensitivity 250.
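The example values above can be summarized as a lookup table; the numbers below are just the ones quoted in the example and are not prescriptive.

```python
# Illustrative lookup table for the example values in the preceding paragraphs.

SHAKE_TO_CAPTURE_PLAN = {
    "no shake":     {"frames": 17, "iso": 100},   # tripod-like, favour quality
    "slight shake": {"frames": 7,  "iso": 200},   # hand-held
    "small shake":  {"frames": 5,  "iso": 220},
    "large shake":  {"frames": 3,  "iso": 250},   # keep total capture time short
}

def capture_plan(shake_level):
    return SHAKE_TO_CAPTURE_PLAN[shake_level]

print(capture_plan("slight shake"))  # {'frames': 7, 'iso': 200}
```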
Correspondingly, when the number of frames is determined from the shake degree of the camera module, several groups of exposure compensation strategies can be preset, so that a preset strategy matching the number of frames can be selected.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In practice, when the shake degree of the camera module changes, the number of images to be captured and the reference sensitivity can be changed together, or only one of them changed, to obtain the best scheme. The mapping between the shake degree of the camera module and the number of images to be captured and the reference sensitivity of each frame can be preset according to actual needs.
In the embodiment of the application, after the target exposure value of each frame and the sensitivity of each frame have been determined, the exposure duration of each frame can be determined from its sensitivity and its target exposure value.
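A sketch of how the exposure duration might be solved for once the target exposure value and sensitivity are fixed, assuming a fixed aperture and that exposure scales linearly with ISO and exposure time and doubles per +1 EV; the reference point is invented for illustration and is not specified in the text.

```python
# Minimal sketch: exposure time at a fixed aperture, relative to an assumed
# reference point (ISO 100, 1/10 s at 0 EV).

def exposure_time(target_ev, iso, ref_iso=100, ref_time_s=0.1):
    """Exposure time needed to reach target_ev at the given ISO (fixed aperture)."""
    return ref_time_s * (2.0 ** target_ev) * (ref_iso / iso)

# With ISO 200, a +1 EV frame needs the same time as a 0 EV frame at ISO 100:
print(exposure_time(+1, 200))   # 0.1
print(exposure_time(-6, 200))   # ~0.00078, a very short, highlight-preserving frame
```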
And 102, identifying a preset frame image in the multi-frame image by using a preset image identification model to determine a target area and a non-target area in the preset frame image.
The preset image recognition model is an image segmentation model obtained by training on a large amount of labeled night-scene image data.
The target area is a flat area with abnormal brightness that requires strong noise reduction. The non-target area is the rest of the preset frame image, typically a textured area of normal brightness that contains more detail. That is, in a possible implementation form of the embodiment of the present application, step 102 may include:
and identifying a preset frame image in the multi-frame image by using a preset image identification model so as to determine an area with abnormal brightness in the preset frame image as a target area.
An area with abnormal brightness may be an area whose brightness differs from that of its surroundings by more than a preset threshold. For example, in a night scene the sky is usually much darker than other areas, so the sky area of a night-scene image can be determined to be an area with abnormal brightness, i.e. a target area. A reflective area is usually much brighter than the other areas of the image, so it can likewise be determined to be a target area. Similarly, a halo area near a light source is generally darker than the light-source area itself, so it too can be determined to be an area with abnormal brightness, i.e. a target area.
In practice, the preset brightness-difference threshold can be set according to actual needs; the embodiments of the application do not limit it.
It should be noted that, in night-scene shooting, when multiple frames are captured and synthesized to improve image quality, a flat, dark area such as the sky is usually fused mainly from overexposed frames to obtain sufficient brightness, and overexposed frames usually contain considerable noise, so areas such as the sky end up containing many noise points. If the whole image were denoised with the same noise reduction parameters, either the noise reduction in areas such as the sky would be unsatisfactory, or the detail in detail-rich areas would be damaged. Therefore, in the embodiment of the application, the target area and the non-target area are determined from the preset frame image by the preset image recognition model, and different noise reduction parameters are used to denoise them separately.
In the embodiment of the application, the preset image recognition model can be generated offline and integrated into the electronic device. To train it, a large amount of night-scene image data is collected, the pixels of the collected data are labeled according to a preset rule to produce annotations that distinguish target areas from non-target areas, and the labeled data are trained by deep learning to learn the regularities between pixel values and the corresponding labels, yielding the image recognition model. The trained model analyzes an input image and, from each pixel's value and the relationships among the pixel values, determines each pixel's label, thereby labeling the pixels of the input image and producing a mask that distinguishes the target area from the non-target area.
For example, when labeling the collected night-scene image data, the preset rule may be "label a pixel belonging to the target area as 1 and a pixel belonging to the non-target area as 0". A model trained on data labeled this way analyzes an input image and labels its pixels by the same rule according to their values and the relationships among them: pixels labeled "1" form the target area and pixels labeled "0" form the non-target area.
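The sketch below shows how a per-pixel model output could be turned into the 0/1 mask described above. The inference function here is a toy stand-in, not the trained segmentation network of the embodiment; the 0.5 threshold is also an assumption.

```python
import numpy as np

def target_mask(reference_frame_y, predict_fn, threshold=0.5):
    """Label each pixel 1 (target) or 0 (non-target) from a model's per-pixel score."""
    probs = predict_fn(reference_frame_y)            # per-pixel target probability
    return (probs >= threshold).astype(np.uint8)     # 1 = target area, 0 = non-target area

# Toy stand-in for the trained model: treat very dark pixels as likely sky.
def toy_predict(y):
    return (y < 40).astype(np.float32)

frame_y = np.random.randint(0, 256, (480, 640)).astype(np.float32)  # luminance plane
mask = target_mask(frame_y, toy_predict)
print(mask.shape, int(mask.sum()), "target pixels")
```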
It can be understood that, when the preset frame image of the multiple frames is input into the preset image recognition model, the model recognizes it: it determines the label of each pixel from its value and the relationships among the pixel values, and thereby determines the target area and the non-target area in the preset frame image.
Furthermore, since multiple frames are captured and aligned with one another, a single frame can be selected and recognized by the preset image recognition model, and the target and non-target areas of all the frames obtained from that recognition result; the selected frame is the preset frame image. That is, in a possible implementation form of the embodiment of the present application, before step 102, the method may further include:
and determining the preset frame image according to the exposure value corresponding to each frame image in the multi-frame images.
It should be noted that, in one possible implementation form of the embodiment of the present application, a bright, sharp image may be used as the preset frame image to obtain the best segmentation result. In general, the larger the exposure value, the brighter and sharper the image, so the frame with the largest exposure value can be taken as the preset frame image; in other words, the preset frame image is determined from the exposure value of each frame. If several frames share the largest exposure value, one of them can be selected at random as the preset frame image.
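A small sketch of selecting the preset frame; breaking ties by taking the first maximal-EV frame rather than a random one is an implementation choice made here for determinism.

```python
def pick_preset_frame(frames, exposure_values):
    """Return the frame captured with the largest exposure value."""
    best = max(range(len(frames)), key=lambda i: exposure_values[i])
    return frames[best]

evs = [+1, +1, +1, +1, 0, -3, -6]
frames = [f"frame_{i}" for i in range(len(evs))]
print(pick_preset_frame(frames, evs))  # frame_0: the first of the +1 EV frames
```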
And 103, respectively performing noise reduction processing on a target region and a non-target region of the multi-frame image by adopting different noise reduction parameter values to generate a target image, wherein the noise reduction parameter value corresponding to the target region is larger than the noise reduction parameter value corresponding to the non-target region.
In the embodiment of the application, after the target area and the non-target area in the multi-frame image are determined by using the preset image recognition model, different noise reduction parameter values can be adopted according to the characteristics of the target area and the non-target area to perform noise reduction processing on the target area and the non-target area respectively.
It should be noted that the target area contains relatively strong noise and little detail, while the non-target area contains relatively weak noise and abundant detail. Noise reduction inevitably blurs image detail, and that blurring can cancel out the quality gained by removing noise. Therefore, when denoising the multiple frames, a larger noise reduction parameter value (for example, a higher noise reduction strength) can be used for the target area to remove its heavy noise and achieve the best denoising effect, and a smaller value (for example, a lower strength) for the non-target area, so that noise is reduced while the damage to image detail is limited, improving image quality.
In a possible implementation form of the embodiment of the present application, the multiple frames may be denoised by combining spatial noise reduction with temporal noise reduction. Spatial noise reduction exploits spatial correlation within a single frame; temporal noise reduction exploits correlation between frames over time. The most common spatial filter is a low-pass filter, which removes the high-frequency part of the image signal; but because edges and transitions also lie in the high-frequency region, they are easily blurred and the detail concentrated at high frequencies is damaged. Moreover, spatial noise reduction ignores temporal information, and since noise at the same position is random from frame to frame, the content of adjacent frames can change after denoising.
The most common temporal method is multi-frame averaging. Image content is strongly correlated between frames and changes continuously, whereas noise appears randomly from frame to frame without correlation, so temporal noise reduction exploits this property to remove noise effectively while protecting image detail. However, if large shake occurs while the frames are captured, plain temporal noise reduction is prone to matching failures or errors, leaving residual noise or ghosting. Denoising the multiple frames with a mixture of spatial and temporal noise reduction therefore suppresses noise effectively while preserving detail such as edges and textures.
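A compact sketch of the mixed temporal/spatial idea with region-dependent strength. Gaussian smoothing stands in for whatever spatial filter an implementation would actually use, the frames are assumed to be already aligned, and the sigma values are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(aligned_frames, target_mask, sigma_target=2.0, sigma_other=0.5):
    """Temporal average over aligned frames, then region-dependent spatial smoothing."""
    temporal = np.mean(np.stack(aligned_frames, axis=0), axis=0)   # temporal noise reduction
    strong = gaussian_filter(temporal, sigma=sigma_target)         # heavy NR for the target area
    light = gaussian_filter(temporal, sigma=sigma_other)           # light NR elsewhere
    m = target_mask.astype(np.float32)
    return m * strong + (1.0 - m) * light                          # blend by region mask

frames = [np.random.rand(480, 640).astype(np.float32) for _ in range(7)]
mask = np.zeros((480, 640), dtype=np.uint8)
mask[:200, :] = 1                                                  # pretend the top is sky
print(denoise(frames, mask).shape)                                 # (480, 640)
```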
With the night scene image processing method provided by the embodiments of the application, multiple frames of images are captured sequentially according to a preset exposure compensation mode; a preset frame image among them is recognized by a preset image recognition model to determine a target area and a non-target area; and the target area and the non-target area of the multiple frames are denoised with different noise reduction parameter values to generate a target image, the value for the target area being larger than that for the non-target area. By segmenting the preset frame image with the recognition model and denoising the target and non-target areas separately according to their characteristics, the overall noise reduction effect of the image and the cleanness of the target area are guaranteed while the detail of the non-target area is preserved, which improves the quality of the captured image and the user experience.
In a possible implementation form of the present application, the target area may be feathered to determine a transition area between the target area and the non-target area, and the target area, the non-target area and the transition area may then be denoised with different noise reduction parameter values, so that the noise reduction effect transitions naturally and image quality is further improved.
Another night scene image processing method provided in the embodiment of the present application is further described below with reference to fig. 2.
Fig. 2 is a schematic flow chart of another night-scene image processing method according to an embodiment of the present application.
As shown in fig. 2, the night scene image processing method includes the following steps:
step 201, sequentially collecting multiple frames of images according to a preset exposure compensation mode.
Step 202, a preset image recognition model is used for recognizing a preset frame image in the multi-frame image so as to determine a target area and a non-target area in the preset frame image.
For the detailed implementation and principle of steps 201-202, reference may be made to the description of the above embodiments, which is not repeated here.
Step 203, performing feathering processing on the target region to determine a transition region between the target region and the non-target region.
Feathering blurs the edge of an image so that the edge becomes hazy. Feathering the target area therefore means blurring its edge so that the target area transitions naturally into the non-target area.
It should be noted that the extent of the hazy edge, i.e. the size of the transition area, can be controlled by adjusting the feathering radius: the larger the feathering radius, the wider the hazy band and the larger the transition area; the smaller the radius, the narrower the band and the smaller the transition area.
In the embodiment of the present application, feathering the target area means re-determining, by some method, the pixel values of the pixels within the feathering radius near the edge of the target area. For example, feathering by mean smoothing re-assigns each such pixel the mean of its neighborhood: if the neighborhood size is 11 x 11, pixel A has value 100, and the mean of the pixel values in its 11 x 11 neighborhood is 85, then the value of pixel A after feathering is 85.
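One way to locate the transition area is to mean-smooth the binary target mask itself, as sketched below; treating pixels whose feathered value lies strictly between 0 and 1 as the transition band is an assumption of this sketch rather than a rule from the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def feather_mask(binary_mask, radius=5):
    """Mean-smooth the 0/1 target mask with a (2*radius+1)^2 window (11x11 here)."""
    size = 2 * radius + 1
    soft = uniform_filter(binary_mask.astype(np.float32), size=size)
    transition = (soft > 1e-3) & (soft < 1.0 - 1e-3)   # edge band between the two areas
    return soft, transition

mask = np.zeros((100, 100), dtype=np.uint8)
mask[:50, :] = 1                                       # top half is the target area
soft, transition = feather_mask(mask)
print(int(transition.sum()), "transition pixels")
```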
And 204, respectively performing noise reduction processing on the target area, the transition area and the non-target area of the multi-frame image by adopting different noise reduction parameter values to generate a target image.
In the embodiment of the application, after the transition area between the target area and the non-target area has been determined, the target area, the transition area and the non-target area of the multiple frames can be denoised with different noise reduction parameter values to generate the target image. The noise reduction parameter value of the target area is larger than that of the transition area, and the value of the transition area is larger than that of the non-target area; the value for each area can be determined from the noise level of the image so as to obtain the best denoising effect.
Furthermore, the noise level of the captured frames is related to the current shooting scene: the sensitivity used, the illuminance of the scene and the amount of detail in it all affect the noise in the captured images. For example, the higher the sensitivity, the higher the noise level in the captured images; and the more detail a scene contains, the less noise the human eye perceives. A reference noise reduction parameter can therefore be determined from the current shooting scene, and the noise reduction parameter value of each area then derived from it. That is, in a possible implementation form of the embodiment of the present application, before step 204, the method may further include:
determining a reference noise reduction parameter according to the current shooting scene;
and determining the current noise reduction parameter values respectively corresponding to the target area, the transition area and the non-target area according to the preset weight values corresponding to the areas and the reference noise reduction parameters.
It should be noted that the current shooting scene is closely related to the noise level of the captured frames, so the noise level of the multiple frames can be estimated from the current shooting scene and a reference noise reduction parameter adapted to that estimate can be determined.
For example, if the current sensitivity is low or the scene to be shot is rich in detail, the noise level of the captured frames can be assumed to be low and the reference noise reduction parameter set to a small value; if the current sensitivity is high or the scene contains little detail, the noise level can be assumed to be high and the reference noise reduction parameter set to a larger value.
In a possible implementation form of the embodiment of the application, a noise reduction weight can be preset for each area, and the noise reduction parameter value of each area determined from its weight and the reference noise reduction parameter. For example, the non-target area may be denoised with the reference parameter itself, i.e. its weight is 1; the weight of the target area must be greater than 1 so that its parameter value exceeds that of the non-target area, e.g. 1.5; and the weight of the transition area must put its parameter value between those of the target and non-target areas, e.g. 1.2.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In practice, the weights of the areas can be preset according to the principle that the noise reduction parameter value of the target area is greater than that of the transition area, which in turn is greater than that of the non-target area, so as to obtain the best denoising effect.
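Putting the example weights together (1.5 for the target area, 1.2 for the transition area, 1.0 for the non-target area), the per-area parameter values could be derived as below; how the reference strength is estimated from the scene is left as a placeholder here.

```python
REGION_WEIGHTS = {"target": 1.5, "transition": 1.2, "non_target": 1.0}  # example weights from the text

def region_noise_params(reference_strength, weights=REGION_WEIGHTS):
    """Scale the scene-dependent reference strength by each area's preset weight."""
    return {area: reference_strength * w for area, w in weights.items()}

print(region_noise_params(2.0))  # {'target': 3.0, 'transition': 2.4, 'non_target': 2.0}
```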
With the night scene image processing method provided by the embodiments of the application, multiple frames of images are captured sequentially according to a preset exposure compensation mode; a preset frame image among them is recognized by a preset image recognition model to determine a target area and a non-target area; the target area is feathered to determine a transition area between the target area and the non-target area; and the target area, the transition area and the non-target area of the multiple frames are denoised with different noise reduction parameter values to generate a target image, the value for the target area being larger than that for the transition area and the value for the transition area larger than that for the non-target area. By segmenting the preset frame image with the recognition model, feathering the identified target area to determine the transition area, and denoising the three areas separately, the overall noise reduction effect of the image and the cleanness of the target area are guaranteed, the detail of the non-target area is preserved, and the noise reduction effect transitions naturally between the target and non-target areas, which further improves the quality of the captured image and the user experience.
In order to implement the above embodiments, the present application further provides a night scene image processing apparatus.
Fig. 3 is a schematic structural diagram of a night scene image processing apparatus according to an embodiment of the present application.
As shown in fig. 3, the night-scene image processing apparatus 30 includes:
the acquisition module 31 is configured to sequentially acquire multiple frames of images according to a preset exposure compensation mode;
a first determining module 32, configured to perform recognition processing on a preset frame image in the multiple frame images by using a preset image recognition model, so as to determine a target area and a non-target area in the preset frame image;
the noise reduction module 33 is configured to perform noise reduction processing on a target area and a non-target area of the multi-frame image respectively by using different noise reduction parameter values to generate a target image, where the noise reduction parameter value corresponding to the target area is greater than the noise reduction parameter value corresponding to the non-target area.
In practical use, the night-scene image processing apparatus provided in the embodiment of the present application may be configured in any electronic device to execute the foregoing night-scene image processing method.
The night scene image processing device provided by the embodiment of the application can sequentially collect multiple frames of images according to a preset exposure compensation mode, recognize a preset frame image in the multiple frames of images by using a preset image recognition model to determine a target area and a non-target area in the preset frame image, and then perform noise reduction processing on the target area and the non-target area of the multiple frames of images respectively by using different noise reduction parameter values to generate a target image, wherein the noise reduction parameter value corresponding to the target area is larger than that corresponding to the non-target area. Therefore, the preset frame image is segmented by using the preset image recognition model, and noise reduction processing can then be performed on the target area and the non-target area respectively with different noise reduction parameter values according to the characteristics of each area, so that the overall noise reduction effect of the image and the purity of the target area are ensured, the detail information of the non-target area is better preserved, the quality of the shot image is improved, and the user experience is improved.
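For illustration only, a noise reduction module of this kind could apply the different strengths by denoising the fused frame twice and blending the two results with the target-area mask, as sketched below; OpenCV's non-local-means is merely a stand-in for whatever noise reduction algorithm the device actually uses, and the h values are assumptions.

```python
# Illustrative stand-in for the noise reduction module: blend a strongly
# denoised copy (target area) with a lightly denoised copy (non-target area).
# Algorithm choice and h values are assumptions, not part of the application.
import cv2
import numpy as np

def denoise_by_region(image: np.ndarray, target_mask: np.ndarray,
                      h_target: float = 15.0, h_non_target: float = 10.0) -> np.ndarray:
    """image: uint8 BGR frame; target_mask: float32 mask in [0, 1]."""
    strong = cv2.fastNlMeansDenoisingColored(image, None, h_target, h_target, 7, 21)
    light = cv2.fastNlMeansDenoisingColored(image, None, h_non_target, h_non_target, 7, 21)
    m = target_mask[..., None]            # broadcast the mask over colour channels
    return (strong * m + light * (1.0 - m)).astype(np.uint8)
```

If the mask has been feathered as described above, the same blend also produces intermediate strengths in the transition area without any extra step.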
In one possible implementation form of the present application, the night-scene image processing apparatus 30 further includes:
and the second determining module is used for determining the preset exposure compensation mode according to the illuminance of the current shooting scene and the current shaking degree of the camera module.
Further, in another possible implementation form of the present application, the second determining module is specifically configured to:
determining a target exposure value of each frame of image in the multi-frame images according to the illuminance of the current shooting scene;
determining the sensitivity of each frame of image according to the current jitter degree of the camera module;
and determining the exposure time of each frame of image according to the sensitivity of each frame of image and the target exposure value of each frame of image (an illustrative sketch of this calculation is given below).
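Purely as an illustration of this arithmetic, the sketch below uses the standard exposure-value relation EV = log2(N²/t) referenced to ISO 100; the fixed aperture, the ISO limits and the shake-to-ISO mapping are all assumptions rather than values taken from the embodiments.

```python
# Hedged sketch of deriving per-frame shutter time from a target exposure
# value (referenced to ISO 100) and a sensitivity chosen from the shake level.
# Aperture, ISO limits and the shake-to-ISO mapping are assumed values.
import math

APERTURE = 1.8  # typical phone-camera f-number (assumption)

def sensitivity_from_shake(shake_level: float) -> int:
    """More shake -> allow a higher ISO so the shutter time can stay short."""
    return int(min(3200, max(100, 100 * (1 + 4 * shake_level))))

def exposure_time(target_ev: float, iso: int, aperture: float = APERTURE) -> float:
    """Solve EV = log2(aperture**2 / t) - log2(iso / 100) for the shutter time t."""
    return aperture ** 2 / 2 ** (target_ev + math.log2(iso / 100))

iso = sensitivity_from_shake(0.3)                  # moderate hand shake
print(iso, exposure_time(target_ev=2.0, iso=iso))  # dim night scene around EV 2
```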
Further, in another possible implementation form of the present application, the night-scene image processing apparatus 30 further includes:
and the third determining module is used for determining the preset frame image according to the exposure value corresponding to each frame image in the multi-frame images.
Further, in another possible implementation form of the present application, the night-scene image processing apparatus 30 further includes:
and the fourth determination module is used for performing feathering processing on the target area so as to determine a transition area between the target area and the non-target area.
Correspondingly, the noise reduction module 33 is specifically configured to:
and respectively carrying out noise reduction processing on the target area, the transition area and the non-target area of the multi-frame image by adopting different noise reduction parameter values.
Further, in another possible implementation form of the present application, the night-scene image processing apparatus 30 further includes:
the fifth determining module is used for determining a reference noise reduction parameter according to the current shooting scene;
and the sixth determining module is used for determining the noise reduction parameter values respectively corresponding to the target area and the non-target area at present according to the preset weight values corresponding to the areas and the reference noise reduction parameters.
Further, in another possible implementation form of the present application, the first determining module 32 is specifically configured to:
and identifying a preset frame image in the multi-frame image by using a preset image recognition model, so as to determine an area with abnormal brightness in the preset frame image as a target area (a rough illustrative sketch of this kind of brightness-based segmentation is given below).
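As a rough stand-in for this segmentation (the embodiments rely on a preset image recognition model, not a fixed threshold), an "abnormal brightness" area could be approximated by thresholding the luminance at a high percentile; the percentile below is an arbitrary assumption.

```python
# Rough threshold-based approximation of the "abnormal brightness" target
# area. The embodiments use a preset image recognition model instead; the
# percentile here is an arbitrary assumption.
import cv2
import numpy as np

def abnormal_brightness_mask(bgr: np.ndarray, percentile: float = 98.0) -> np.ndarray:
    """Return a uint8 mask (255 = target area) for unusually bright pixels."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    threshold = np.percentile(gray, percentile)   # top few percent of pixel values
    return (gray >= threshold).astype(np.uint8) * 255
```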
It should be noted that the above explanation of the embodiment of the night-scene image processing method shown in fig. 1 and fig. 2 is also applicable to the night-scene image processing apparatus 30 of this embodiment, and is not repeated here.
The night scene image processing device provided by the embodiment of the application can sequentially acquire multiple frames of images according to a preset exposure compensation mode, recognize a preset frame image in the multiple frames of images by using a preset image recognition model to determine a target area and a non-target area in the preset frame image, then perform feathering on the target area to determine a transition area between the target area and the non-target area, and further perform noise reduction processing on the target area, the transition area and the non-target area of the multiple frames of images respectively by using different noise reduction parameter values to generate the target image, wherein the noise reduction parameter value corresponding to the target area is greater than that corresponding to the transition area, and the noise reduction parameter value corresponding to the transition area is greater than that corresponding to the non-target area. Therefore, by segmenting the preset frame image with the preset image recognition model, feathering the identified target area to determine the transition area, and then applying different noise reduction parameter values to the target area, the transition area and the non-target area respectively, the overall noise reduction effect of the image and the purity of the target area are ensured, the detail information of the non-target area is well preserved, a natural transition of the noise reduction effect between the target area and the non-target area is achieved, the quality of the shot image is further improved, and the user experience is improved.
In order to implement the above embodiments, the present application further provides an electronic device.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 4, the electronic device 200 includes:
a memory 210 and a processor 220, a bus 230 connecting different components (including the memory 210 and the processor 220), wherein the memory 210 stores a computer program, and when the processor 220 executes the program, the night scene image processing method according to the embodiment of the present application is implemented.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 280 having a set (at least one) of program modules 270 may be stored in, for example, the memory 210. Such program modules 270 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment. The program modules 270 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with electronic device 200, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 293. As shown, the network adapter 293 communicates with the other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the electronic device of this embodiment, reference is made to the foregoing explanation of the night scene image processing method in the embodiment of the present application, and details are not described here again.
The electronic device provided by the embodiment of the application can execute the night scene image processing method described above: multiple frames of images are sequentially acquired according to a preset exposure compensation mode; a preset frame image in the multiple frames of images is recognized by using a preset image recognition model to determine a target area and a non-target area in the preset frame image; and noise reduction processing is further performed on the target area and the non-target area of the multiple frames of images respectively by using different noise reduction parameter values to generate a target image, wherein the noise reduction parameter value corresponding to the target area is greater than that corresponding to the non-target area. Therefore, the preset frame image is segmented by using the preset image recognition model, and noise reduction processing can then be performed on the target area and the non-target area respectively with different noise reduction parameter values according to the characteristics of each area, so that the overall noise reduction effect of the image and the purity of the target area are ensured, the detail information of the non-target area is better preserved, the quality of the shot image is improved, and the user experience is improved.
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium.
The computer-readable storage medium stores a computer program which, when executed by a processor, implements the night scene image processing method according to the embodiments of the present application.
In order to implement the foregoing embodiments, a further embodiment of the present application provides a computer program, which when executed by a processor, implements the night scene image processing method according to the embodiments of the present application.
In an alternative implementation, the embodiments may be implemented in any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the internet using an internet service provider).
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A night scene image processing method is characterized by comprising the following steps:
sequentially collecting multiple frames of images according to a preset exposure compensation mode;
identifying a preset frame image in the multi-frame image by using a preset image identification model to determine a target area and a non-target area in the preset frame image, wherein the target area in the preset frame image is an area with abnormal brightness in the preset frame image;
and respectively carrying out noise reduction processing on a target area and a non-target area of the multi-frame image by adopting different noise reduction parameter values to generate a target image, wherein the noise reduction parameter value corresponding to the target area is larger than the noise reduction parameter value corresponding to the non-target area.
2. The method of claim 1, wherein, before sequentially acquiring the multiple frames of images according to the preset exposure compensation mode, the method further comprises:
and determining the preset exposure compensation mode according to the illuminance of the current shooting scene and the current jitter degree of the camera module.
3. The method as claimed in claim 2, wherein the determining the preset exposure compensation mode according to the illuminance of the current shooting scene and the current shake degree of the camera module comprises:
determining a target exposure value of each frame of image in the multi-frame images according to the illuminance of the current shooting scene;
determining the sensitivity of each frame of image according to the current jitter degree of the camera module;
and determining the exposure time of each frame of image according to the sensitivity of each frame of image and the target exposure value of each frame of image.
4. The method as claimed in claim 1, wherein before the identifying process for the preset frame image of the multi-frame images, the method further comprises:
and determining the preset frame image according to the exposure value corresponding to each frame image in the multi-frame images.
5. The method of claim 1, wherein, after determining the target area and the non-target area in the preset frame image, the method further comprises:
performing feathering processing on the target area to determine a transition area between the target area and the non-target area;
the respectively denoising the target region and the non-target region of the multi-frame image by adopting different denoising parameter values comprises:
and respectively carrying out noise reduction processing on the target area, the transition area and the non-target area of the multi-frame image by adopting different noise reduction parameter values.
6. The method according to any one of claims 1 to 5, wherein before performing noise reduction processing on the target region and the non-target region of the multi-frame image respectively by using different noise reduction parameter values, the method further comprises:
determining a reference noise reduction parameter according to the current shooting scene;
and determining the current noise reduction parameter values respectively corresponding to the target area and the non-target area according to the preset weight values corresponding to the areas and the reference noise reduction parameters.
7. A night scene image processing apparatus, comprising:
the acquisition module is used for sequentially acquiring multi-frame images according to a preset exposure compensation mode;
the determining module is used for identifying a preset frame image in the multi-frame image by using a preset image identification model so as to determine a target area and a non-target area in the preset frame image, wherein the target area in the preset frame image is an area with abnormal brightness in the preset frame image;
and the noise reduction module is used for respectively performing noise reduction processing on a target region and a non-target region of the multi-frame image by adopting different noise reduction parameter values to generate a target image, wherein the noise reduction parameter value corresponding to the target region is larger than the noise reduction parameter value corresponding to the non-target region.
8. An electronic device, comprising: a camera module, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the computer program, the night scene image processing method according to any one of claims 1-6 is implemented.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the night scene image processing method according to any one of claims 1 to 6.
CN201811399541.0A 2018-11-22 2018-11-22 Night scene image processing method, device, electronic device and storage medium Active CN109348089B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811399541.0A CN109348089B (en) 2018-11-22 2018-11-22 Night scene image processing method, device, electronic device and storage medium
PCT/CN2019/101430 WO2020103503A1 (en) 2018-11-22 2019-08-19 Night scene image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811399541.0A CN109348089B (en) 2018-11-22 2018-11-22 Night scene image processing method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN109348089A CN109348089A (en) 2019-02-15
CN109348089B true CN109348089B (en) 2020-05-22

Family

ID=65317460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811399541.0A Active CN109348089B (en) 2018-11-22 2018-11-22 Night scene image processing method, device, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN109348089B (en)
WO (1) WO2020103503A1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109348089B (en) * 2018-11-22 2020-05-22 Oppo广东移动通信有限公司 Night scene image processing method, device, electronic device and storage medium
CN109873953A (en) 2019-03-06 2019-06-11 深圳市道通智能航空技术有限公司 Image processing method, shooting at night method, picture processing chip and aerial camera
CN110136085B (en) * 2019-05-17 2022-03-29 凌云光技术股份有限公司 Image noise reduction method and device
CN110264473B (en) * 2019-06-13 2022-01-04 Oppo广东移动通信有限公司 Image processing method and device based on multi-frame image and electronic equipment
CN110246101B (en) * 2019-06-13 2023-03-28 Oppo广东移动通信有限公司 Image processing method and device
CN110443766B (en) * 2019-08-06 2022-05-31 厦门美图之家科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN112651909B (en) * 2019-10-10 2024-03-15 北京字节跳动网络技术有限公司 Image synthesis method, device, electronic equipment and computer-readable storage medium
CN111191498A (en) * 2019-11-07 2020-05-22 腾讯科技(深圳)有限公司 Behavior recognition method and related product
CN112907454B (en) * 2019-11-19 2023-08-08 杭州海康威视数字技术股份有限公司 Method, device, computer equipment and storage medium for acquiring image
CN111028189B (en) * 2019-12-09 2023-06-27 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111147695B (en) * 2019-12-31 2022-05-13 Oppo广东移动通信有限公司 Image processing method, image processor, photographing device and electronic device
CN111246091B (en) * 2020-01-16 2021-09-03 北京迈格威科技有限公司 Dynamic automatic exposure control method and device and electronic equipment
CN111583144B (en) * 2020-04-30 2023-08-25 深圳市商汤智能传感科技有限公司 Image noise reduction method and device, electronic equipment and storage medium
CN113920009B (en) * 2020-07-10 2024-11-26 宁波舜宇光电信息有限公司 Stray light quantification method based on high dynamic image synthesis
CN114445414B (en) * 2020-11-04 2025-09-23 深圳绿米联创科技有限公司 Method, device and electronic equipment for predicting indoor space layout
CN114463190B (en) * 2020-11-09 2025-07-29 伟光有限公司 Image noise reduction method, device, electronic equipment and computer readable storage medium
CN114511450B (en) * 2020-11-17 2025-05-16 北京小米移动软件有限公司 Image denoising method, image denoising device, terminal and storage medium
CN112418322B (en) * 2020-11-24 2024-08-06 苏州爱医斯坦智能科技有限公司 Image data processing method and device, electronic equipment and storage medium
CN112785535B (en) * 2020-12-30 2024-07-30 北京迈格威科技有限公司 Method and device for acquiring night scene light track image and hand-held terminal
CN112926576B (en) * 2021-01-26 2024-12-27 宁波方太厨具有限公司 Method, system, electronic device and storage medium for detecting oil fume concentration
CN112989933B (en) * 2021-02-05 2024-09-20 宁波方太厨具有限公司 Recognition method, system, electronic equipment and storage medium for lampblack concentration
CN112987008A (en) * 2021-02-09 2021-06-18 上海眼控科技股份有限公司 Relative depth measuring method, device, equipment and storage medium
CN115482156A (en) * 2021-05-31 2022-12-16 北京小米移动软件有限公司 Image processing method and device, image processing device and storage medium
CN115696019A (en) * 2021-07-30 2023-02-03 哲库科技(上海)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN116095509B (en) * 2021-11-05 2024-04-12 荣耀终端有限公司 Method, device, electronic device and storage medium for generating video frames
CN114581327B (en) * 2022-03-03 2025-05-06 锐芯微电子股份有限公司 Image processing method and device, storage medium, and electronic device
CN114697469B (en) * 2022-03-15 2024-02-06 华能大理风力发电有限公司洱源分公司 Video processing method and device suitable for photovoltaic power station and electronic equipment
CN114928694A (en) * 2022-04-25 2022-08-19 深圳市慧鲤科技有限公司 Image acquisition method and apparatus, device, and medium
CN114783454B (en) * 2022-04-27 2024-06-04 北京百度网讯科技有限公司 A model training, audio noise reduction method, device, equipment and storage medium
CN115334250B (en) * 2022-08-09 2024-03-08 阿波罗智能技术(北京)有限公司 Image processing method and device and electronic equipment
CN115379130B (en) * 2022-08-25 2024-03-29 上海联影医疗科技股份有限公司 Automatic exposure control system, method, device and storage medium
CN115619695A (en) * 2022-10-28 2023-01-17 西安闻泰信息技术有限公司 Image processing method, device, electronic device and storage medium
CN115767259A (en) * 2022-11-11 2023-03-07 北京集创北方科技股份有限公司 Image acquisition method and device, electronic equipment and storage medium
CN116977411B (en) * 2022-12-01 2024-03-19 开立生物医疗科技(武汉)有限公司 Endoscope moving speed estimation method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812618A (en) * 2016-03-17 2016-07-27 浙江大华技术股份有限公司 Motion detection method and motion detection device
CN108111762A (en) * 2017-12-27 2018-06-01 努比亚技术有限公司 A kind of image processing method, terminal and computer readable storage medium
CN108833804A (en) * 2018-09-20 2018-11-16 Oppo广东移动通信有限公司 Imaging method, device and electronic equipment

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065255B2 (en) * 2002-05-06 2006-06-20 Eastman Kodak Company Method and apparatus for enhancing digital images utilizing non-image data
US8023733B2 (en) * 2006-06-08 2011-09-20 Panasonic Corporation Image processing device, image processing method, image processing program, and integrated circuit
US8355059B2 (en) * 2009-02-06 2013-01-15 Canon Kabushiki Kaisha Image capturing apparatus and control method thereof
KR101161979B1 (en) * 2010-08-19 2012-07-03 삼성전기주식회사 Image processing apparatus and method for night vision
CN105208281B (en) * 2015-10-09 2019-12-03 Oppo广东移动通信有限公司 A method and device for shooting a night scene
CN105608676B (en) * 2015-12-23 2018-06-05 浙江宇视科技有限公司 The Enhancement Method and device of a kind of video image
CN105681665B (en) * 2016-02-29 2018-12-04 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN106303250A (en) * 2016-08-26 2017-01-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN106375676A (en) * 2016-09-20 2017-02-01 广东欧珀移动通信有限公司 Camera control method and device for terminal equipment, and terminal equipment
CN106412385B (en) * 2016-10-17 2019-06-07 湖南国科微电子股份有限公司 A kind of video image 3 D noise-reduction method and device
CN106875347A (en) * 2016-12-30 2017-06-20 努比亚技术有限公司 A kind of picture processing device and method
CN107770438B (en) * 2017-09-27 2019-11-29 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN107948538B (en) * 2017-11-14 2020-02-21 Oppo广东移动通信有限公司 Imaging method, device, mobile terminal and storage medium
CN108629747B (en) * 2018-04-25 2019-12-10 腾讯科技(深圳)有限公司 Image enhancement method and device, electronic equipment and storage medium
CN108683859A (en) * 2018-08-16 2018-10-19 Oppo广东移动通信有限公司 Photographing optimization method and device, storage medium and terminal equipment
CN109348089B (en) * 2018-11-22 2020-05-22 Oppo广东移动通信有限公司 Night scene image processing method, device, electronic device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812618A (en) * 2016-03-17 2016-07-27 浙江大华技术股份有限公司 Motion detection method and motion detection device
CN108111762A (en) * 2017-12-27 2018-06-01 努比亚技术有限公司 A kind of image processing method, terminal and computer readable storage medium
CN108833804A (en) * 2018-09-20 2018-11-16 Oppo广东移动通信有限公司 Imaging method, device and electronic equipment

Also Published As

Publication number Publication date
CN109348089A (en) 2019-02-15
WO2020103503A1 (en) 2020-05-28

Similar Documents

Publication Publication Date Title
CN109348089B (en) Night scene image processing method, device, electronic device and storage medium
CN109005366B (en) Camera module night scene camera processing method, device, electronic device and storage medium
CN110062160B (en) Image processing method and device
CN109218628B (en) Image processing method, device, electronic device and storage medium
CN109194882B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110072052B (en) Image processing method and device based on multi-frame image and electronic equipment
CN109218627B (en) Image processing method, image processing device, electronic equipment and storage medium
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110290289B (en) Image noise reduction method, device, electronic device and storage medium
CN108335279B (en) Image fusion and HDR imaging
CN110191291B (en) Image processing method and device based on multi-frame images
CN109618102B (en) Focusing processing method, device, electronic device and storage medium
CN109361853B (en) Image processing method, device, electronic device and storage medium
US11233948B2 (en) Exposure control method and device, and electronic device
WO2020038087A1 (en) Method and apparatus for photographic control in super night scene mode and electronic device
WO2020207261A1 (en) Image processing method and apparatus based on multiple frames of images, and electronic device
CN110248106A (en) Image noise reduction method, device, electronic device and storage medium
CN113298735A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110264420A (en) Image processing method and device based on multiple image
CN108234858A (en) Image virtualization processing method, device, storage medium and electronic equipment
HK40001708A (en) Camera module night scene camera processing method, device, electronic equipment and storage medium
HK40001708B (en) Camera module night scene camera processing method, device, electronic equipment and storage medium
HK1261061A1 (en) Image processing method, apparatus, electronic device and storage medium
HK1261061B (en) Image processing method, apparatus, electronic device and storage medium
HK40001704B (en) Exposure control method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant