
CN110838088B - Multi-frame noise reduction method and device based on deep learning and terminal equipment - Google Patents


Info

Publication number
CN110838088B
CN110838088B (application CN201810931283.XA; published as CN110838088A)
Authority
CN
China
Prior art keywords
frames
preset number
image
original images
frame
Prior art date
Legal status
Active
Application number
CN201810931283.XA
Other languages
Chinese (zh)
Other versions
CN110838088A (en)
Inventor
李松南
马岚
Current Assignee
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd
Priority to CN201810931283.XA
Publication of CN110838088A
Application granted
Publication of CN110838088B
Status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/14 - Picture signal circuitry for video frequency region
    • H04N5/21 - Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems


Abstract

The invention is applicable to the technical field of image processing and provides a multi-frame noise reduction method and device based on deep learning. The method comprises the following steps: acquiring original images of a first preset number of frames; selecting one frame among the original images of the first preset number of frames as a reference frame image; performing preliminary spatial domain noise reduction on the original images of the first preset number of frames through a spatial domain convolutional neural network; aligning each of the other original images with the reference frame image through a motion compensation convolutional neural network, so that all the original images of the first preset number of frames are aligned; and splicing all the aligned original images and performing convolution processing with a time-space domain convolutional neural network to generate a noise-reduced target image. The invention reduces the noise intensity in images shot by an intelligent terminal and improves image quality.

Description

Multi-frame noise reduction method and device based on deep learning and terminal equipment
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a multi-frame noise reduction method and device based on deep learning and terminal equipment.
Background
With the popularity of intelligent terminal devices such as mobile phones and digital cameras, the continuous improvement of their camera hardware, and the convenience of shooting with them, more and more people use intelligent terminal devices to shoot, edit and share image and video content.
However, the picture quality of images shot by intelligent terminal devices is affected by various factors such as noise, resolution, sharpness and color fidelity, among which noise is a critical one. Noise in images shot by intelligent terminal devices has various sources, such as photon shot noise, dark current noise, dead pixels, fixed pattern noise and readout noise. Photon shot noise is the main source; it arises from a fundamental physical law and will remain in images no matter how hardware technology develops. Therefore, the quality of images shot with intelligent terminal devices inevitably degrades under the influence of noise.
Disclosure of Invention
In view of the above, embodiments of the invention provide a multi-frame noise reduction method and device based on deep learning, and a terminal device, to solve the prior-art problem that images shot with intelligent terminal devices suffer degraded image quality due to the influence of noise.
A first aspect of an embodiment of the present invention provides a multi-frame noise reduction method based on deep learning, including:
acquiring original images of a first preset number of frames;
selecting one frame of original images in the original images of the first preset number of frames as a reference frame image;
performing preliminary spatial domain noise reduction on the original images of the first preset number of frames through a spatial domain convolutional neural network;
performing alignment operation on other original images of each frame in the original images of the first preset number of frames and the reference frame image through a motion compensation convolutional neural network, so that all original images in the original images of the first preset number of frames are aligned;
and splicing all the original images in the aligned original images of the first preset number of frames, and performing convolution processing according to a time-space domain convolution neural network to generate a target image after noise reduction.
Optionally, the selecting one frame of original images in the original images of the first preset number of frames as the reference frame image includes:
sequencing the original images of the first preset number of frames according to the shooting time of each frame of original images in the original images of the first preset number of frames, and selecting the original images of a second preset number of frames positioned in the middle of the sequencing as alternative reference frame images of the second preset number of frames;
respectively performing edge filtering on each frame of the alternative reference frame images, obtaining the edge filtering response values of the pixels of each frame of the alternative reference frame images, and sorting them;
acquiring an average value of pixels with preset proportions, which are in front of an edge filtering response value, in each frame of the alternative reference frame image;
and selecting the candidate reference frame image with the maximum average value as the reference frame image.
Optionally, before acquiring the original images of the first preset number of frames, the method includes:
acquiring image data of the simulation noise and performing pre-training according to the image data of the simulation noise to obtain a pre-training model;
and obtaining image data of the real noise, and carrying out optimization training on the pre-training model according to the image data of the real noise to obtain an optimization training model.
Optionally, the obtaining the image data of the simulation noise and performing pre-training according to the image data of the simulation noise to obtain a pre-training model includes:
and acquiring the image data of the simulation noise, and performing independent pre-training on the time-space domain convolutional neural network, the space domain convolutional neural network and the motion compensation convolutional neural network according to the image data of the simulation noise to obtain a pre-training model.
Optionally, the obtaining image data of the real noise and performing optimization training on the pre-training model according to the image data of the real noise to obtain an optimized training model includes:
acquiring image data of real noise of a fourth preset number of frames, and selecting one frame of image in the image data of the real noise of the fourth preset number of frames as a training reference frame image;
carrying out noise reduction processing on the training reference frame image according to a preset noise reduction algorithm of a fourth preset number of frames to obtain first image data;
selecting image data of a fifth preset number of frames from the image data of the real noise of the fourth preset number of frames as input image data, and performing end-to-end optimization training on the pre-training model by taking the first image data as output image data to obtain an optimization training model; wherein the fourth preset number is greater than the fifth preset number, and the image data of the fifth preset number of frames includes the training reference frame image.
A second aspect of an embodiment of the present invention provides a multi-frame noise reduction device based on deep learning, including:
the first acquisition module is used for acquiring the original images of the first preset number of frames;
the selection module is used for selecting one frame of original images in the original images of the first preset number of frames as a reference frame image;
the noise reduction module is used for performing preliminary spatial domain noise reduction on the original images of the first preset number of frames through a spatial domain convolutional neural network;
the alignment module is used for performing alignment operation on other original images of each frame in the original images of the first preset number of frames and the reference frame image through the motion compensation convolutional neural network so as to align all the original images in the original images of the first preset number of frames;
the splicing module is used for splicing all the original images in the aligned original images of the first preset number of frames, carrying out convolution processing according to the time-space domain convolution neural network and generating a target image after noise reduction.
A third aspect of an embodiment of the present invention provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method described above when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above.
According to the embodiments of the invention, original images of a first preset number of frames are acquired; one of the original images is selected as the reference frame image; preliminary spatial domain noise reduction is performed through a spatial domain convolutional neural network; each of the other original images is aligned with the reference frame image through a motion compensation convolutional neural network so that all the original images are aligned; and all the aligned original images are spliced and convolved by a time-space domain convolutional neural network to generate a noise-reduced target image. This reduces the noise intensity in images shot by the intelligent terminal and improves image quality, particularly under low luminous flux.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a multi-frame noise reduction method based on deep learning according to an embodiment of the invention;
Fig. 2 is a flow chart of a multi-frame noise reduction method based on deep learning according to a second embodiment of the present invention;
fig. 3 is a flow chart of a multi-frame noise reduction method based on deep learning according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a multi-frame noise reduction device based on deep learning according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a selection module according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a multi-frame noise reduction device based on deep learning according to a sixth embodiment of the present invention;
fig. 7 is a schematic diagram of a terminal device according to a seventh embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution of an embodiment of the present invention will be clearly described below with reference to the accompanying drawings in the embodiment of the present invention, and it is apparent that the described embodiment is a part of the embodiment of the present invention, but not all the embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
The term "comprising" in the description of the invention and the claims and in the above figures and any variants thereof is intended to cover a non-exclusive inclusion. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include additional steps or elements not listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used for distinguishing between different objects and not for describing a particular sequential order.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Example 1
As shown in fig. 1, the present embodiment provides a multi-frame noise reduction method based on deep learning, which can be applied to terminal devices such as a mobile phone, a digital camera, a microscope, an astronomical telescope, and the like. The multi-frame noise reduction method based on deep learning provided by the embodiment comprises the following steps:
s101, acquiring original images of a first preset number of frames.
In a specific application, original images of a first preset number of frames are obtained. The first preset number of frames may be selected by the user according to the actual situation, or the terminal device may be set to obtain them automatically; screening conditions may also be set so that original images stored on the current terminal device, on a connected cloud, or in a database are screened and original images of a first preset number of frames meeting the conditions are selected. In this embodiment, the first preset number is set automatically according to the ISO value and may be set to 3 to 6. The first preset number refers to the preset number of images on which this method performs multi-frame noise reduction, and different numbers of images may be used as input. For example, with 3, 5 and 7 frames as input, three multi-frame noise reduction models may be trained to handle medium, low and extremely low illumination intensity respectively, so that more images are used under low illumination and fewer under high illumination, saving hardware processing resources. The original image refers to an image that has not undergone noise reduction processing.
In this embodiment, an RGB three-channel image is selected as the original image; the multi-frame noise reduction method based on deep learning provided in this embodiment also supports Y-channel images and Bayer-format (Raw) images. It should be noted that when the original image is a Bayer-format (Raw) image, the present multi-frame noise reduction method replaces the ISP to implement a complete image processing flow.
S102, selecting one frame of original images in the original images of the first preset number of frames as a reference frame image.
In a specific application, one frame of the original images of the first preset number of frames is selected as the reference frame image, to serve as the standard against which the non-reference frame images are subsequently aligned.
S103, performing preliminary spatial domain noise reduction on the original image of the first preset number of frames through a spatial domain convolutional neural network.
In a specific application, preliminary spatial domain noise reduction is performed on each frame of the original images of the first preset number of frames through a spatial domain convolutional neural network. In this embodiment, spatial domain noise reduction may be performed on each frame through a single-frame convolutional denoising neural network such as a group-wise convolutional neural network, DnCNN, RedNet or WIN, so as to obtain original images with lower noise and thereby improve the effect of the subsequent image alignment operation.
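A single-frame spatial denoiser of the kind referenced here (DnCNN-style residual learning) can be sketched in PyTorch as follows. This is a minimal illustration: the depth, channel width and class name are our own assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

class DnCNNLike(nn.Module):
    """Minimal DnCNN-style single-frame denoiser: the network predicts the
    noise residual, which is subtracted from the noisy input."""
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # residual learning: output = input - predicted noise
        return x - self.body(x)

frames = torch.randn(5, 3, 64, 64)   # a burst of 5 noisy RGB frames
denoised = DnCNNLike()(frames)       # per-frame preliminary spatial denoising
print(denoised.shape)                # torch.Size([5, 3, 64, 64])
```

Each frame of the burst is denoised independently here, matching step S103, before any alignment takes place.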
And S104, performing alignment operation on other original images of each frame in the original images of the first preset number of frames and the reference frame image through a motion compensation convolutional neural network, so that all the original images in the original images of the first preset number of frames are aligned.
In a specific application, taking the reference frame image as the standard, the other original images in the original images of the first preset number of frames are aligned with the reference frame image through a motion compensation convolutional neural network. The other original images of each frame refer to the non-reference frame images, i.e. all frames of the first preset number of frames except the reference frame image. In this embodiment, the alignment of each of the other original images with the reference frame image may be implemented through motion compensation convolutional neural networks such as FlowNet or STN.
S105, all original images in the aligned original images of the first preset number of frames are spliced, convolution processing is carried out according to a time-space domain convolution neural network, and a target image after noise reduction is generated.
In a specific application, all the original images in the aligned original images of the first preset number of frames are spliced, and convolution processing is performed by a time-space domain convolutional neural network to generate the noise-reduced target image. The time-space domain convolutional neural network may be implemented by removing the pooling layers and fully-connected layers from networks such as AlexNet or VGG to accelerate training, and by adding skip connections so that the network performs deep learning on the residual. In this embodiment, features that distinguish noise, multi-frame alignment errors and image content are learned from a large amount of data, which reduces the design difficulty of the multi-frame noise reduction algorithm and improves its effect. The method may also cooperate with the image signal processor (ISP) on the intelligent terminal device so that the noise-reduced target image has better quality. Because ISP noise reduction is single-frame spatial domain noise reduction, it often sacrifices image detail while reducing noise; to cooperate with the multi-frame noise reduction method provided in this embodiment, the noise reduction function at the ISP end should therefore be weakened or turned off to preserve details. Likewise, depending on the ambient light intensity, different numbers of images may be used as input: for example, three multi-frame noise reduction models may be trained with 3, 5 and 7 frames as input to handle medium, low and extremely low illumination intensity respectively.
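The splicing and fusion step (S105) can be sketched as follows. This is a minimal PyTorch illustration under assumptions of our own: the layer count, channel widths and the FusionCNN name are hypothetical, since the patent specifies only that pooling and fully-connected layers are removed and a skip connection is added for residual learning.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Sketch of the time-space domain fusion network: aligned frames are
    spliced along the channel axis, convolved without pooling or
    fully-connected layers, and a skip connection from the reference frame
    makes the network learn a residual correction."""
    def __init__(self, n_frames=5, channels=3, features=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(n_frames * channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, aligned, ref):
        # aligned: (n_frames, C, H, W) pre-denoised, motion-compensated frames
        stacked = aligned.reshape(1, -1, *aligned.shape[-2:])  # splice along channels
        return ref.unsqueeze(0) + self.fuse(stacked)           # skip connection

frames = torch.randn(5, 3, 64, 64)  # 5 aligned, spatially denoised frames
ref = frames[2]                     # reference frame (middle of the burst)
out = FusionCNN()(frames, ref)
print(out.shape)                    # torch.Size([1, 3, 64, 64])
```

Because the skip connection feeds the reference frame through unchanged, the convolutional branch only has to predict the difference between the reference frame and the clean target, which is the residual-learning design the text describes.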
For each of the above cases, the exposure parameters of the image, i.e. the exposure time and ISO, must be determined (the aperture of existing mobile phones is fixed and cannot be adjusted), and a reference value for the exposure parameters is obtained using the automatic exposure function of the ISP. To prevent overexposure in bright light, the reference exposure intensity needs to be turned down appropriately. The low-exposure input image can then be enhanced with an image enhancement algorithm (such as HDR) or a noise reduction model with an image enhancement function, so that the trained and optimized model has image enhancement capability while also preserving image detail. It should be noted that the multi-frame noise reduction method based on deep learning can be used alone or together with an image enhancement module, for example in the following schemes:
Scheme one: multi-frame noise reduction followed by "HDR";
Scheme two: multi-frame noise reduction followed by "night scene enhancement";
Scheme three: multi-frame noise reduction followed by "super resolution";
Scheme four: multi-frame noise reduction followed by "text enhancement".
In this embodiment, original images of a first preset number of frames are acquired; one of the original images is selected as the reference frame image; preliminary spatial domain noise reduction is performed through a spatial domain convolutional neural network; each of the other original images is aligned with the reference frame image through a motion compensation convolutional neural network so that all the original images are aligned; and all the aligned original images are spliced and convolved by a time-space domain convolutional neural network to generate a noise-reduced target image. This reduces the noise intensity in images shot by the intelligent terminal and improves image quality, particularly under low luminous flux.
Example two
As shown in fig. 2, this embodiment is a further illustration of the method steps in embodiment one. In this embodiment, step S102 includes:
s201, sorting the original images of the first preset number of frames according to the shooting time of each frame of original images in the original images of the first preset number of frames, and selecting the original images of the second preset number of frames positioned in the middle of the sorting as the alternative reference frame images of the second preset number of frames.
In a specific application, the original images of the first preset number of frames are sorted according to the shooting time of each frame, and the second preset number of frames located in the middle of the sorting are selected as the alternative reference frame images. The frames in the middle of the sequence have a higher degree of correlation with the frames before and after them, and are therefore suitable as alternative reference frames. For example, if the first preset number is 50 and the second preset number is 4, the alternative reference frame images are the original images in the middle of the sequence (i.e. the 24th, 25th, 26th and 27th). The sorting order can be set according to the actual situation: from the shooting time farthest from the current moment to the nearest, or from the nearest to the farthest.
S202, respectively carrying out edge filtering processing on the candidate reference frame images of each frame, and obtaining edge filtering response values of pixels of the candidate reference frame images of each frame and sequencing the values.
In a specific application, edge filtering (for example with the Sobel operator) is performed on each frame of the alternative reference frame images, the edge filtering response values of the pixels of each frame are obtained, and these response values are sorted.
S203, obtaining an average value of pixels with preset proportion, which are in front of an edge filtering response value in the alternative reference frame image of each frame.
In a specific application, for each frame of the alternative reference frame images, the pixels whose edge filtering response values rank within a preset proportion at the front of the sequence are obtained, and their average value is computed. The preset proportion can be chosen according to the actual situation; for example, if the preset proportion is set to 5% and the response values are sorted from large to small, then the selected pixels are the top 5% of the sequence, i.e. the 5% of pixels with the largest edge filtering response values.
S204, selecting the candidate reference frame image with the maximum average value as the reference frame image.
In a specific application, the alternative reference frame image whose top-ranked preset proportion of pixels has the largest average edge filtering response value is selected as the reference frame image. The edge filtering response value of a pixel reflects the sharpness of the image: the larger the response value, the sharper the image.
In this embodiment, the sharpness of each image is judged effectively and rapidly through edge filtering, and the alternative reference frame image with the largest average value over the top-ranked preset proportion of pixels is computed and selected as the reference frame image. A reference frame with higher correlation to the other non-reference frames and higher sharpness is thereby chosen, which effectively improves the subsequent alignment and fusion operations.
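The reference-frame selection of this embodiment (steps S201-S204) can be sketched as follows. This is a minimal NumPy illustration: the Sobel responses are computed with shifted differences, and the function names and parameter defaults are our own illustrative choices.

```python
import numpy as np

def sharpness_score(img, top_frac=0.05):
    """Mean of the top `top_frac` Sobel edge responses (steps S202-S203)."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    # Sobel x/y responses via shifted differences over the interior region
    gx = (gray[:-2, 2:] + 2 * gray[1:-1, 2:] + gray[2:, 2:]
          - gray[:-2, :-2] - 2 * gray[1:-1, :-2] - gray[2:, :-2])
    gy = (gray[2:, :-2] + 2 * gray[2:, 1:-1] + gray[2:, 2:]
          - gray[:-2, :-2] - 2 * gray[:-2, 1:-1] - gray[:-2, 2:])
    resp = np.sort(np.hypot(gx, gy).ravel())[::-1]   # largest responses first
    k = max(1, int(top_frac * resp.size))
    return resp[:k].mean()

def select_reference(frames, n_candidates=3):
    """Pick the sharpest of the middle `n_candidates` frames (S201, S204)."""
    mid = len(frames) // 2
    lo = max(0, mid - n_candidates // 2)
    cand = range(lo, min(len(frames), lo + n_candidates))
    return max(cand, key=lambda i: sharpness_score(frames[i]))

rng = np.random.default_rng(0)
burst = [rng.random((32, 32, 3)) for _ in range(5)]
burst[2] = np.zeros((32, 32, 3))          # a flat frame has zero edge response
idx = select_reference(burst, n_candidates=3)
print(idx)  # one of the middle frames, never the flat frame 2
```

Averaging only the top fraction of responses, rather than all of them, makes the score track the sharpest edges in the frame instead of being diluted by flat regions.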
Example III
As shown in fig. 3, this embodiment is a further illustration of the method steps in embodiment one. In this embodiment, before step S101, the method further includes:
s106, acquiring image data of the simulation noise and performing pre-training according to the image data of the simulation noise to obtain a pre-training model.
In a specific application, image data with simulated noise is obtained, and the time-space domain convolutional neural network, the spatial domain convolutional neural network and the motion compensation convolutional neural network are each pre-trained independently on this data, yielding the pre-training model of the system. Image data with simulated noise refers to an originally high-quality image that has been processed by a preset method to simulate a low-quality image degraded by various kinds of noise and other factors.
S107, acquiring image data of real noise, and carrying out optimization training on the pre-training model according to the image data of the real noise to obtain an optimization training model.
In a specific application, image data with real noise is obtained, and the pre-training model is optimally trained on it to obtain an optimized training model; image data with real noise refers to image data containing real noise obtained by actual shooting. In this embodiment, optimized training of the pre-training model is realized through the following sequence of operations: (a) holding a mobile phone steady and shooting N images with the same exposure parameters (N is a positive integer; the larger, the better); (b) using a multi-frame image denoising method commonly used in the prior art, taking the N images as input and performing multi-frame denoising to obtain a high-quality denoised image; and (c) further improving the quality of the denoised image with an image enhancement method (such as HDR) to obtain high-quality, noise-free training data.
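Step (b) — fusing N same-exposure shots into one high-quality target — can be approximated with a simple pixel-wise median across the stack. This is only a stand-in for the unspecified prior-art multi-frame denoiser; the function name is an assumption.

```python
import numpy as np

def fuse_frames(frames):
    """Fuse N same-exposure shots into one cleaner image by taking the
    pixel-wise median across the stack. A simple stand-in for the
    'multi-frame denoising method commonly used in the prior art'
    referenced in step (b); the real algorithm is not specified here."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames], axis=0)
    return np.median(stack, axis=0)
```

Because the median is robust to outliers, the fused result is typically much closer to the true scene than any single noisy shot.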
In one embodiment, before the step S106, the method includes:
acquiring a third preset number of high-quality images;
And adding noise to the high-quality image according to a preset method to obtain image data simulating noise.
In a specific application, a third preset number of high-quality images are acquired, and noise is added to them according to a preset method to obtain image data with simulated noise. A high-quality image is an image with relatively low noise (relative to the quality of image data with ordinary real noise) or an image that has undergone noise reduction processing. The preset method is a method of adding noise to the image according to the actual situation, for example, applying a radiometric transformation or a homography transformation, or adding Gaussian or Poisson noise to the image.
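One concrete instance of such a "preset method" might combine signal-dependent (Poisson) noise with additive read (Gaussian) noise. The function name, parameter values, and seeding below are illustrative assumptions, not values given by the text.

```python
import numpy as np

def simulate_noise(clean, gaussian_sigma=5.0, photon_scale=30.0, seed=0):
    """Degrade a clean 8-bit image with signal-dependent (Poisson) noise
    plus read (Gaussian) noise — one possible 'preset method' for
    producing simulated-noise training data."""
    rng = np.random.default_rng(seed)
    img = np.asarray(clean, dtype=np.float64)
    # shot noise: per-pixel photon counts drawn, then rescaled to [0, 255]
    noisy = rng.poisson(img / 255.0 * photon_scale) / photon_scale * 255.0
    # read noise: additive zero-mean Gaussian
    noisy = noisy + rng.normal(0.0, gaussian_sigma, size=img.shape)
    return np.clip(noisy, 0.0, 255.0)
```

Pairing each clean image with its degraded copy yields the (input, target) examples used for pre-training.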
In one embodiment, the step S106 includes:
and acquiring the image data of the simulation noise, and performing independent pre-training on the time-space domain convolutional neural network, the space domain convolutional neural network and the motion compensation convolutional neural network according to the image data of the simulation noise to obtain a pre-training model.
In a specific application, the image data with simulated noise is obtained, and the time-space domain convolutional neural network, the space domain convolutional neural network, and the motion compensation convolutional neural network are each pre-trained independently on this data, thereby accelerating the convergence of each neural network's training and obtaining a pre-training model with a good initial state.
In one embodiment, the step S107 includes:
acquiring image data of real noise of a fourth preset number of frames, and selecting one frame of image in the image data of the real noise of the fourth preset number of frames as a training reference frame image;
carrying out noise reduction processing on the training reference frame image according to a preset noise reduction algorithm of a fourth preset number of frames to obtain first image data;
selecting image data of a fifth preset number of frames from the image data of the real noise of the fourth preset number of frames as input image data, and performing end-to-end optimization training on the pre-training model by taking the first image data as output image data to obtain an optimization training model; wherein the fourth preset number is greater than the fifth preset number, and the image data of the fifth preset number of frames includes the training reference frame image.
In a specific application, image data of real noise of a fourth preset number of frames is acquired, and one frame among them is selected as the training reference frame image. The training reference frame image is denoised using the fourth preset number of frames according to a preset noise reduction algorithm to obtain the first image data. Image data of a fifth preset number of frames is then selected from the fourth preset number of frames of real-noise image data as input image data, and with the first image data as output image data, end-to-end optimization training is performed on the pre-training model (i.e., the whole network is optimized) to obtain an optimized training model for processing image data with real noise. The fourth preset number is the preset number of real-noise images used for optimization training; the fifth preset number is the preset number of real-noise images actually fed to the deep learning algorithm during optimization training, and the fourth preset number is set larger than the fifth preset number. The image data of the fifth preset number of frames includes the training reference frame image. The training reference frame image is the image, taken from the real-noise image data of the fourth preset number of frames, used for optimized training of the pre-training model; the reference frame image in the first embodiment, by contrast, is the image (one frame among the first preset number of frames) with which the other original images are aligned through the motion compensation convolutional neural network in the optimized model.
It should be noted that selecting one frame from the real-noise image data of the fourth preset number of frames as the training reference frame image may be done in different ways depending on the actual situation: for example, through the selection operation of the second embodiment, by random selection, or by presetting a selection condition and choosing the frame that satisfies it. For example, 30 frames (i.e., the fourth preset number of frames) are shot of one scene, one of them is selected as the training reference frame image (through the selection operation of the second embodiment), and the training reference frame image is denoised using the 30 shot frames with a preset noise reduction algorithm (a sophisticated multi-frame denoising algorithm commonly used in the prior art) to obtain a high-quality image (i.e., the first image data). During training of the deep model, M noisy frames (e.g., M = 6; that is, the fifth preset number of frames is any 6 of the above 30 frames, which must include the training reference frame image) are used as the model's input image data, and the first image data described above as the model's output image data, to optimize the pre-training model.
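The frame-sampling rule in the example above (M = 6 of 30 frames, always including the training reference frame) can be sketched as follows; the function name and seed handling are assumptions for illustration.

```python
import numpy as np

def sample_training_frames(num_frames, ref_index, m, seed=0):
    """Pick m of num_frames real-noise frames as network input, always
    including the training reference frame (m = 6, num_frames = 30 in
    the example above)."""
    rng = np.random.default_rng(seed)
    # every frame except the reference frame is a candidate
    others = [i for i in range(num_frames) if i != ref_index]
    picked = rng.choice(others, size=m - 1, replace=False).tolist()
    # the reference frame is always part of the input set
    return sorted(picked + [ref_index])
```

Repeating this sampling across epochs yields many distinct input sets from the same 30 shots, which is one way the same capture session can produce multiple training examples.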
In this embodiment, depending on the actual situation, the first image data may optionally be further optimized by a preset image enhancement method to obtain second image data, which is stored so that end-to-end optimization training of the pre-training model can be performed with the second image data to obtain an optimized training model for processing subsequent image data with real noise. The preset noise reduction algorithm in this embodiment may be any sophisticated multi-frame image denoising method commonly used in the prior art (the same as the prior-art multi-frame denoising algorithm used in step (b) above); the first image data is the real-noise image data obtained by denoising the training reference frame image; the second image data is the real-noise image data obtained by optimizing the first image data through the preset image enhancement method; the preset image enhancement method is a preset method, such as HDR, for preliminarily optimizing the first image data. In this way, multiple groups of training data can be generated, improving the efficiency of optimizing the pre-training model.
According to the embodiment, the image data of the simulated noise is used for pre-training, and a large number of images of the real noise are used for optimizing training, so that the noise reduction efficiency of the images is improved, and the quality of the images is further improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Example IV
As shown in fig. 4, the present embodiment provides a multi-frame noise reduction device 100 based on deep learning, which is used to perform the method steps in the first embodiment. The multi-frame noise reduction device 100 based on deep learning provided in this embodiment includes:
a first obtaining module 101, configured to obtain an original image of a first preset number of frames;
a selection module 102, configured to select one frame of original images in the first preset number of frames of original images as a reference frame image;
the noise reduction module 103 is configured to perform preliminary spatial domain noise reduction on the original image of the first preset number of frames through a spatial domain convolutional neural network;
an alignment module 104, configured to perform an alignment operation on each of the other original images of the first preset number of frames and the reference frame image through a motion compensation convolutional neural network, so that all the original images of the first preset number of frames are aligned;
And the stitching module 105 is configured to stitch all the aligned original images in the first preset number of frames, and perform convolution processing according to the time-space domain convolution neural network, so as to generate a target image after noise reduction.
In this embodiment, original images of a first preset number of frames are acquired; one of the original images is selected as the reference frame image; preliminary spatial-domain noise reduction is performed on the original images through the space domain convolutional neural network; each of the other original images is aligned with the reference frame image through the motion compensation convolutional neural network so that all the original images are aligned; and all the aligned original images are spliced and convolved by the time-space domain convolutional neural network to generate the noise-reduced target image. This reduces the noise in images captured by the intelligent terminal and improves image quality, particularly under low-light (low luminous flux) conditions.
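The splice-and-convolve step performed by the stitching module 105 can be illustrated with a minimal sketch: the aligned frames are stacked along a new temporal axis and collapsed with a weighted per-frame sum, standing in for a single temporal tap of the time-space domain convolutional neural network. The function name and the one-tap kernel are illustrative assumptions, not the patent's actual network.

```python
import numpy as np

def splice_and_fuse(aligned_frames, temporal_kernel):
    """Splice the aligned frames by stacking them along a new temporal
    axis, then collapse that axis with a weighted sum — a single-tap toy
    stand-in for the time-space domain convolutional neural network."""
    stack = np.stack(aligned_frames, axis=0)             # (T, H, W)
    # one weight per frame, acting like a 1x1x1 temporal convolution
    return np.tensordot(temporal_kernel, stack, axes=1)  # (H, W)
```

In the real system the learned network applies many such spatio-temporal filters; here a kernel summing to 1 simply averages the frames with learned-like weights.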
Example five
As shown in fig. 5, in this embodiment, the selection module 102 in the fourth embodiment further includes the following structures for performing the method steps in the second embodiment:
a first sorting unit 1021, configured to sort original images of the first preset number of frames according to a capturing time of each frame of original image in the original images of the first preset number of frames, and select an original image of a second preset number of frames located in the middle of the sorting as an alternative reference frame image of the second preset number of frames;
A second sorting unit 1022, configured to perform edge filtering processing on the candidate reference frame images of each frame, obtain edge filtering response values of pixels of the candidate reference frame images of each frame, and sort the pixels;
a first obtaining unit 1023, configured to obtain the average value of a preset proportion of pixels ranked highest by edge filter response value in each frame of the candidate reference frame image;
and a selecting unit 1024, configured to select the candidate reference frame image with the largest average value as the reference frame image.
In this application, the sharpness of an image is judged effectively and rapidly through edge filter response processing, and the alternative reference frame image with the largest average value over the preset proportion of pixels ranked highest by edge filter response value is computed and used as the reference frame image. A reference frame that is sharper and more strongly correlated with the other, non-reference frames is thereby selected, which effectively improves the results of the subsequent alignment and fusion operations.
Example six
As shown in fig. 6, in the present embodiment, the multi-frame noise reduction device 100 based on deep learning in the fourth embodiment further includes the following structure for performing the method steps in the third embodiment:
a second obtaining module 106, configured to obtain image data of the analog noise and perform pre-training according to the image data of the analog noise to obtain a pre-training model;
And a third obtaining module 107, configured to obtain image data of real noise, and perform optimization training on the pre-training model according to the image data of real noise, so as to obtain an optimization training model.
In one embodiment, the multi-frame noise reduction device 100 based on deep learning further includes:
a fourth acquisition module for acquiring a third preset number of high-quality images;
and the noise adding module is used for adding noise to the high-quality image according to a preset method to obtain image data of analog noise.
In one embodiment, the second acquisition module 106 includes:
the pre-training unit is used for acquiring the image data of the simulation noise, and performing independent pre-training on the time-space domain convolutional neural network, the space domain convolutional neural network and the motion compensation convolutional neural network according to the image data of the simulation noise to obtain a pre-training model.
In one embodiment, the third obtaining module 107 includes:
a second obtaining unit, configured to obtain image data of real noise of a fourth preset number of frames, and select one frame image in the image data of real noise of the fourth preset number of frames as a training reference frame image;
The noise reduction unit is used for carrying out noise reduction processing on the training reference frame image according to a preset noise reduction algorithm of a fourth preset number of frames to obtain first image data;
the training unit is used for selecting image data of a fifth preset number of frames from the image data of the real noise of the fourth preset number of frames as input image data, taking the first image data as output image data, and performing end-to-end optimization training on the pre-training model to obtain an optimized training model; wherein the fourth preset number is greater than the fifth preset number, and the image data of the fifth preset number of frames includes the training reference frame image.
According to the embodiment, the image data of the simulated noise is used for pre-training, and a large number of images of the real noise are used for optimizing training, so that the noise reduction efficiency of the images is improved, and the quality of the images is further improved.
Example seven
Fig. 7 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 7, the terminal device 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72 stored in the memory 71 and executable on the processor 70, such as a multi-frame noise reduction program based on deep learning. The processor 70, when executing the computer program 72, implements the steps of the various embodiments of the multi-frame noise reduction method based on deep learning described above, such as steps S101 to S105 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 101 to 105 shown in fig. 4.
By way of example, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 72 in the terminal device 7. For example, the computer program 72 may be divided into a first obtaining module, a selecting module, a noise reduction module, an alignment module and a splicing module, where specific functions of each module are described in the fourth embodiment, and are not described herein.
The terminal device 7 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, a processor 70 and a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation of the terminal device 7, which may include more or fewer components than illustrated, combine certain components, or have different components; for example, the terminal device may further include an input-output device, a network access device, a bus, etc.
The processor 70 may be a central processing unit (Central Processing Unit, CPU), or may be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), an off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. The memory 71 may be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital Card (SD), a Flash memory Card (Flash Card) or the like, which are provided in the terminal device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 71 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. A multi-frame noise reduction method based on deep learning, comprising:
acquiring an original image of a first preset number of frames; the first preset number of frames is determined based on an intensity of ambient light; the lower the intensity of the ambient light, the larger the first preset number of frames;
selecting one frame of original images in the original images of the first preset number of frames as a reference frame image;
performing preliminary spatial domain noise reduction on the original image of the first preset number of frames through a spatial domain convolutional neural network;
performing alignment operation on other original images of each frame in the original images of the first preset number of frames and the reference frame image through a motion compensation convolutional neural network, so that all original images in the original images of the first preset number of frames are aligned;
All original images in the aligned original images of the first preset number of frames are spliced, convolution processing is carried out according to a time-space domain convolution neural network, and a target image after noise reduction is generated;
the selecting one frame of original images in the original images of the first preset number of frames as the reference frame image includes:
sequencing the original images of the first preset number of frames according to the shooting time of each frame of original images in the original images of the first preset number of frames, and selecting the original images of a second preset number of frames positioned in the middle of the sequencing as alternative reference frame images of the second preset number of frames;
respectively carrying out edge filtering processing on the alternative reference frame images of each frame, obtaining edge filtering response values of pixels of the alternative reference frame images of each frame, and sequencing;
acquiring an average value of a preset proportion of pixels ranked highest by edge filter response value in each frame of the alternative reference frame image;
and selecting the candidate reference frame image with the maximum average value as the reference frame image.
2. The method for multi-frame noise reduction based on deep learning according to claim 1, wherein before the obtaining the original image of the first preset number of frames, the method comprises:
Acquiring image data of the simulation noise and performing pre-training according to the image data of the simulation noise to obtain a pre-training model;
and obtaining image data of the real noise, and carrying out optimization training on the pre-training model according to the image data of the real noise to obtain an optimization training model.
3. The multi-frame noise reduction method based on deep learning according to claim 2, wherein the obtaining the image data of the simulation noise and performing the pre-training according to the image data of the simulation noise to obtain the pre-training model includes:
and acquiring the image data of the simulation noise, and performing independent pre-training on the time-space domain convolutional neural network, the space domain convolutional neural network and the motion compensation convolutional neural network according to the image data of the simulation noise to obtain a pre-training model.
4. The multi-frame noise reduction method based on deep learning according to claim 2, wherein the obtaining image data of real noise and performing optimization training on the pre-training model according to the image data of real noise to obtain an optimized training model comprises:
acquiring image data of real noise of a fourth preset number of frames, and selecting one frame of image in the image data of the real noise of the fourth preset number of frames as a training reference frame image;
Carrying out noise reduction processing on the training reference frame image according to a preset noise reduction algorithm of a fourth preset number of frames to obtain first image data;
selecting image data of a fifth preset number of frames from the image data of the real noise of the fourth preset number of frames as input image data, and performing end-to-end optimization training on the pre-training model by taking the first image data as output image data to obtain an optimization training model; wherein the fourth preset number is greater than the fifth preset number, and the image data of the fifth preset number of frames includes the training reference frame image.
5. A multi-frame noise reduction device based on deep learning, comprising:
the first acquisition module is used for acquiring the original images of the first preset number of frames; the first preset number of frames is determined based on an intensity of ambient light; the lower the intensity of the ambient light, the larger the first preset number of frames;
the selection module is used for selecting one frame of original images in the original images of the first preset number of frames as a reference frame image;
the noise reduction module is used for performing preliminary spatial domain noise reduction on the original images of the first preset number of frames through a spatial domain convolutional neural network;
The alignment module is used for performing alignment operation on other original images of each frame in the original images of the first preset number of frames and the reference frame image through the motion compensation convolutional neural network so as to align all the original images in the original images of the first preset number of frames;
the splicing module is used for splicing all the original images in the aligned original images of the first preset number of frames, carrying out convolution processing according to the time-space domain convolution neural network and generating a target image after noise reduction;
the selection module comprises:
the first ordering unit is used for ordering the original images of the first preset number of frames according to the shooting time of each frame of original image in the original images of the first preset number of frames, and selecting the original images of the second preset number of frames positioned in the middle of the ordering as the alternative reference frame images of the second preset number of frames;
the second ordering unit is used for respectively carrying out edge filtering processing on the alternative reference frame images of each frame, obtaining edge filtering response values of pixels of the alternative reference frame images of each frame and ordering the edge filtering response values;
a first obtaining unit, configured to obtain an average value of a preset proportion of pixels ranked highest by edge filter response value in each frame of the candidate reference frame image;
And the selection unit is used for selecting the candidate reference frame image with the maximum average value as the reference frame image.
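A minimal sketch of this reference-frame selection, assuming a simple 4-neighbour Laplacian as the edge filter (the claim does not fix a particular filter, candidate count, or proportion — those values here are illustrative):

```python
import numpy as np

def select_reference_frame(frames, timestamps, num_candidates=3, top_fraction=0.1):
    """Order frames by shooting time, take the middle ones as candidates,
    and return the index of the candidate whose top edge-filter responses
    have the largest mean, i.e. the sharpest candidate."""
    order = np.argsort(timestamps)
    mid = len(order) // 2
    start = max(0, mid - num_candidates // 2)
    candidates = order[start:start + num_candidates]

    def sharpness(img):
        # 4-neighbour Laplacian magnitude as the edge filter response
        lap = np.abs(4 * img[1:-1, 1:-1]
                     - img[:-2, 1:-1] - img[2:, 1:-1]
                     - img[1:-1, :-2] - img[1:-1, 2:])
        k = max(1, int(top_fraction * lap.size))
        # mean over the preset proportion of highest responses
        return np.sort(lap, axis=None)[-k:].mean()

    return int(max(candidates, key=lambda i: sharpness(frames[i])))
```

Restricting the candidates to the middle of the burst favours frames least affected by the shake at the start and end of capture, while the edge-response ranking picks the least blurred of those candidates.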
6. The deep learning based multi-frame noise reduction apparatus of claim 5, further comprising:
the second acquisition module is used for acquiring image data with simulated noise and performing pre-training according to the image data with simulated noise to obtain a pre-trained model;
and the third acquisition module is used for acquiring image data with real noise and performing optimization training on the pre-trained model according to the image data with real noise to obtain an optimized training model.
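The two-stage training in this claim — pre-training on simulated noise, then optimization training on real noise — can be illustrated on a toy scalar "denoiser" y = w·x fitted by gradient descent; the noise models, learning rate, and step counts below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=1000)  # ground-truth "clean" signal

def mse_step(w, noisy, lr=0.1):
    # one gradient step on mean((w * noisy - clean)^2) for a scalar weight w
    grad = np.mean(2 * (w * noisy - clean) * noisy)
    return w - lr * grad

# stage 1: pre-training on simulated (Gaussian) noise
w = 1.0
for _ in range(200):
    w = mse_step(w, clean + rng.normal(0.0, 0.1, size=clean.shape))
w_pretrained = w

# stage 2: optimization training on "real" noise
# (stood in for here by heavier-tailed Laplacian noise)
for _ in range(200):
    w = mse_step(w, clean + rng.laplace(0.0, 0.05, size=clean.shape))
w_finetuned = w
```

Pre-training on cheap, unlimited synthetic data and then fine-tuning on scarce real noisy/clean pairs is a common strategy for learned denoisers, since real sensor noise is expensive to collect but differs statistically from simple simulated noise.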
7. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 4 when executing the computer program.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 4.
CN201810931283.XA 2018-08-15 2018-08-15 Multi-frame noise reduction method and device based on deep learning and terminal equipment Active CN110838088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810931283.XA CN110838088B (en) 2018-08-15 2018-08-15 Multi-frame noise reduction method and device based on deep learning and terminal equipment

Publications (2)

Publication Number Publication Date
CN110838088A CN110838088A (en) 2020-02-25
CN110838088B true CN110838088B (en) 2023-06-02

Family

ID=69574031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810931283.XA Active CN110838088B (en) 2018-08-15 2018-08-15 Multi-frame noise reduction method and device based on deep learning and terminal equipment

Country Status (1)

Country Link
CN (1) CN110838088B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709324B (en) * 2020-05-21 2024-11-15 武汉Tcl集团工业研究院有限公司 Video noise reduction method, video noise reduction device and video noise reduction terminal
CN114693857B (en) * 2020-12-30 2025-08-08 华为技术有限公司 Ray tracing multi-frame noise reduction method, electronic device, chip and readable storage medium
CN115002297A (en) * 2021-03-02 2022-09-02 北京迈格威科技有限公司 Image denoising method, training method, device, electronic equipment and storage medium
CN113177497B (en) * 2021-05-10 2024-04-12 百度在线网络技术(北京)有限公司 Training method of visual model, vehicle identification method and device
CN113379633A (en) * 2021-06-15 2021-09-10 支付宝(杭州)信息技术有限公司 Multi-frame image processing method and device
CN115243044A (en) * 2022-07-27 2022-10-25 中国科学技术大学 Reference frame selection method and apparatus, device, and storage medium
CN116320343A (en) * 2022-11-30 2023-06-23 京东方科技集团股份有限公司 Video noise reduction method and device, video processing method and device and computer equipment
CN117830104B (en) * 2023-12-29 2025-03-28 摩尔线程智能科技(成都)有限责任公司 Image super-resolution processing method, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104427240A (en) * 2013-09-09 2015-03-18 深圳富泰宏精密工业有限公司 Electronic device and image adjustment method thereof
CN107230192A (en) * 2017-05-31 2017-10-03 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107273894A (en) * 2017-06-15 2017-10-20 珠海习悦信息技术有限公司 License plate recognition method, device, storage medium and processor
CN107895145A (en) * 2017-10-31 2018-04-10 南京信息工程大学 Method for estimating finger stress based on a convolutional neural network combined with super-Gaussian denoising
CN108090501A (en) * 2017-11-24 2018-05-29 华南农业大学 Bacteriostatic level recognition method based on plate experiments and deep learning
CN108391035A (en) * 2018-03-26 2018-08-10 华为技术有限公司 A shooting method, device and equipment

Also Published As

Publication number Publication date
CN110838088A (en) 2020-02-25

Similar Documents

Publication Publication Date Title
CN110838088B (en) Multi-frame noise reduction method and device based on deep learning and terminal equipment
US11055827B2 (en) Image processing apparatus and method
CN111353948B (en) Image noise reduction method, device and equipment
CN106920221B (en) Take into account the exposure fusion method that Luminance Distribution and details are presented
CN112272832B (en) Method and system for DNN-based imaging
RU2397542C2 (en) Method and device for creating images with high dynamic range from multiple exposures
CN110675336A (en) Low-illumination image enhancement method and device
JP2020530920A (en) Image lighting methods, devices, electronics and storage media
WO2014172059A2 (en) Reference image selection for motion ghost filtering
CN109886892A (en) Image processing method, image processing apparatus and storage medium
Jaroensri et al. Generating training data for denoising real rgb images via camera pipeline simulation
CN114581355B (en) A method, terminal and electronic device for reconstructing HDR image
KR20190018760A (en) Method for generating high-dynamic range image, camera device, terminal and imaging method
Zhang et al. Deep motion blur removal using noisy/blurry image pairs
CN114494044A (en) Attention mechanism-based image enhancement method and system and related equipment
CN110807735A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
CN110717864A (en) Image enhancement method and device, terminal equipment and computer readable medium
Conde et al. Bsraw: Improving blind raw image super-resolution
CN112418279A (en) Image fusion method, device, electronic device and readable storage medium
CN113538265B (en) Image denoising method and device, computer readable medium, and electronic device
CN113256501B (en) Image processing method, storage medium and terminal equipment
CN111953888B (en) Dim light imaging method and device, computer readable storage medium and terminal equipment
CN111383171B (en) Picture processing method, system and terminal equipment
Han et al. Low brightness PCB image enhancement algorithm for FPGA
CN111383188B (en) Image processing method, system and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL technology building, No.17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018355

Country of ref document: HK

GR01 Patent grant