CN112950502B - Image processing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN112950502B (application number CN202110221009.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- light images
- motion
- visible light
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The present application relates to the field of computer technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, there are two main categories of video noise reduction methods: spatial-domain noise reduction and temporal-domain noise reduction. A spatial-domain method only needs to process the current video frame, whereas a temporal-domain method processes the current video frame with reference to video frames other than the current frame (referred to as reference frames). For temporal-domain methods, the reference frame must be registered with the current video frame before noise reduction; current registration methods suffer from a relatively high registration error rate.
Summary of the Invention
To solve the above technical problem, embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium.
An embodiment of the present application provides an image processing method, the method including:
acquiring a plurality of visible light images and a plurality of invisible light images;
determining image registration parameters based on the plurality of invisible light images;
registering the plurality of visible light images based on the image registration parameters.
In some optional embodiments of the present application, acquiring the plurality of visible light images and the plurality of invisible light images includes:
acquiring a plurality of visible light images collected by a first image sensor and a plurality of invisible light images collected by a second image sensor, where the plurality of visible light images and the plurality of invisible light images correspond to each other according to collection time.
In some optional embodiments of the present application, the plurality of visible light images include a first image and a second image, and the plurality of invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image;
determining the image registration parameters based on the plurality of invisible light images includes:
extracting a first feature point set from the third image and a second feature point set from the fourth image;
determining, from the first feature point set and the second feature point set, at least one pair of feature points having a matching relationship;
determining the image registration parameters based on coordinate information of the at least one pair of feature points.
In some optional embodiments of the present application, the method further includes:
adjusting the image registration parameters based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
In some optional embodiments of the present application, registering the plurality of visible light images based on the image registration parameters includes:
transforming the second image based on the image registration parameters to obtain a fifth image registered with the first image.
In some optional embodiments of the present application, the method further includes:
performing image fusion processing on the fifth image and the first image to obtain a sixth image, the sixth image being the first image with noise removed.
In some optional embodiments of the present application, performing image fusion processing on the fifth image and the first image includes:
averaging the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
In some optional embodiments of the present application, performing image fusion processing on the fifth image and the first image includes:
generating a motion mask image, the motion mask image being used to determine a motion area and a non-motion area;
determining the pixel value of each pixel in the motion area of the first image as the pixel value of the corresponding pixel in the motion area of the sixth image;
averaging the pixel value of each pixel in the non-motion area of the fifth image with the pixel value of the corresponding pixel in the non-motion area of the first image to obtain the pixel value of each pixel in the non-motion area of the sixth image.
In some optional embodiments of the present application, generating the motion mask image includes:
calculating the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image;
where the area formed by pixels in the motion mask image whose pixel values are greater than or equal to a threshold is the motion area, and the area formed by pixels whose pixel values are less than the threshold is the non-motion area.
In some optional embodiments of the present application, the invisible light image is an infrared image.
An embodiment of the present application provides an image processing apparatus, the apparatus including:
an acquisition unit, configured to acquire a plurality of visible light images and a plurality of invisible light images;
a parameter determination unit, configured to determine image registration parameters based on the plurality of invisible light images;
an image registration unit, configured to register the plurality of visible light images based on the image registration parameters.
In some optional embodiments of the present application, the acquisition unit is configured to acquire a plurality of visible light images collected by a first image sensor and a plurality of invisible light images collected by a second image sensor, where the plurality of visible light images and the plurality of invisible light images correspond to each other according to collection time.
In some optional embodiments of the present application, the plurality of visible light images include a first image and a second image, and the plurality of invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image;
the parameter determination unit is configured to extract a first feature point set from the third image and a second feature point set from the fourth image; determine, from the first feature point set and the second feature point set, at least one pair of feature points having a matching relationship; and determine the image registration parameters based on coordinate information of the at least one pair of feature points.
In some optional embodiments of the present application, the parameter determination unit is further configured to adjust the image registration parameters based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
In some optional embodiments of the present application, the image registration unit is configured to transform the second image based on the image registration parameters to obtain a fifth image registered with the first image.
In some optional embodiments of the present application, the apparatus further includes:
an image fusion unit, configured to perform image fusion processing on the fifth image and the first image to obtain a sixth image, the sixth image being the first image with noise removed.
In some optional embodiments of the present application, the image fusion unit is configured to average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
In some optional embodiments of the present application, the image fusion unit is configured to generate a motion mask image, the motion mask image being used to determine a motion area and a non-motion area; determine the pixel value of each pixel in the motion area of the first image as the pixel value of the corresponding pixel in the motion area of the sixth image; and average the pixel value of each pixel in the non-motion area of the fifth image with the pixel value of the corresponding pixel in the non-motion area of the first image to obtain the pixel value of each pixel in the non-motion area of the sixth image.
In some optional embodiments of the present application, the image fusion unit is configured to calculate the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image, where the area formed by pixels in the motion mask image whose pixel values are greater than or equal to a threshold is the motion area, and the area formed by pixels whose pixel values are less than the threshold is the non-motion area.
In some optional embodiments of the present application, the invisible light image is an infrared image.
An embodiment of the present application further provides an electronic device, including a memory and a processor, where a computer program is stored in the memory, and when the computer program is executed by the processor, the processor performs the image processing method described in the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the image processing method described in the above embodiments.
In the technical solutions of the embodiments of the present application, the image registration parameters are determined based on a plurality of invisible light images. Compared with determining the registration parameters directly from a plurality of visible light images, the invisible light images carry richer dark-area information and capture the features of real objects more accurately, so the computed registration parameters are more precise. Registering the plurality of visible light images with these parameters improves the registration accuracy of the visible light images and provides a solid basis for the subsequent noise reduction of the visible light images.
Brief Description of the Drawings
Figure 1 is a flowchart of temporal-domain noise reduction provided by an embodiment of the present application;
Figure 2 is a schematic flowchart of the image processing method provided by an embodiment of the present application;
Figure 3 is a schematic diagram of the correspondence between a visible light image sequence and an invisible light image sequence provided by an embodiment of the present application;
Figure 4 is a flowchart of image fusion based on a motion mask image provided by an embodiment of the present application;
Figure 5 is a first schematic diagram of the principle provided by an embodiment of the present application;
Figure 6 is a schematic flowchart of computing the registration transformation matrix provided by an embodiment of the present application;
Figure 7 is a second schematic diagram of the principle provided by an embodiment of the present application;
Figure 8 is a schematic diagram of determining registration transformation parameters by region provided by an embodiment of the present application;
Figure 9 is a schematic structural diagram of the image processing apparatus provided by an embodiment of the present application;
Figure 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specifically stated, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application.
At the same time, it should be understood that, for convenience of description, the dimensions of the various parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present application or its application or uses.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be considered part of the specification.
It should be noted that similar reference numerals and letters refer to similar items in the following figures; therefore, once an item is defined in one figure, it does not need to be further discussed in subsequent figures.
To facilitate understanding of the technical solutions of the embodiments of the present application, the related art is described below. It should be noted that the following description of the related art is provided for understanding the technical solutions of the embodiments of the present application and does not limit them.
At present, there are two main categories of video noise reduction methods: spatial-domain noise reduction and temporal-domain noise reduction. A spatial-domain method operates on a single frame only; specifically, neighborhood analysis and processing are performed within that single frame to filter out noise. As an example, spatial-domain noise reduction can be implemented with a bilateral filtering algorithm or a Non-Local Means algorithm. A temporal-domain method, by contrast, analyzes and processes the current frame with reference to the information of video frames other than the current frame (referred to as reference frames), for example using the frame preceding the current frame as the reference frame. Since temporal noise flickers over time while the real image information remains unchanged, adding a reference frame makes it easier to distinguish real image information from noise, so noise can be removed more accurately.
Figure 1 is a typical flowchart of video temporal-domain noise reduction. As shown in Figure 1, the reference frame is image-registered so that the registered reference frame is aligned with the current frame; image fusion is then performed on the registered reference frame and the current frame, which implements the denoising and yields the denoised output frame. It should be pointed out that the denoised output frame can be understood as the current frame after denoising.
In the flowchart shown in Figure 1, image registration is the key step. In consecutive video frames, the shooting position of each frame may move and each frame may contain moving objects, so the image information of two adjacent frames may be offset. If the relative position of an object is inconsistent between two adjacent frames, the subsequent image fusion cannot identify the same point in the two frames. The image registration algorithm is the key to handling this problem: it analyzes the typical features of the two adjacent frames and aligns their image information in position according to feature matching.
In some schemes, image registration algorithms use feature point matching, where feature points are points with distinct features in the image, such as corner points. The drawback of this scheme is that in darker scenes, for example a pitch-dark street at night, the picture information in the video is very dark and it is difficult to extract the real feature points; moreover, the noise at night is also very strong and interferes with the judgment of the real image information, causing registration errors, which in turn produce severe ghosting and similar problems in the video after temporal-domain noise reduction.
To this end, the following technical solutions of the embodiments of the present application are proposed. In these solutions, a first image sensor and a second image sensor are used to simultaneously collect a visible light image sequence and an invisible light image sequence, and the invisible light image sequence is used to determine image registration parameters applicable to the visible light image sequence. This improves the registration accuracy of the visible light image sequence and thus helps improve its signal-to-noise ratio.
The image processing method provided by the embodiments of the present application is applied to an image processing apparatus, which can be provided on an electronic device. In some embodiments, the electronic device is, for example, a mobile phone, a tablet computer, a wearable device, an interactive advertising machine, a game console, a desktop computer, an all-in-one machine, a vehicle-mounted terminal, and the like.
The image processing method and the image processing apparatus provided by the embodiments of the present application are described below.
Figure 2 is a schematic flowchart of the image processing method provided by an embodiment of the present application. As shown in Figure 2, the image processing method includes the following steps:
Step 201: Acquire a plurality of visible light images and a plurality of invisible light images.
In the embodiments of the present application, the electronic device has a first image sensor and a second image sensor, where the first image sensor is used to collect visible light images and the second image sensor is used to collect invisible light images. In some optional embodiments, the invisible light image is an infrared image.
In some optional embodiments, the first image sensor is a Complementary Metal-Oxide-Semiconductor (CMOS) sensor and the second image sensor is an infrared (IR) sensor. The CMOS sensor is used to collect (also described as capturing or shooting) visible light images; specifically, it collects the visible light bands in the scene, namely the red (R), green (G), and blue (B) bands, and after these bands are processed by a processor (such as an Image Signal Processing (ISP) unit) a color image, called a visible light image, is obtained. As an example, the data format of the visible light image may be YUV. The IR sensor is used to collect invisible light images; specifically, it collects infrared light in the scene, whose frequency is lower than that of red light. Since any matter above absolute zero emits infrared radiation, the IR sensor can collect the infrared radiation produced by objects in the environment even in extremely dark conditions, so the infrared image collected by the IR sensor can contain information about objects in darker environments.
In the embodiments of the present application, acquiring the plurality of visible light images and the plurality of invisible light images includes: acquiring a plurality of visible light images collected by the first image sensor and a plurality of invisible light images collected by the second image sensor, where the plurality of visible light images and the plurality of invisible light images correspond to each other according to collection time.
Here, the correspondence according to collection time means that a visible light image and an invisible light image collected at the same moment, or within the same time period, correspond to each other. For example, referring to Figure 3, the first image sensor and the second image sensor collect image sequences at the same time, obtaining a visible light image sequence (i.e., the original image sequence) and an invisible light image sequence (such as an infrared image sequence). At time t1, the first image sensor collects visible light image 1 and the second image sensor collects invisible light image 1, so visible light image 1 corresponds to invisible light image 1. At time t2, the first image sensor collects visible light image 2 and the second image sensor collects invisible light image 2, so these two images correspond; at time t3, visible light image 3 corresponds to invisible light image 3; and so on, so that the visible light images collected by the first image sensor and the invisible light images collected by the second image sensor correspond to each other according to collection time, as illustrated by the sketch below.
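The correspondence described above can be illustrated with a minimal sketch, assuming each frame carries a capture timestamp; the nearest-timestamp pairing rule is an assumption made here for illustration, not something prescribed by this application.

```python
def pair_by_capture_time(visible_frames, ir_frames):
    """visible_frames / ir_frames: lists of (timestamp, image) tuples.
    Pairs each visible light image with the IR image whose capture time
    is closest to it (e.g. the same moment t1, t2, t3, ...)."""
    pairs = []
    for t_vis, vis in visible_frames:
        _, ir = min(ir_frames, key=lambda item: abs(item[0] - t_vis))
        pairs.append((vis, ir))
    return pairs
```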
Step 202: Determine image registration parameters based on the plurality of invisible light images.
Here, the visible light image (i.e., the original image) collected by the first image sensor has a relatively poor signal-to-noise ratio in a darker scene (such as a night scene), and the dark-area information in the visible light image has low visibility; if the visible light images output by the first image sensor are registered directly (i.e., relative-position registration), the registration error rate may be high. The invisible light image collected by the second image sensor, whether in a bright scene or a dark scene, has highly visible dark-area information, and even in scenes that are difficult to perceive visually the second image sensor can still capture the real information of the scene. Since the first image sensor and the second image sensor collect images at the same time, the relative positional relationship among the visible light images approximates the relative positional relationship among the invisible light images.
In view of the above, the image registration parameters can be determined based on the plurality of invisible light images and then used to register the plurality of visible light images. The technical solution of the embodiments of the present application is described below with "a plurality" taken as "two"; it should be pointed out that the case of "more than two" is equally applicable.
In some optional embodiments, the plurality of visible light images include a first image and a second image, and the plurality of invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image. As an example, the first image and the third image are collected by the first image sensor and the second image sensor, respectively, at the same moment (e.g., a first moment), so the first image corresponds to the third image; the second image and the fourth image are collected by the two sensors, respectively, at another moment (e.g., a second moment), so the second image corresponds to the fourth image.
In the above scheme, the first image is collected after the second image, and the third image is collected after the fourth image. As an example, the first image and the third image are the visible light image and the invisible light image currently collected, and the second image and the fourth image are the visible light image and the invisible light image collected previously. As an example, the second image is the frame preceding the first image; the first image can be called the first current frame and the second image the first reference frame. The fourth image is the frame preceding the third image; the third image can be called the second current frame and the fourth image the second reference frame, where the second current frame is, for example, the infrared current frame and the second reference frame is, for example, the infrared reference frame.
In the embodiments of the present application, an image registration parameter refers to the registration parameter of one image relative to another. For example, if the registration parameter is that of image A relative to image B, multiplying image A by the registration parameter yields a registered image A, and the registered image A is aligned with image B. Since the visible light images collected by the first image sensor and the invisible light images collected by the second image sensor correspond to each other according to collection time, the registration parameters corresponding to the invisible light images are consistent with those corresponding to the visible light images. Therefore, the registration parameters can be determined from the invisible light images and then applied to the visible light images, thereby registering the visible light images. How to determine the registration parameters from the invisible light images is described below.
In the embodiments of the present application, a first feature point set is extracted from the third image and a second feature point set is extracted from the fourth image; at least one pair of feature points having a matching relationship is determined from the first feature point set and the second feature point set; and the image registration parameters are determined based on the coordinate information of the at least one pair of feature points.
For example, suppose the third image is the infrared current frame and the fourth image is the infrared reference frame, which may be the frame preceding the infrared current frame. 1) Feature points are extracted from the infrared current frame and the infrared reference frame to obtain the first feature point set in the infrared current frame and the second feature point set in the infrared reference frame. The numbers of feature points in the two sets may be the same or different; the technical solution of the embodiments of the present application does not limit this. In a specific implementation, a feature point extraction algorithm such as the Harris algorithm or the Scale-Invariant Feature Transform (SIFT) algorithm can be used. As an example, the principle of such an algorithm is to judge, from the neighborhood information of each pixel, whether the position of that pixel is a point with a distinct feature, for example a point on the corner of an object, and thus whether the point is a feature point. 2) The feature points extracted from the infrared current frame and the infrared reference frame are matched, and the image registration parameters are computed from the matched feature points. Here, the image registration parameters can be embodied as a matrix, called the registration transformation matrix. The registration transformation matrix is a 3×3 matrix, i.e., it contains 9 parameters. As an example, the registration transformation matrix is an affine transformation matrix whose last row is (0, 0, 1), for example (denoting the matrix by M):
    M = | a  b  c |
        | d  e  f |
        | 0  0  1 |
where a, b, c, d, e, f are the parameters of the registration transformation matrix that need to be determined.
Suppose feature point A in the first feature point set matches feature point a in the second feature point set, the coordinates of A are (x1, y1, 1), and the coordinates of a are (x2, y2, 1). Then the coordinates of the two feature points satisfy the following formula:
    (x1, y1, 1)^T = M · (x2, y2, 1)^T
By fitting multiple pairs of matched feature points to the above formula, each parameter of the registration transformation matrix can be determined, and the matrix is thereby obtained. In a specific implementation, an algorithm such as Random Sample Consensus (RANSAC) can be used to match the feature points and compute the registration transformation matrix. The principle of RANSAC is to randomly sample feature points from the two frames, match the corresponding coordinate vectors, and fit the registration transformation matrix from the coordinate vectors of the matched feature points.
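A minimal sketch of this computation is given below, assuming OpenCV and NumPy are available. ORB features and cv2.estimateAffine2D with RANSAC are used purely for illustration; the embodiments above name Harris/SIFT and RANSAC but do not prescribe a particular implementation.

```python
import cv2
import numpy as np

def estimate_registration_matrix(ir_current, ir_reference):
    """Estimate the 3x3 registration transformation matrix that maps
    infrared reference-frame coordinates onto infrared current-frame
    coordinates, via feature matching and a RANSAC affine fit."""
    detector = cv2.ORB_create(1000)
    kp_cur, des_cur = detector.detectAndCompute(ir_current, None)
    kp_ref, des_ref = detector.detectAndCompute(ir_reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_cur)   # query = reference, train = current

    pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    pts_cur = np.float32([kp_cur[m.trainIdx].pt for m in matches])

    # RANSAC fit of the six affine parameters (a, b, c, d, e, f)
    A, _ = cv2.estimateAffine2D(pts_ref, pts_cur, method=cv2.RANSAC)

    return np.vstack([A, [0.0, 0.0, 1.0]])      # append the fixed row (0, 0, 1)
```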
In some optional embodiments, the first image sensor and the second image sensor share the same lens assembly. In this case, it can be understood that the two sensors are at the same position on the electronic device, so there is no positional deviation between the visible light image collected by the first image sensor and the invisible light image collected by the second image sensor, and the visible light images can be registered directly with the registration parameters determined from the invisible light images.
In some optional embodiments, the first image sensor and the second image sensor use different lens assemblies. In this case, it can be understood that there is a slight positional deviation between the two sensors on the electronic device, and thus a positional deviation between the visible light image and the invisible light image. To compensate for this deviation, the registration parameters determined from the invisible light images need to be adjusted. Specifically, the image registration parameters are adjusted based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor. Here, the calibration data may be factory calibration data of the electronic device, related to the relative positional relationship between the two sensors. By adjusting the registration parameters with this calibration data, the invisible light image is aligned in position to the visible light image, yielding registration parameters that can be applied to the visible light images, for example along the lines of the sketch below.
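One possible form of this adjustment is sketched below. Composing the IR-derived matrix with a calibration transform C is an assumption made here for illustration only, and the example value of C is hypothetical; real calibration data would come from the factory calibration of the two sensors.

```python
import numpy as np

# Hypothetical calibration transform mapping IR image coordinates onto
# visible-light image coordinates (here: a pure pixel offset).
C = np.array([[1.0, 0.0, 4.5],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])

def adjust_registration(M_ir, C=C):
    """Turn a registration matrix estimated on IR frames into one usable on
    visible-light frames: map visible coordinates into IR coordinates, apply
    the IR registration, then map back into visible coordinates."""
    return C @ M_ir @ np.linalg.inv(C)
```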
Step 203: Register the plurality of visible light images based on the image registration parameters.
In some optional embodiments, the image registration parameters are determined based on the third image and the fourth image among the plurality of invisible light images, where the third image corresponds to the first image and the fourth image corresponds to the second image among the plurality of visible light images. The second image is transformed based on the image registration parameters to obtain a fifth image registered with the first image. Here, the fifth image is the image obtained by transforming the second image, and it is aligned with the first image.
For example, the first image is the current frame output by the CMOS sensor and the second image is the reference frame output by the CMOS sensor, which may be the frame preceding the current frame. The reference frame is transformed according to the registration transformation matrix to obtain a reference frame aligned with the current frame. Specifically, the coordinates of each pixel in the reference frame are multiplied by the registration transformation matrix to obtain new coordinates, and after the pixels are rearranged according to the new coordinates the transformation is complete; at this point, the current frame and the reference frame are aligned.
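A minimal sketch of this transformation step, assuming OpenCV: the 3×3 registration transformation matrix is applied to the reference frame so that it aligns with the current frame.

```python
import cv2

def register_reference(reference_frame, M):
    """Warp the visible-light reference frame with the 3x3 registration
    matrix M; the result is the fifth image, aligned with the current frame."""
    h, w = reference_frame.shape[:2]
    return cv2.warpPerspective(reference_frame, M, (w, h))
```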
After the registration of the first image and the second image is completed through the above technical solution of the embodiments of the present application, the first image and the registered second image (i.e., the fifth image) can be fused to complete the temporal-domain denoising. Specifically, image fusion processing is performed on the fifth image and the first image to obtain a sixth image, the sixth image being the first image with noise removed.
In some optional embodiments, the image fusion processing of the fifth image and the first image can be carried out in the following ways:
Method 1: Average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
For example, assume the image has N×M pixels, where N and M are positive integers. Then the pixel value of the pixel at coordinates (xi, yj, 1) in the fifth image and the pixel value of the pixel at the same coordinates in the first image are averaged to obtain the pixel value of the pixel at (xi, yj, 1) in the sixth image, where i is a positive integer between 1 and N and j is a positive integer between 1 and M.
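A minimal sketch of fusion method 1, assuming the two images are NumPy arrays of identical shape:

```python
import numpy as np

def fuse_by_average(fifth_image, first_image):
    """Pixel-wise average of the registered reference frame (fifth image)
    and the current frame (first image), yielding the sixth image."""
    avg = (fifth_image.astype(np.float32) + first_image.astype(np.float32)) / 2.0
    return avg.astype(first_image.dtype)
```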
Method 2: Generate a motion mask image, the motion mask image being used to determine a motion area and a non-motion area; determine the pixel value of each pixel in the motion area of the first image as the pixel value of the corresponding pixel in the motion area of the sixth image; and average the pixel value of each pixel in the non-motion area of the fifth image with the pixel value of the corresponding pixel in the non-motion area of the first image to obtain the pixel value of each pixel in the non-motion area of the sixth image.
Here, for more accurate denoising, motion detection can be performed on the image. Specifically, by comparing the fifth image with the first image, the motion area and the non-motion area in the image are determined. In a specific implementation, a mask image is generated to indicate which parts of the image are motion areas and which are non-motion areas.
In some optional embodiments, the mask image can be generated as follows: calculate the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image, where the area formed by pixels in the motion mask image whose pixel values are greater than or equal to a threshold is the motion area, and the area formed by pixels whose pixel values are less than the threshold is the non-motion area.
For example, assume the image has N×M pixels, where N and M are positive integers, i is a positive integer between 1 and N, and j is a positive integer between 1 and M. If the coordinates (xi, yj, 1) lie in the motion area, the pixel value of the pixel at (xi, yj, 1) in the sixth image equals the pixel value of the pixel at (xi, yj, 1) in the first image; if the coordinates (xi, yj, 1) lie in the non-motion area, the pixel value of the pixel at (xi, yj, 1) in the sixth image equals the average of the pixel values of the pixels at (xi, yj, 1) in the fifth image and in the first image.
In some optional embodiments, the mask image can also be generated as follows: calculate the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image, and compare the difference with a threshold; if the difference is greater than or equal to the threshold, set the pixel value of the corresponding pixel in the motion mask image to 1, and if the difference is less than the threshold, set it to 0. The area formed by pixels whose value is 1 in the motion mask image is the motion area, and the area formed by pixels whose value is 0 is the non-motion area.
After the motion area and the non-motion area are determined through the above scheme, the fifth image and the first image are fused as follows: in the motion area, only the pixel values of the first image are used; in the non-motion area, the pixel values of the first image and of the fifth image are averaged pixel by pixel, as in the sketch below.
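A minimal sketch of fusion method 2, assuming NumPy arrays of identical shape; the threshold value used here is an arbitrary example, not a value prescribed by this application.

```python
import numpy as np

def fuse_with_motion_mask(fifth_image, first_image, threshold=20):
    """Build the motion mask from the pixel-wise difference, keep the current
    frame in motion areas, and average both frames in non-motion areas."""
    fifth = fifth_image.astype(np.float32)
    first = first_image.astype(np.float32)

    motion_mask = np.abs(fifth - first) >= threshold   # True -> motion area

    averaged = (fifth + first) / 2.0
    sixth = np.where(motion_mask, first, averaged)     # motion area: current frame only
    return sixth.astype(first_image.dtype)
```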
For example, referring to Figure 4, the second image is registered and transformed to obtain the fifth image, which is aligned with the first image. The motion mask image can be obtained from the difference between the fifth image and the first image, and image fusion is performed on the first image and the fifth image with reference to this motion mask image: in the motion area only the pixel values of the first image are used, while in the non-motion area the pixel values of the first image and of the fifth image are averaged pixel by pixel.
The technical solution of the embodiments of the present application is further described below with reference to Figures 5 to 7. It should be noted that, in the embodiments related to Figures 5 to 7, the first image sensor is a CMOS sensor, the first image and the second image collected by it are called the current frame and the reference frame respectively, the second image sensor is an IR sensor, and the third image and the fourth image collected by it are called the infrared current frame and the infrared reference frame respectively.
Referring to Figure 5, the CMOS sensor outputs the original image sequence (i.e., the original video) and the IR sensor outputs the infrared image sequence. Feature point matching is performed on the infrared current frame and the infrared reference frame output by the IR sensor, and the registration transformation matrix is computed from the coordinate information of the matched feature points. The reference frame output by the CMOS sensor is transformed by the registration transformation matrix to obtain the registered reference frame, which is aligned with the current frame output by the CMOS sensor. Image fusion is performed on the current frame output by the CMOS sensor and the registered reference frame to obtain the denoised output frame, completing the denoising of the current frame. The current frame is continuously updated over time, so a continuous sequence of denoised video frames can be output.
For the computation of the registration transformation matrix in Figure 5, refer to Figure 6: feature points are extracted from the infrared current frame and the infrared reference frame respectively, the feature points of the two frames are matched, and the registration transformation matrix is computed from the coordinate information of the matched feature points. Combining the flow in Figure 6 with the flow in Figure 5 yields the flow in Figure 7, which is roughly divided into two parts: one part is infrared image processing, i.e., determining the registration transformation matrix from the infrared image sequence; the other part is original image processing, i.e., registering the original image sequence with the registration transformation matrix and performing image fusion, thereby completing the video noise reduction.
In the above technical solution of the embodiments of the present application, the registration transformation parameters may be determined without distinguishing image regions, i.e., the whole image corresponds to one set of registration transformation parameters, in which case every pixel of the image corresponds to the same parameters. Alternatively, the registration transformation parameters may be determined per region. For example, referring to Figure 8, the image is divided into two regions: region 1 corresponds to registration transformation parameters 1 and region 2 corresponds to registration transformation parameters 2, where the parameters of each region are determined by the method described above; every pixel in region 1 then corresponds to parameters 1 and every pixel in region 2 corresponds to parameters 2. After the registration transformation parameters of the different regions are determined, the second image collected by the first image sensor also needs to be registered and transformed region by region: the coordinates of the pixels in the first region of the second image are transformed according to parameters 1, and the coordinates of the pixels in the second region are transformed according to parameters 2, as in the region-wise sketch below.
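A hedged sketch of this region-wise variant, reusing estimate_registration_matrix from the earlier sketch; the upper/lower split into two regions is chosen here only for illustration.

```python
import cv2
import numpy as np

def register_by_region(ref_visible, ir_current, ir_reference):
    """Estimate one registration matrix per region from the corresponding IR
    crops and warp each region of the visible-light reference frame with it."""
    h, w = ref_visible.shape[:2]
    out = np.empty_like(ref_visible)
    for top, bottom in [(0, h // 2), (h // 2, h)]:
        M = estimate_registration_matrix(ir_current[top:bottom],
                                         ir_reference[top:bottom])
        out[top:bottom] = cv2.warpPerspective(ref_visible[top:bottom], M,
                                              (w, bottom - top))
    return out
```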
As a variation of the above technical solution of the embodiments of the present application, the above technical solution may be used to determine the registration transformation parameters for only part of the image, while the registration transformation parameters of the remaining part of the image are determined from the visible light images. Still referring to Figure 8, the image is divided into two regions: region 1 corresponds to registration transformation parameter 1, which is determined based on the matched feature points of region 1 in the multiple invisible light images; region 2 corresponds to registration transformation parameter 2, which is determined based on the matched feature points of region 2 in the multiple visible light images. In one application scenario, region 1 may be a dark region of the image and region 2 may be a bright region of the image. The division into dark and bright regions is based on the visible light image: in a specific implementation, the luminance values of the pixels of the current visible light frame can be analyzed to separate the dark region from the bright region. For the dark region, the invisible light images are used to assist in determining the corresponding registration transformation parameters; for the bright region, the corresponding registration transformation parameters are determined directly from the visible light images. It should be noted that the registration transformation parameters of any region can be determined as described in the related solutions above: feature points of the two images are extracted within the region, the feature points are matched, and the registration transformation parameters of the region are calculated from the coordinate information of the matched feature points.
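A possible way to split the current visible frame into dark and bright regions by luminance is sketched below; the threshold value is an arbitrary illustrative choice, not a value specified by the embodiments.

```python
import cv2


def split_dark_bright(vis_cur, luma_thresh=60):
    """Return a uint8 mask: 255 in the dark region, 0 in the bright region.

    The threshold value 60 is an arbitrary illustrative choice; the embodiments
    only state that the split is based on the luminance of the visible frame.
    """
    gray = cv2.cvtColor(vis_cur, cv2.COLOR_BGR2GRAY) if vis_cur.ndim == 3 else vis_cur
    _, dark_mask = cv2.threshold(gray, luma_thresh, 255, cv2.THRESH_BINARY_INV)
    return dark_mask
```

The resulting mask could then drive a per-region warp such as the `warp_by_region` sketch above, with the dark-region parameter estimated from the infrared pair and the bright-region parameter estimated from the visible pair.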
As another variation of the above technical solution of the embodiments of the present application, the image fusion stage may use more than two frames. In this case, multiple visible light frames need to be registered. For example, to register L visible light frames, where L is an integer greater than 2, the L corresponding invisible light frames are analyzed to determine L-1 registration transformation parameters. Taking L=3 as an example, the registration transformation parameters between invisible light image 1 and invisible light image 2, and between invisible light image 1 and invisible light image 3, can be determined. Visible light image 1 and visible light image 2 are then registered using the registration transformation parameters between invisible light image 1 and invisible light image 2, and visible light image 1 and visible light image 3 are registered using the registration transformation parameters between invisible light image 1 and invisible light image 3. This completes the registration among visible light image 1, visible light image 2, and visible light image 3; performing image fusion on the registered visible light images 1, 2, and 3 achieves a better denoising effect. As an example, invisible light image 1 may be the current infrared frame, invisible light image 2 the infrared frame one frame before the current infrared frame, and invisible light image 3 the infrared frame two frames before the current infrared frame; correspondingly, visible light image 1 may be the current frame, visible light image 2 the frame one frame before the current frame, and visible light image 3 the frame two frames before the current frame.
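By way of illustration, registering L visible frames (L greater than 2) against the current frame via their infrared counterparts might look like the following sketch; `estimate_registration` is the hypothetical helper from the earlier sketch.

```python
import cv2


def register_l_frames(vis_frames, ir_frames, estimate_registration):
    """Register L visible frames (L > 2) via their L infrared counterparts (sketch).

    vis_frames[0] and ir_frames[0] are the current frames; the remaining entries
    are earlier frames. estimate_registration is the hypothetical helper from the
    earlier sketch. Returns the current frame plus the warped earlier frames.
    """
    h, w = vis_frames[0].shape[:2]
    aligned = [vis_frames[0]]
    for vis_k, ir_k in zip(vis_frames[1:], ir_frames[1:]):
        # L-1 registration parameters: each earlier IR frame against the current IR frame.
        H = estimate_registration(ir_k, ir_frames[0])
        aligned.append(cv2.warpPerspective(vis_k, H, (w, h)))
    return aligned  # fusing these registered frames yields the denoised output
```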
In the technical solution of the embodiments of the present application, the registration transformation parameters are obtained by analyzing the invisible light images output by the second image sensor. Compared with calculating the registration transformation parameters directly from the original images output by the first image sensor, the invisible light images carry richer dark-region information and capture the features of real objects more accurately, so the calculated registration transformation parameters are more precise. After the visible light images are registered with these parameters, ghosting and smearing can be avoided in the subsequent multi-frame denoising. In dark scenes, the captured video itself is noisy and the dark-region information is unclear, so multi-frame denoising across adjacent frames is needed to remove noise and recover information; however, dark-region noise and low visibility hinder the effectiveness of multi-frame denoising. The use of invisible light images compensates for exactly this shortcoming, so that accurate registration transformation parameters can still be calculated even in very dark environments, enabling better temporal noise reduction.
Figure 9 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. As shown in Figure 9, the image processing device includes:
an acquisition unit 901, configured to acquire multiple visible light images and multiple invisible light images;
a parameter determination unit 902, configured to determine image registration parameters based on the multiple invisible light images; and
an image registration unit 903, configured to register the multiple visible light images based on the image registration parameters.
In some optional implementations of the present application, the acquisition unit 901 is configured to acquire multiple visible light images collected by a first image sensor and multiple invisible light images collected by a second image sensor, where the multiple visible light images correspond to the multiple invisible light images according to their collection times.
In some optional implementations of the present application, the multiple visible light images include a first image and a second image, and the multiple invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image;
the parameter determination unit 902 is configured to extract a first feature point set from the third image and a second feature point set from the fourth image; determine at least one pair of feature points with a matching relationship from the first feature point set and the second feature point set; and determine the image registration parameters based on the coordinate information of the at least one pair of feature points.
In some optional implementations of the present application, the parameter determination unit 902 is further configured to adjust the image registration parameters based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
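One possible way to apply such a calibration adjustment is sketched below, under the assumption that the calibration data is expressed as a 3x3 homography `C` mapping infrared image coordinates to visible-light image coordinates; both this representation of the calibration data and the composition used here are assumptions of the sketch, not details fixed by the embodiments.

```python
import numpy as np


def adjust_with_calibration(H_ir, C):
    """Adjust an IR-derived registration matrix with sensor calibration data (sketch).

    C is assumed to be a 3x3 homography mapping infrared image coordinates to
    visible-light image coordinates, obtained offline from the relative position
    of the two sensors. The composed matrix then acts directly on visible frames.
    """
    # visible -> infrared, apply the IR-derived registration, then infrared -> visible
    H_vis = C @ H_ir @ np.linalg.inv(C)
    return H_vis / H_vis[2, 2]  # normalize the homography scale
```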
In some optional implementations of the present application, the image registration unit 903 is configured to transform the second image based on the image registration parameters to obtain a fifth image registered with the first image.
In some optional implementations of the present application, the device further includes:
an image fusion unit 904, configured to perform image fusion processing on the fifth image and the first image to obtain a sixth image, where the sixth image is the first image with noise removed.
In some optional implementations of the present application, the image fusion unit 904 is configured to average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
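By way of illustration, this pixel-wise averaging reduces to a single weighted addition; the sketch below assumes 8-bit images of identical size.

```python
import cv2


def fuse_average(fifth_image, first_image):
    """Sixth image as the pixel-wise average of the registered reference frame
    (fifth image) and the current frame (first image); assumes 8-bit inputs."""
    # Equal weights, no offset: each output pixel is the mean of the two inputs.
    return cv2.addWeighted(first_image, 0.5, fifth_image, 0.5, 0)
```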
In some optional implementations of the present application, the image fusion unit 904 is configured to generate a motion mask image used to determine a motion region and a non-motion region; take the pixel values of the pixels in the motion region of the first image as the pixel values of the pixels in the motion region of the sixth image; and average the pixel values of the pixels in the non-motion region of the fifth image with the pixel values of the corresponding pixels in the non-motion region of the first image to obtain the pixel values of the pixels in the non-motion region of the sixth image.
In some optional implementations of the present application, the image fusion unit 904 is configured to compute, for each pixel, the difference between the pixel value in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image, where the region formed by the pixels of the motion mask image whose pixel values are greater than or equal to a threshold is the motion region, and the region formed by the pixels whose pixel values are less than the threshold is the non-motion region.
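A minimal sketch of this motion-mask fusion is given below. The absolute per-pixel difference is used as the mask value and the threshold value is an arbitrary illustrative choice; neither is mandated by the embodiments.

```python
import cv2
import numpy as np


def fuse_with_motion_mask(fifth_image, first_image, thresh=20):
    """Motion-aware fusion of the registered reference frame and the current frame.

    The motion mask is the per-pixel absolute difference between the two frames;
    pixels at or above `thresh` form the motion region. The threshold value and
    the use of the absolute difference are illustrative assumptions.
    """
    diff = cv2.absdiff(fifth_image, first_image)
    if diff.ndim == 3:
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)  # one mask value per pixel
    motion = diff >= thresh  # True inside the motion region

    averaged = cv2.addWeighted(first_image, 0.5, fifth_image, 0.5, 0)
    if first_image.ndim == 3:
        motion = motion[:, :, None]  # broadcast the mask over the color channels
    # Motion region: keep the current frame; non-motion region: average the two frames.
    return np.where(motion, first_image, averaged)
```

Keeping the current frame unchanged inside the motion region is what prevents the ghosting and smearing discussed above.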
In some optional implementations of the present application, the invisible light images are infrared images.
Those skilled in the art should understand that the functions implemented by the units of the image processing device shown in Figure 9 can be understood with reference to the foregoing description of the image processing method. The functions of the units of the image processing device shown in Figure 9 may be implemented by a program running on a processor or by specific logic circuits.
If the above image processing device of the embodiments of the present application is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a storage medium (for example, a computer-readable storage medium). Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiments of the present application also provide a computer program product storing computer-executable instructions which, when executed, implement the above methods of the embodiments of the present application.
Figure 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Figure 10, the electronic device may include one or more processors 1002 (only one is shown in the figure; the processor 1002 may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a field-programmable gate array (FPGA)), a memory 1004 for storing data, and a transmission device 1006 for communication functions. Those of ordinary skill in the art will understand that the structure shown in Figure 10 is merely illustrative and does not limit the structure of the above electronic device. For example, the electronic device may include more or fewer components than shown in Figure 10, or have a configuration different from that shown in Figure 10.
The memory 1004 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the methods in the embodiments of the present application. By running the software programs and modules stored in the memory 1004, the processor 1002 executes various functional applications and data processing, that is, implements the above methods. The memory 1004 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1004 may further include memory located remotely from the processor 1002, and such remote memory may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 1006 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the electronic device. In one example, the transmission device 1006 includes a network interface controller (NIC) that can be connected to other network devices through a base station to communicate with the Internet. In another example, the transmission device 1006 may be a radio frequency (RF) module used to communicate with the Internet wirelessly.
The technical solutions described in the embodiments of the present application may be combined arbitrarily provided there is no conflict.
In the several embodiments provided in this application, it should be understood that the disclosed methods and smart devices may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application.
Claims (18)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110221009.5A CN112950502B (en) | 2021-02-26 | 2021-02-26 | Image processing method and device, electronic equipment and storage medium |
| PCT/CN2021/137515 WO2022179251A1 (en) | 2021-02-26 | 2021-12-13 | Image processing method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110221009.5A CN112950502B (en) | 2021-02-26 | 2021-02-26 | Image processing method and device, electronic equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112950502A CN112950502A (en) | 2021-06-11 |
| CN112950502B true CN112950502B (en) | 2024-02-13 |
Family
ID=76246713
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110221009.5A Active CN112950502B (en) | 2021-02-26 | 2021-02-26 | Image processing method and device, electronic equipment and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN112950502B (en) |
| WO (1) | WO2022179251A1 (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112950502B (en) * | 2021-02-26 | 2024-02-13 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and storage medium |
| CN114792291B (en) * | 2022-03-24 | 2025-06-10 | 苏州工业园区智在天下科技有限公司 | Video processing method, device, terminal and computer readable storage medium |
| CN115361533B (en) * | 2022-08-19 | 2023-04-18 | 深圳市汇顶科技股份有限公司 | Image data processing method and electronic device |
| EP4623587A1 (en) * | 2022-11-22 | 2025-10-01 | Airy3d Inc. | Edge angle and baseline angle correction in depth imaging |
| CN116740155B (en) * | 2023-07-14 | 2025-08-29 | 中国科学院力学研究所 | A method for image registration in wind tunnel experiments of temperature-sensitive paint |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106548489A (en) * | 2016-09-20 | 2017-03-29 | 深圳奥比中光科技有限公司 | The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus |
| CN110490811A (en) * | 2019-05-31 | 2019-11-22 | 杭州海康威视数字技术股份有限公司 | Image noise reduction apparatus and image denoising method |
| CN111968057A (en) * | 2020-08-24 | 2020-11-20 | 浙江大华技术股份有限公司 | Image noise reduction method and device, storage medium and electronic device |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107918929B (en) * | 2016-10-08 | 2019-06-21 | 杭州海康威视数字技术股份有限公司 | A kind of image interfusion method, apparatus and system |
| CN112950502B (en) * | 2021-02-26 | 2024-02-13 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and storage medium |
2021
- 2021-02-26 CN CN202110221009.5A patent/CN112950502B/en active Active
- 2021-12-13 WO PCT/CN2021/137515 patent/WO2022179251A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| WO2022179251A1 (en) | 2022-09-01 |
| CN112950502A (en) | 2021-06-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112950502B (en) | Image processing method and device, electronic equipment and storage medium | |
| KR101121034B1 (en) | System and method for obtaining camera parameters from multiple images and computer program products thereof | |
| US9390511B2 (en) | Temporally coherent segmentation of RGBt volumes with aid of noisy or incomplete auxiliary data | |
| CN106991690B (en) | A Video Sequence Synchronization Method Based on Timing Information of Moving Objects | |
| KR101524548B1 (en) | Apparatus and method for alignment of images | |
| Hedborg et al. | Structure and motion estimation from rolling shutter video | |
| CN108230245B (en) | Image splicing method, image splicing device and electronic equipment | |
| WO2018176938A1 (en) | Method and device for extracting center of infrared light spot, and electronic device | |
| CN111382613B (en) | Image processing method, device, equipment and medium | |
| CN105049718A (en) | Image processing method and terminal | |
| CN106981078B (en) | Sight line correction method and device, intelligent conference terminal and storage medium | |
| WO2020029518A1 (en) | Monitoring video processing method, device and computer readable medium | |
| WO2016164166A1 (en) | Automated generation of panning shots | |
| WO2020156341A1 (en) | Method and apparatus for detecting moving target, and electronic device and storage medium | |
| CN109035307B (en) | Target tracking method and system for setting area based on natural light binocular vision | |
| CN110766706A (en) | Image fusion method and device, terminal equipment and storage medium | |
| CN113159229B (en) | Image fusion method, electronic equipment and related products | |
| CN113838151A (en) | Camera calibration method, device, equipment and medium | |
| Verstockt et al. | Silhouette-based multi-sensor smoke detection: coverage analysis of moving object silhouettes in thermal and visual registered images | |
| Huang et al. | Image registration among UAV image sequence and Google satellite image under quality mismatch | |
| JPWO2017029784A1 (en) | Image registration system, method and recording medium | |
| CN106651918B (en) | Foreground extraction method under dither background | |
| US9875549B2 (en) | Change detection in video data | |
| EP3407297B1 (en) | Method and device for determining a characteristic of a display device | |
| CN107087219A (en) | smart home tv |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |