CN112131968A - Change detection method for bi-temporal remote sensing images based on DCNN - Google Patents
- Publication number
- CN112131968A (application CN202010903557.1A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- change detection
- sensing image
- map
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Abstract
The invention discloses a DCNN-based change detection method for bi-temporal remote sensing images. A bi-temporal remote sensing image dataset is input into a deep convolutional neural network to generate bi-temporal feature maps; bilinear interpolation is applied to the feature maps so that their size matches that of the remote sensing images in the dataset; the Euclidean distance between the interpolated bi-temporal feature maps is computed and used to generate a difference image; a feature vector is extracted for each pixel block of the difference image, and a feature vector space is constructed from these vectors; the feature vector space is clustered, and a coarse change detection map is generated from the clustering result; finally, morphological filtering is applied to the coarse change detection map to generate the change detection map. The method effectively simplifies the process of change detection for the remote sensing images under inspection and improves both the detection quality and the detection efficiency.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a DCNN-based change detection method for bi-temporal remote sensing images.
Background
Change detection is a core process in many applications of remote sensing imagery. It identifies changes on the Earth's surface by processing two images of the same geographic area acquired at different times. Change detection has a wide range of uses, including land-use and land-cover change detection, risk assessment, and environmental surveys.
Based on hand-crafted features, a variety of algorithms have been proposed for the change detection problem, such as image quantification, principal component analysis, change vector analysis, expectation maximization, and Markov random fields. Computing such hand-designed features requires parameters such as size, scale, and orientation to be chosen with great care. Moreover, feature selection itself is a difficult aspect of remote sensing image change detection.
Publication CN108830828A discloses a remote sensing image change detection method and device in which preset filters are applied to a first and a second remote sensing image to obtain first and second filtered images, and a difference image identifying the changes between the two temporal phases is computed from the filtered images. Although this method addresses the low accuracy of the conventional differencing approach caused by noise in remote sensing images, it derives the difference image directly from the original bi-temporal images; the resulting difference image has limited accuracy, errors accumulate easily, and a good change detection result cannot be achieved. Publication CN107992891A discloses a multispectral remote sensing image change detection method based on spectral vector analysis: two preprocessed multispectral images of the same area at different times are input; principal component analysis reduces the dimensionality of the difference space constructed by change vector analysis, and the first principal component gives a first difference map; the angle between the spectral vectors of the bi-temporal images gives a second difference map; the information entropy of the two difference maps is computed to derive fusion weights, and a weighted sum yields an improved difference map; spatial features are then described, and spectral clustering yields the change detection result. Although this method can to some extent suppress interference with the change information from factors such as illumination and radiation, constructing the difference space directly with principal component analysis places high demands on preprocessing of the original remote sensing images and requires more complex preprocessing steps. Traditional solutions therefore often suffer from complicated detection pipelines and poor detection results.
Summary of the Invention
In view of the above problems, the present invention proposes a DCNN-based change detection method for bi-temporal remote sensing images.
To achieve the object of the present invention, a DCNN-based change detection method for bi-temporal remote sensing images is provided, comprising the following steps:
S10: constructing a bi-temporal remote sensing image dataset including the remote sensing images to be detected;
S30: inputting the bi-temporal remote sensing image dataset into a pre-built deep convolutional neural network to generate the bi-temporal feature maps corresponding to each remote sensing image in the dataset;
S40: applying bilinear interpolation to the bi-temporal feature maps so that their size matches that of the remote sensing images in the dataset;
S50: computing the Euclidean distance between the interpolated bi-temporal feature maps, generating a difference image from the Euclidean distance, extracting a feature vector for each pixel block of the difference image, and constructing a feature vector space from these vectors;
S60: clustering the feature vector space and generating a coarse change detection map from the clustering result;
S70: applying morphological filtering to the coarse change detection map to generate the change detection map.
In one embodiment, before step S30, the method further comprises:
S20: building a deep convolutional neural network whose backbone is based on VGG19, and loading weights pre-trained on ImageNet as the network's weight parameters to complete the construction of the deep convolutional neural network.
In one embodiment, computing the Euclidean distance between the interpolated bi-temporal feature maps comprises computing:
DI_i(x,y) = sqrt( Σ_{z=0}^{511} ( feature_i^T1(x,y,z) − feature_i^T2(x,y,z) )^2 )
where feature_i^T1(x,y,z) denotes the pixel value of the time-T1 feature map of the bi-temporal pair, feature_i^T2(x,y,z) denotes the pixel value of the time-T2 feature map, and DI_i(x,y) denotes the Euclidean distance.
In one embodiment, clustering the feature vector space and generating a coarse change detection map from the clustering result comprises:
using k-means clustering with k = 2 to divide the feature vector space into two clusters, computing the Euclidean distance between each pixel's feature vector and the two cluster mean vectors, and assigning each pixel to the cluster whose mean is nearest, thereby generating the coarse change detection map.
In one embodiment, applying morphological filtering to the coarse change detection map to generate the change detection map comprises:
performing an erosion operation on the coarse change detection map to filter out noise pixels and generate the change detection map.
The above DCNN-based change detection method for bi-temporal remote sensing images constructs a bi-temporal remote sensing image dataset including the remote sensing images to be detected, inputs the dataset into a pre-built deep convolutional neural network to generate the bi-temporal feature maps of each remote sensing image, applies bilinear interpolation so that the feature maps match the size of the remote sensing images in the dataset, computes the Euclidean distance between the interpolated feature maps, generates a difference image from it, extracts a feature vector for each pixel block of the difference image, builds a feature vector space from these vectors, clusters the space, generates a coarse change detection map from the clustering result, and applies morphological filtering to obtain the final change detection map. This realizes change detection for the remote sensing images under inspection, effectively simplifies the change detection process, and improves both detection quality and efficiency.
Brief Description of the Drawings
FIG. 1 is a flowchart of a DCNN-based change detection method for bi-temporal remote sensing images according to one embodiment;
FIG. 2 is a framework diagram of DCNN-based bi-temporal remote sensing image change detection according to one embodiment;
FIG. 3 is a framework diagram of the deep neural network according to one embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present application, not to limit it.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. It is understood, explicitly and implicitly, by those skilled in the art that the embodiments described herein may be combined with other embodiments.
Referring to FIG. 1, which is a flowchart of a DCNN-based change detection method for bi-temporal remote sensing images according to one embodiment, the method comprises the following steps:
S10: construct a bi-temporal remote sensing image dataset including the remote sensing images to be detected.
This step constructs a bi-temporal remote sensing image dataset of the images to be detected at two times (e.g., time T1 and time T2) and produces the corresponding sample label set.
S30: input the bi-temporal remote sensing image dataset into a pre-built deep convolutional neural network to generate the bi-temporal feature maps corresponding to each remote sensing image in the dataset.
S40: apply bilinear interpolation to the bi-temporal feature maps so that their size matches that of the remote sensing images in the dataset.
S50: compute the Euclidean distance between the interpolated bi-temporal feature maps, generate a difference image from the Euclidean distance, extract a feature vector for each pixel block of the difference image, and construct a feature vector space from these vectors.
S60: cluster the feature vector space and generate a coarse change detection map from the clustering result.
S70: apply morphological filtering to the coarse change detection map to generate the change detection map.
The above DCNN-based change detection method for bi-temporal remote sensing images constructs a bi-temporal remote sensing image dataset including the remote sensing images to be detected, inputs the dataset into a pre-built deep convolutional neural network to generate the bi-temporal feature maps of each remote sensing image, applies bilinear interpolation so that the feature maps match the size of the remote sensing images in the dataset, computes the Euclidean distance between the interpolated feature maps, generates a difference image from it, extracts a feature vector for each pixel block of the difference image, builds a feature vector space from these vectors, clusters the space, generates a coarse change detection map from the clustering result, and applies morphological filtering to obtain the final change detection map. This realizes change detection for the remote sensing images under inspection, effectively simplifies the change detection process, and improves both detection quality and efficiency.
In one embodiment, before step S30, the method further comprises:
S20: building a deep convolutional neural network whose backbone is based on VGG19, and loading weights pre-trained on ImageNet as the network's weight parameters to complete the construction of the deep convolutional neural network.
ImageNet is a large-scale visual database containing a very large number of images.
Specifically, the structure of the above VGG19-based deep convolutional neural network may comprise:
(2.1) a first stage defining two convolutional layers with 3*3*64 kernels, stride 1, and rectified linear unit (ReLU) activation, followed by a pooling layer using max pooling;
(2.2) a second stage defining two convolutional layers with 3*3*128 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(2.3) a third stage defining four convolutional layers with 3*3*256 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(2.4) a fourth stage defining four convolutional layers with 3*3*512 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(2.5) a fifth stage defining four convolutional layers with 3*3*512 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(2.6) a sixth layer that is fully connected;
(2.7) a seventh layer that is fully connected;
(2.8) an eighth layer that is fully connected.
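As a rough illustration, stages (2.1) through (2.4) above can be sketched in PyTorch (the framework choice and the random stand-in input are assumptions of this sketch; the patent does not name a framework). Only the first four stages are built here, since the method extracts the fourth stage's output; in practice the ImageNet-pretrained weights would be loaded.

```python
import torch
import torch.nn as nn

def vgg_stage(in_c, out_c, n_convs):
    """One VGG stage: n_convs 3x3 conv+ReLU layers followed by 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers.append(nn.Conv2d(in_c if i == 0 else out_c, out_c, 3, stride=1, padding=1))
        layers.append(nn.ReLU(inplace=True))
    layers.append(nn.MaxPool2d(2))
    return layers

# Stages (2.1)-(2.4): 2, 2, 4, 4 conv layers with 64, 128, 256, 512 channels.
extractor = nn.Sequential(
    *vgg_stage(3, 64, 2),
    *vgg_stage(64, 128, 2),
    *vgg_stage(128, 256, 4),
    *vgg_stage(256, 512, 4),
)

x = torch.randn(1, 3, 256, 256)  # stand-in for one 256x256 three-channel image
with torch.no_grad():
    feat = extractor(x)
print(tuple(feat.shape))  # (1, 512, 16, 16): four poolings halve 256 to 16
```

Each of the four pooling layers halves the spatial size, so a 256×256 input yields the 16×16×512 fourth-stage feature map described later in the embodiment.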
In one example, after the bi-temporal remote sensing image dataset is input into the above deep convolutional neural network, the following process may be performed:
Let the i-th image in the dataset be a three-channel image, represented as:
Image_i(x,y,z) = {(x,y,z) | 0 ≤ x < h, 0 ≤ y < w, 0 ≤ z < 3}
where h denotes the image height and w the image width. The image is fed into the deep convolutional neural network, and the feature map output by the fourth stage of the network is extracted:
feature_i(x,y,z) = {(x,y,z) | 0 ≤ x < h′, 0 ≤ y < w′, 0 ≤ z < 512}
where h′ denotes the height and w′ the width of the feature map output by the fourth stage of the deep convolutional neural network.
In one embodiment, computing the Euclidean distance between the interpolated bi-temporal feature maps comprises computing:
DI_i(x,y) = sqrt( Σ_{z=0}^{511} ( feature_i^T1(x,y,z) − feature_i^T2(x,y,z) )^2 )
where feature_i^T1(x,y,z) denotes the pixel value of the time-T1 feature map of the bi-temporal pair, feature_i^T2(x,y,z) denotes the pixel value of the time-T2 feature map, and DI_i(x,y) denotes the Euclidean distance.
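A minimal NumPy sketch of this step, using tiny 4×4×3 stand-ins for the 256×256×512 interpolated feature maps (the array shapes and values here are illustrative assumptions):

```python
import numpy as np

# Stand-ins for the interpolated bi-temporal feature maps, shape (h, w, channels).
f_t1 = np.zeros((4, 4, 3))
f_t2 = np.full((4, 4, 3), 2.0)

# Per-pixel Euclidean distance across the channel axis gives the difference image.
di = np.sqrt(np.sum((f_t1 - f_t2) ** 2, axis=-1))
print(di.shape)  # (4, 4): one distance per pixel, e.g. di[0, 0] = sqrt(3 * 2^2)
```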
In one example, extracting the feature vector of each pixel block of the difference image and constructing the feature vector space from these vectors may comprise:
(6.1) expressing the difference image DI_i as:
DI_i = {DI_i(x,y) | 0 ≤ x < n, 0 ≤ y < n}
where n is 256;
(6.2) generating non-overlapping h×h pixel blocks x_d in the above difference image DI_i, where h is chosen as a constant greater than 2;
(6.3) flattening each x_d into a row vector x′_d, collecting all x′_d into a vector set X_d, and applying principal component analysis on X_d to build the feature vector space:
each x′_d contains h×h elements, and X_d has shape h^2 × h^2; all features in X_d are centred to generate X′_d, and the covariance matrix C of X′_d is computed, with entries
cov(f_p, f_q) = E{[f_p − E(f_p)][f_q − E(f_q)]};
from Cu = λu, the eigenvalues λ of the covariance matrix C and the corresponding eigenvectors u are computed and sorted in descending order of eigenvalue, and the leading eigenvalue-eigenvector pairs are selected to form the feature vector space;
(6.4) projecting the h×h pixel block around each pixel of DI_i onto the feature vector space built in (6.3), establishing the feature vector space FVS over the whole DI_i.
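The PCA construction in (6.3) can be sketched with NumPy as follows (the block size h, the number of blocks, and the number of retained components are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
h = 4
X = rng.normal(size=(100, h * h))   # 100 flattened h x h pixel blocks

Xc = X - X.mean(axis=0)             # centre every feature (column)
C = np.cov(Xc, rowvar=False)        # h^2 x h^2 covariance matrix

# Eigendecomposition C u = lambda u; eigh returns eigenvalues in ascending order,
# so reverse to get them in descending order as the method requires.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]

k = 5                               # retain the leading k eigenvector directions
basis = eigvecs[:, order[:k]]

fvs = Xc @ basis                    # project each block into the eigenspace
print(fvs.shape)  # (100, 5): one k-dimensional feature vector per block
```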
In one embodiment, clustering the feature vector space and generating a coarse change detection map from the clustering result comprises:
using k-means clustering with k = 2 to divide the feature vector space into two clusters, computing the Euclidean distance between each pixel's feature vector and the two cluster mean vectors, and assigning each pixel to the cluster whose mean is nearest, thereby generating the coarse change detection map.
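A minimal sketch of the k = 2 k-means step on synthetic feature vectors (the initialisation scheme and iteration count are assumptions; any standard k-means implementation would serve):

```python
import numpy as np

def kmeans2(fvs, iters=20):
    """Minimal k = 2 k-means: assign each vector to its nearest cluster mean."""
    centroids = np.stack([fvs[0], fvs[-1]]).astype(float)  # deterministic init
    labels = np.zeros(len(fvs), dtype=int)
    for _ in range(iters):
        # Euclidean distance from every vector to both centroids.
        d = np.linalg.norm(fvs[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)            # nearest-mean assignment
        for c in (0, 1):
            if np.any(labels == c):
                centroids[c] = fvs[labels == c].mean(axis=0)
    return labels

# Two well-separated groups of "pixel" feature vectors.
pts = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
labels = kmeans2(pts)
print(labels[:10].sum(), labels[10:].sum())  # 0 10: one cluster per group
```

In the method above, one of the two clusters corresponds to changed pixels and the other to unchanged pixels, yielding the coarse change detection map.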
Further, applying morphological filtering to the coarse change detection map to generate the change detection map comprises:
performing an erosion operation on the coarse change detection map to filter out noise pixels and generate the change detection map.
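The erosion step can be sketched as a plain NumPy 3×3 binary erosion (the structuring-element size is an assumption; the patent does not specify one):

```python
import numpy as np

def erode(mask, k=3):
    """k x k binary erosion: a pixel survives only if its whole neighbourhood is 1."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

m = np.zeros((7, 7), dtype=int)
m[2:5, 2:5] = 1      # a solid 3x3 changed region
m[0, 0] = 1          # an isolated noise pixel
e = erode(m)
print(e.sum(), e[3, 3], e[0, 0])  # 1 1 0: noise removed, region centre kept
```

The isolated noise pixel is filtered out while the interior of the genuine changed region survives, which is exactly the role the erosion plays here.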
In one embodiment, the process of constructing the bi-temporal remote sensing image dataset including the remote sensing images to be detected may comprise:
constructing a remote sensing image dataset Image = [Image_0, Image_1, ..., Image_i] and producing the corresponding sample label set Lable = [Lable_0, Lable_1, ..., Lable_i], where i denotes the maximum number of images contained in the constructed image dataset and the corresponding label set, Image_i denotes the i-th image in the constructed remote sensing image dataset, and Lable_i denotes the i-th label image in the produced sample label set. Each original image (e.g., each remote sensing image to be detected) has a corresponding label image. A label may be a binary image in which changed pixels are shown in white and unchanged pixels in black.
In one embodiment, the process of applying bilinear interpolation to the bi-temporal feature maps comprises:
performing bilinear interpolation according to the following expression:
f(x,y) = [ q_11 (x_2 − x)(y_2 − y) + q_21 (x − x_1)(y_2 − y) + q_12 (x_2 − x)(y − y_1) + q_22 (x − x_1)(y − y_1) ] / [ (x_2 − x_1)(y_2 − y_1) ]
The feature map after bilinear interpolation is:
feature_i(x,y,z) = {(x,y,z) | 0 ≤ x < h, 0 ≤ y < w, 0 ≤ z < 512}
where h denotes the height and w the width of the original image; x and y denote the abscissa and ordinate of the pixel to be computed in the feature map; q_11, q_12, q_21, and q_22 denote the four pixels nearest to that pixel, and x_1, x_2, y_1, y_2 denote their abscissae and ordinates. The final result of the computation is therefore the value of the pixel to be computed.
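A NumPy sketch of the four-neighbour formula above, upsampling a tiny 2×2 map (the align-corners style coordinate mapping is an assumption of this sketch):

```python
import numpy as np

def bilinear_upsample(feat, out_h, out_w):
    """Upsample a (h, w) map with the four-neighbour bilinear formula."""
    h, w = feat.shape
    ys = np.linspace(0, h - 1, out_h)  # map output rows onto input coordinates
    xs = np.linspace(0, w - 1, out_w)
    out = np.empty((out_h, out_w))
    for i, y in enumerate(ys):
        y1 = int(np.floor(y))
        y2 = min(y1 + 1, h - 1)
        for j, x in enumerate(xs):
            x1 = int(np.floor(x))
            x2 = min(x1 + 1, w - 1)
            dy, dx = y - y1, x - x1
            q11, q21 = feat[y1, x1], feat[y1, x2]   # the four nearest pixels
            q12, q22 = feat[y2, x1], feat[y2, x2]
            out[i, j] = (q11 * (1 - dx) * (1 - dy) + q21 * dx * (1 - dy)
                         + q12 * (1 - dx) * dy + q22 * dx * dy)
    return out

small = np.array([[0.0, 1.0], [2.0, 3.0]])
big = bilinear_upsample(small, 4, 4)
print(big[0, 0], big[-1, -1])  # 0.0 3.0: corner values are preserved
```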
In this embodiment, the feature maps of the original bi-temporal images are obtained through the deep neural network. Because of the pooling layers in the network, the image size is halved after each pooling layer. To keep the output image the same size as the input image and to preserve detection accuracy, bilinear interpolation is applied to each of the resulting bi-temporal feature maps, enlarging them back to the original image size.
In one embodiment, the framework of the above DCNN-based change detection method for bi-temporal remote sensing images may be as shown in FIG. 2, comprising the following process:
(1) Build a deep convolutional neural network whose backbone is VGG19, and select the weight parameters pre-trained on ImageNet as the weight parameters of the network. Construct a remote sensing image dataset Image = [Image_0, Image_1, ..., Image_i] and produce the corresponding sample label set Lable = [Lable_0, Lable_1, ..., Lable_i], where i denotes the maximum number of images contained in the constructed image dataset and the corresponding label set, Image_i denotes the i-th image in the constructed remote sensing image dataset, and Lable_i denotes the i-th label image in the produced sample label set. The deep convolutional neural network is shown in FIG. 2, and the parameters of each layer are set as follows:
(a) a first stage defining two convolutional layers with 3*3*64 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(b) a second stage defining two convolutional layers with 3*3*128 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(c) a third stage defining four convolutional layers with 3*3*256 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(d) a fourth stage defining four convolutional layers with 3*3*512 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(e) a fifth stage defining four convolutional layers with 3*3*512 kernels, stride 1, and ReLU activation, followed by a pooling layer using max pooling;
(f) a sixth layer that is fully connected;
(g) a seventh layer that is fully connected;
(h) an eighth layer that is fully connected.
(2)将构建的遥感图像数据集输入到上述搭建好的深度神经网络中,生成所述的特征图。(2) Input the constructed remote sensing image dataset into the above constructed deep neural network to generate the feature map.
(2.1) In this embodiment, the remote sensing images in the dataset are 256×256 pixels, and the i-th image is a three-channel image represented as follows:
Imagei(x, y, z) = {(x, y, z) | 0 ≤ x < h, 0 ≤ y < w, 0 ≤ z < 3}
where h is the height of the image and w is its width, both equal to 256;
(2.2) The feature map output by the above deep convolutional neural network is:
featurei(x, y, z) = {(x, y, z) | 0 ≤ x < h′, 0 ≤ y < w′, 0 ≤ z < 512}
where h′ and w′ are the height and width of the feature map output by the deep convolutional neural network. In this embodiment, the feature map output by the fourth block of the network is extracted, so h′ and w′ are both 16;
(2.3) The feature map output by the fourth block of the deep convolutional neural network is enlarged to the size of the original input image by bilinear interpolation; the interpolated feature map is:
featurei(x, y, z) = {(x, y, z) | 0 ≤ x < h, 0 ≤ y < w, 0 ≤ z < 512}
where h is the height of the original image and w is its width, both equal to 256.
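A minimal NumPy sketch of this upsampling step, assuming align-corners bilinear interpolation (the patent does not specify the corner-alignment convention), with 8 channels instead of 512 only to keep the demonstration small:

```python
import numpy as np

def bilinear_upsample(feat, out_h, out_w):
    """Upsample an (h', w', c) feature map to (out_h, out_w, c) bilinearly."""
    in_h, in_w, _ = feat.shape
    # Map each output pixel to fractional input coordinates (align-corners).
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None, None]  # fractional vertical weights
    wx = (xs - x0)[None, :, None]  # fractional horizontal weights
    # Blend the four surrounding input pixels for every output location.
    top = feat[y0][:, x0] * (1 - wx) + feat[y0][:, x1] * wx
    bot = feat[y1][:, x0] * (1 - wx) + feat[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(0)
feat = rng.random((16, 16, 8)).astype(np.float32)  # 16x16 block-4 style map
up = bilinear_upsample(feat, 256, 256)
print(up.shape)  # (256, 256, 8)
```

With align-corners mapping, the four corner values of the input map are preserved exactly in the upsampled output.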
(3) Compute the Euclidean distance on the above feature maps according to the following expression to generate the difference image:

DIi(x, y) = √( Σz=0…511 ( featureiT1(x, y, z) − featureiT2(x, y, z) )² )

where featureiT1(x, y, z) is the pixel value of the T1-phase feature map in the bitemporal feature-map pair, featureiT2(x, y, z) is the pixel value of the T2-phase feature map, and DIi(x, y) is the Euclidean distance.
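The channel-wise Euclidean distance of step (3) can be sketched in a few lines of NumPy (an illustrative example, not the patent's code; the toy feature maps and the single changed pixel are invented for the demonstration):

```python
import numpy as np

def difference_image(feat_t1, feat_t2):
    """DI(x, y) = sqrt(sum_z (feat_t1 - feat_t2)^2) over the channel axis z."""
    return np.sqrt(np.sum((feat_t1 - feat_t2) ** 2, axis=-1))

# Two identical bitemporal feature maps, except one "changed" pixel.
f1 = np.zeros((4, 4, 512), dtype=np.float32)
f2 = np.zeros((4, 4, 512), dtype=np.float32)
f2[0, 0, :2] = [3.0, 4.0]  # changed pixel: distance sqrt(9 + 16) = 5
di = difference_image(f1, f2)
print(di[0, 0], di[1, 1])  # 5.0 at the changed pixel, 0.0 elsewhere
```

Each pixel of the resulting difference image collapses the 512 feature channels into one scalar change magnitude.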
(4) Apply the principal component analysis (PCA) algorithm to the above difference image to construct a feature vector space; then cluster the feature vectors with the k-means clustering algorithm to generate a coarse change map, and apply an erosion operation to the coarse change map to generate the final change detection map. Comparing the predicted value predict of every image in the test set with the Label value of the corresponding original remote sensing image yields the detection accuracy over the entire test set.
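Step (4) chains three standard operations. Below is a minimal NumPy sketch of this pipeline under the assumption of the common PCA-plus-k-means recipe for change maps; the patch size (4), component count (3), 3×3 erosion neighbourhood, and min/max-norm k-means initialisation are illustrative choices, not values given by the patent.

```python
import numpy as np

def pca_features(di, patch=4, n_comp=3):
    """Fit PCA on non-overlapping patches, then project every pixel's patch."""
    h, w = di.shape
    vecs = np.array([di[i:i + patch, j:j + patch].ravel()
                     for i in range(0, h - patch + 1, patch)
                     for j in range(0, w - patch + 1, patch)])
    mean = vecs.mean(axis=0)
    _, _, vt = np.linalg.svd(vecs - mean, full_matrices=False)
    pad = np.pad(di, ((0, patch - 1), (0, patch - 1)), mode="edge")
    pix = np.array([pad[i:i + patch, j:j + patch].ravel() - mean
                    for i in range(h) for j in range(w)])
    return pix @ vt[:n_comp].T

def kmeans2(x, iters=20):
    """Two-class k-means with deterministic min/max-norm initialisation."""
    norms = np.linalg.norm(x, axis=1)
    centers = np.stack([x[np.argmin(norms)], x[np.argmax(norms)]])
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean(axis=0)
    return labels

def erode(mask):
    """3x3 binary erosion: keep a pixel only if its whole neighbourhood is set."""
    h, w = mask.shape
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = out & p[dy:dy + h, dx:dx + w]
    return out

di = np.zeros((16, 16))
di[4:12, 4:12] = 10.0  # toy difference image with one changed square
feats = pca_features(di)
labels2d = kmeans2(feats).reshape(di.shape)
means = [di[labels2d == k].mean() if np.any(labels2d == k) else -1.0
         for k in (0, 1)]
coarse = labels2d == int(np.argmax(means))  # higher-mean cluster = "changed"
final = erode(coarse)
```

In this toy run, PCA is fitted on the difference image's patches, every pixel's neighbourhood is projected into the eigenvector space, k-means splits the pixels into two clusters, the cluster with the larger mean difference is taken as the changed class, and the erosion removes isolated false alarms from the coarse change map.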
This embodiment has the following beneficial effects:
(1) Features are extracted from the original remote sensing images directly by the deep convolutional neural network, which avoids the complicated steps of traditional hand-crafted feature extraction, and the extracted features are of high accuracy.
(2) Constructing the difference image from features extracted by the deep neural network suppresses the influence of noise in the original images, avoids error accumulation, and improves change detection performance.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
It should be noted that the terms "first", "second", and "third" in the embodiments of the present application merely distinguish similar objects and do not denote a specific ordering of those objects. It is to be understood that, where permitted, "first", "second", and "third" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
The terms "comprising" and "having", and any variations thereof, in the embodiments of the present application are intended to cover non-exclusive inclusion. For example, a process, method, apparatus, product, or device comprising a series of steps or modules is not limited to the listed steps or modules, but optionally also includes steps or modules that are not listed, or optionally also includes other steps or modules inherent to such a process, method, product, or device.
The above-described embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be pointed out that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010903557.1A CN112131968A (en) | 2020-09-01 | 2020-09-01 | Change detection method of dual-phase remote sensing image based on DCNN |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN112131968A true CN112131968A (en) | 2020-12-25 |
Family
ID=73847093
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010903557.1A Pending CN112131968A (en) | 2020-09-01 | 2020-09-01 | Change detection method of dual-phase remote sensing image based on DCNN |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112131968A (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103971364A (en) * | 2014-04-04 | 2014-08-06 | 西南交通大学 | Remote sensing image variation detecting method on basis of weighted Gabor wavelet characteristics and two-stage clusters |
| CN108596108A (en) * | 2018-04-26 | 2018-09-28 | 中国科学院电子学研究所 | Method for detecting change of remote sensing image of taking photo by plane based on the study of triple semantic relation |
- 2020-09-01 CN CN202010903557.1A patent/CN112131968A/en active Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103971364A (en) * | 2014-04-04 | 2014-08-06 | 西南交通大学 | Remote sensing image variation detecting method on basis of weighted Gabor wavelet characteristics and two-stage clusters |
| CN108596108A (en) * | 2018-04-26 | 2018-09-28 | 中国科学院电子学研究所 | Method for detecting change of remote sensing image of taking photo by plane based on the study of triple semantic relation |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112784777A (en) * | 2021-01-28 | 2021-05-11 | 西安电子科技大学 | Unsupervised hyperspectral image change detection method based on antagonistic learning |
| CN112784777B (en) * | 2021-01-28 | 2023-06-02 | 西安电子科技大学 | Unsupervised hyperspectral image change detection method based on countermeasure learning |
| CN114120141A (en) * | 2021-11-23 | 2022-03-01 | 深圳航天智慧城市系统技术研究院有限公司 | All-weather remote sensing monitoring automatic analysis method and system thereof |
| CN116403123A (en) * | 2023-04-25 | 2023-07-07 | 北京数慧时空信息技术有限公司 | Remote sensing image change detection method based on deep convolution network |
| CN116403123B (en) * | 2023-04-25 | 2025-06-06 | 北京数慧时空信息技术有限公司 | Remote sensing image change detection method based on deep convolutional network |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Haut et al. | A new deep generative network for unsupervised remote sensing single-image super-resolution | |
| Xie et al. | Hyperspectral image super-resolution using deep feature matrix factorization | |
| Yuan et al. | Factorization-based texture segmentation | |
| CN107992891B (en) | Multispectral remote sensing image change detection method based on spectral vector analysis | |
| CN112634163A (en) | Method for removing image motion blur based on improved cycle generation countermeasure network | |
| CN103714526B (en) | Based on the super-resolution image reconstruction method that sparse multiple manifold embeds | |
| CN112131968A (en) | Change detection method of dual-phase remote sensing image based on DCNN | |
| CN112329818B (en) | Hyperspectral image non-supervision classification method based on graph convolution network embedded characterization | |
| CN111914909B (en) | Hyperspectral change detection method based on space-spectrum combined three-direction convolution network | |
| CN115565045B (en) | Hyperspectral and multispectral image fusion method based on multiscale spatial spectrum transformation | |
| Mignotte | A bicriteria-optimization-approach-based dimensionality-reduction model for the color display of hyperspectral images | |
| CN109300115B (en) | Object-oriented multispectral high-resolution remote sensing image change detection method | |
| CN110570395A (en) | Hyperspectral Anomaly Detection Method Based on Joint Space-Spectrum Cooperative Representation | |
| Wu et al. | A novel approach to subpixel land-cover change detection based on a supervised back-propagation neural network for remotely sensed images with different resolutions | |
| Liu et al. | An efficient residual learning neural network for hyperspectral image superresolution | |
| CN105701493A (en) | Image extraction and foreground estimation method and system based on hierarchical graph | |
| Zhu et al. | Image interpolation based on non-local geometric similarities | |
| CN108335265B (en) | Rapid image super-resolution reconstruction method and device based on sample learning | |
| CN112131969A (en) | Remote sensing image change detection method based on fully convolutional neural network | |
| Thuan et al. | Edge-focus thermal image super-resolution using generative adversarial network | |
| US8208731B2 (en) | Image descriptor quantization | |
| CN115330650A (en) | Knowledge graph-based multi-source heterogeneous remote sensing image fusion method | |
| WO2005050533A2 (en) | Image clustering with metric, local linear structure, and affine symmetry | |
| Ahmadian et al. | Single image super-resolution with self-organization neural networks and image laplace gradient operator | |
| Diderot et al. | An efficient fuzzy C-means clustering based image dissection algorithm for satellite images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | | Application publication date: 20201225 |