
CN112287871B - Remote sensing image extraction method for nearshore aquaculture area based on multi-feature and spectral fusion - Google Patents


Info

Publication number
CN112287871B
CN112287871B (application CN202011257631.3A)
Authority
CN
China
Prior art keywords
remote sensing
sensing image
extraction
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202011257631.3A
Other languages
Chinese (zh)
Other versions
CN112287871A (en)
Inventor
付东洋
钟雅枫
余果
黄浩恩
刘大召
徐华兵
罗亚飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Ocean University
Original Assignee
Guangdong Ocean University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Ocean University filed Critical Guangdong Ocean University
Priority to CN202011257631.3A priority Critical patent/CN112287871B/en
Publication of CN112287871A publication Critical patent/CN112287871A/en
Application granted granted Critical
Publication of CN112287871B publication Critical patent/CN112287871B/en

Classifications

    • G06V 20/13 Satellite images
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/443 Local feature extraction by matching or filtering
    • G06T 2207/10032 Satellite or aerial image; remote sensing
    • G06T 2207/20081 Training; learning
    • G06T 2207/30181 Earth observation
    • Y02A 40/81 Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting nearshore aquaculture areas from remote sensing images based on the fusion of multiple features and spectra. First, the input remote sensing image is preprocessed and a target ground-object feature set is constructed from characteristic spectra derived with feature-index methods. Second, an improved constrained energy minimization algorithm enhances the target ground objects of the aquaculture area while weakening background spectral information; the Otsu method then computes a threshold which, combined with a single-band threshold, yields a preliminary extraction of the target ground objects. Finally, according to the ground-object characteristics, interfering objects are removed from the preliminary result in a customized way using a gray-level co-occurrence texture matrix or an object-oriented method, and the final aquaculture-area extraction result is output. Compared with conventional target-detection methods, the method effectively overcomes background interference even in aquaculture areas where the "different objects, same spectrum" phenomenon is pronounced, producing higher-precision extraction results for raft culture areas and better meeting high-precision extraction requirements.

Description

Remote sensing image extraction method for nearshore aquaculture areas based on multi-feature and spectral fusion

Technical Field

The invention belongs to the fields of remote sensing technology and image processing, and in particular relates to a method for extracting remote sensing images of nearshore aquaculture areas based on multi-feature and spectral fusion.

Background Art

Since the Second Industrial Revolution, fishery resources have drawn increasing attention from countries and regions as a supplement to terrestrial food resources. As an important part of fishery resources, aquaculture offers greater autonomy and economic benefit, but it affects the natural environment more deeply: as excrement continuously settles in the water and residual feed accumulates, the ammonia-nitrogen content of aquaculture waters keeps rising, eutrophication intensifies, and the natural environment of the aquaculture area deteriorates. How to farm and plan aquaculture areas scientifically is therefore a central problem for the sustainable development of fishery resources, and effectively grasping the spatial layout of aquaculture areas and improving their dynamic monitoring technology are key links in the rational planning and scientific governance of fishery resources.

Compared with traditional monitoring techniques, remote sensing can observe surface objects macroscopically, continuously, and automatically by designing satellite orbits and revisit cycles, overcoming the long cycles, heavy labor and time cost, and strong human interference of traditional monitoring, and has become an important means of dynamically monitoring aquaculture areas. Extraction methods for aquaculture areas based on remote sensing mainly include visual interpretation, object-oriented methods, and approaches based on per-pixel spectral characteristics and texture analysis; scholars have achieved fairly good extraction results and application prospects in the corresponding extraction experiments, greatly promoting the development of remote sensing technology. Visual interpretation is the most commonly used, but its accuracy depends largely on the interpreter's own experience; it is subjective, labor-intensive, and time-consuming, and thus ill-suited to the long-term, dynamic monitoring that aquaculture areas require.

Object-oriented extraction jointly considers the spatial, spectral, texture, and shape features of the classification objects in remote sensing images and reduces the "salt-and-pepper" noise that traditional image extraction methods struggle with. However, the subjectivity of the segmentation scale and the "different objects, same spectrum" problem of some pixels may lower the extraction accuracy for aquaculture areas. Pixel-based extraction makes good use of the spectral reflectance of aquaculture areas to highlight the difference between aquaculture and non-aquaculture regions, and thresholds allow automatic extraction. Yet, owing to differences between sensors and their parameters, no aquaculture feature index exists that works for most satellite imagery, and within a single aquaculture area differing water-quality factors cause the reflectance of the area to vary. This adds difficulty to pixel-based extraction and makes it hard for the pixel method to extract aquaculture areas independently and accurately; only in combination with other methods can the spectral characteristics of aquaculture areas be exploited fully and the accuracy of extraction algorithms be improved.

To address the shortcomings of existing algorithms, the present invention proposes a method for extracting remote sensing images of nearshore aquaculture areas based on multi-feature and spectral fusion, constructing a multi-feature analysis approach that combines the spectral features of ground objects in remote sensing images, threshold methods, and the gray-level co-occurrence texture matrix, thereby achieving precise extraction of aquaculture areas.

Summary of the Invention

The purpose of the present invention is to provide a method for extracting remote sensing images of nearshore aquaculture areas based on multi-feature and spectral fusion, so as to solve the problems described above.

The technical solution that realizes the object of the present invention is as follows:

A method for extracting remote sensing images of nearshore aquaculture areas based on multi-feature and spectral fusion, characterized in that it comprises the following steps:

Step 1: Input an original remote sensing image.

Step 2: Preprocess the input remote sensing image; for the ground-object features in the processed image, extract characteristic spectra with feature-index methods and construct a target ground-object feature set from the resulting characteristic spectra.

Step 3: Construct a finite impulse response (FIR) linear filter; using the constrained energy minimization algorithm based on a gradient-integral recurrent neural network, select the spectral data of target ground-object pixels and enhance the target spectral data with the FIR filter, obtaining an enhanced remote sensing image.

Step 4: On the enhanced remote sensing image, perform a preliminary extraction of the target ground objects with the Otsu algorithm and a single-band threshold, obtaining a preliminarily extracted remote sensing image.

Step 5: On the preliminarily extracted image, based on ground-object texture and geometric features, remove interfering ground objects of the aquaculture area in a customized way with the object-oriented method or the gray-level co-occurrence texture matrix.

Step 6: Output the final remote sensing image of the extracted aquaculture area.

Further, the specific operations of step 3 comprise:

Step 31: Construct a finite impulse response linear filter from the known prior information of the target pixel spectrum.

Step 32: Express the constrained energy minimization algorithm as a linearly constrained optimization model:

    min_w  w^T R w        (1)

where w denotes the filter coefficients to be solved, R is the autocorrelation matrix of the remote sensing image,

    R = (1/N) Σ_{i=1}^{N} r_i r_i^T,

and d denotes the constraint vector, which satisfies:

    d^T w = 1        (2)

Step 33: Under this constraint vector, the output y_i of the filter for input r_i satisfies:

    y_i = w^T r_i = r_i^T w        (3)

so the average output energy of the remote sensing image {r_1, r_2, ..., r_N} is:

    E = (1/N) Σ_{i=1}^{N} y_i^2 = (1/N) Σ_{i=1}^{N} (w^T r_i)(r_i^T w) = w^T R w        (4)

where {r_1, r_2, ..., r_N} are the pixel vectors of the image, representing its spectral information; N is the total number of pixels; and each pixel r_i = [r_i1, r_i2, ..., r_il]^T is an l-dimensional column vector, l being the number of image bands, 1 ≤ i ≤ N.
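As a concrete illustration of the autocorrelation matrix defined above, the following sketch builds R from a small image cube (the toy data are invented for illustration, not from the patent):

```python
import numpy as np

# Toy stand-in for a remote sensing image: 5x5 pixels, l = 4 bands.
rng = np.random.default_rng(0)
cube = rng.random((5, 5, 4))

# Flatten to an N x l matrix whose rows are the pixel vectors r_i.
X = cube.reshape(-1, 4)
N, l = X.shape

# Sample autocorrelation matrix R = (1/N) * sum_i r_i r_i^T.
R = (X.T @ X) / N
```

R is symmetric positive semi-definite by construction, which is what makes the quadratic w^T R w in (1) a meaningful output energy.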

Step 34: Use the Lagrange multiplier method to convert formula (1) into an unconstrained optimization model:

    F(w) = w^T R w + λ(d^T w − 1)        (5)

where λ is the Lagrange multiplier.
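For reference, setting the gradient of (5) to zero also yields the well-known closed-form CEM filter (the patent instead solves the resulting linear system recursively in the following steps):

```latex
\nabla_w F = 2R\,w + \lambda d = 0
\;\Rightarrow\; w = -\tfrac{\lambda}{2}\,R^{-1}d,
\qquad
d^{T}w = 1
\;\Rightarrow\;
w^{*} = \frac{R^{-1}d}{d^{T}R^{-1}d}.
```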

Step 35: Convert formula (5) into a linear system of equations:

    G s(t) = b        (6)

where the coefficient matrix G = [2R, d; d^T, 0] ∈ R^{(l+1)×(l+1)}; b = [0^T, 1]^T ∈ R^{l+1} is the coefficient vector, 0 being the l-dimensional zero vector; s(t) = [w(t)^T, λ(t)]^T ∈ R^{l+1} is the vector to be solved; w(t) = [w_1(t), w_2(t), ..., w_l(t)]^T is the l-dimensional vector of filter coefficients; and λ(t) ∈ R is the Lagrange multiplier.

Step 36: Define the error function of formula (6) as:

    e(t) = G s(t) − b        (7)

Step 37: From formula (7), construct the integral-enhanced gradient recursion:

    ds(t)/dt = −γ G^T ( e(t) + κ ∫_0^t e(τ) dτ )        (8)

where γ > 0 is the convergence-rate parameter and κ > 0 weights the accumulated error.

Step 38: Compute recursion (8) until the computed error falls below the allowable error, obtaining the filter output coefficients w(t).

Step 39: Perform the inversion with the obtained filter output coefficients and output the enhanced remote sensing image.
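A minimal numerical sketch of steps 32-39 under assumed toy spectra: the linear system Gs = b is solved by a discretised integral-enhanced gradient recursion, and the recovered filter w is applied per pixel. The step size eta and integral gain kappa are illustrative choices, not values prescribed by the patent.

```python
import numpy as np

# Assumed toy data: l = 2 bands, a fixed autocorrelation matrix R and
# a known target spectrum d (in practice R comes from the image itself).
R = np.array([[0.5, 0.1],
              [0.1, 0.3]])
d = np.array([1.0, 0.5])
l = d.size

# G = [2R, d; d^T, 0] and b = [0, ..., 0, 1]^T  (step 35).
G = np.block([[2 * R, d[:, None]],
              [d[None, :], np.zeros((1, 1))]])
b = np.zeros(l + 1)
b[-1] = 1.0

# Discretised integral-enhanced gradient recursion (steps 37-38):
# s <- s - eta * G^T (e + kappa * accumulated error), e = Gs - b.
s = np.zeros(l + 1)          # s = [w; lambda]
acc = np.zeros(l + 1)        # running approximation of the error integral
eta, kappa = 0.3, 0.1        # illustrative step size and integral gain
for _ in range(5000):
    e = G @ s - b
    if np.linalg.norm(e) < 1e-12:   # stop once below the allowable error
        break
    acc += eta * e
    s -= eta * G.T @ (e + kappa * acc)

w = s[:l]                    # filter output coefficients

# Step 39 (scoring): y_i = w^T r_i for each pixel vector.
pixels = np.array([[1.0, 0.5],   # pixel equal to the target spectrum d
                   [0.2, 0.9]])  # background-like pixel
scores = pixels @ w
```

The constraint d^T w = 1 forces the response of a pixel equal to d to be exactly 1, while minimising w^T R w suppresses the average background response; here scores[0] is therefore ~1.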

Further, the specific operations of step 4 comprise:

Step 41: For the enhanced remote sensing image, compute the optimal ground-object extraction threshold with the Otsu algorithm, segment the image with this threshold, and extract part of the ground objects to obtain the Otsu threshold extraction result.

Step 42: On a single band of the remote sensing image, extract the spectral values at the same positions as the Otsu extraction result, apply threshold segmentation to the single-band gray values, and remove some interfering objects to obtain the single-band threshold extraction result.
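The Otsu step (step 41) can be sketched as follows; the bimodal toy image stands in for an enhanced single-band output and is invented for illustration:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximising between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                      # class-0 probability
    mu = np.cumsum(prob * np.arange(256))        # cumulative mean
    mu_t = mu[-1]                                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)             # 0/0 at the extremes -> 0
    return int(np.argmax(sigma_b))

rng = np.random.default_rng(2)
water = rng.normal(60, 10, 500).clip(0, 255)     # background pixels
raft = rng.normal(180, 10, 500).clip(0, 255)     # target pixels
img = np.concatenate([water, raft]).astype(np.uint8)

t = otsu_threshold(img)
mask = img > t                                   # preliminary extraction
```

In step 42 the same idea is repeated on a raw single band: only pixels of the Otsu mask whose single-band gray value also passes a second threshold are kept, which removes interfering objects that are darker than the target in that band.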

Further, the specific operations of step 5 comprise:

Step 51: Based on the texture features of the target ground objects, build ratio bands between the image bands, select sensitive bands with the Bhattacharyya distance method, construct a gray-level co-occurrence texture matrix on the sensitive bands, and set thresholds to extract targets and remove interfering objects.

Step 52: Based on the spatial attributes of the target ground objects, extract targets and remove interfering objects according to their area, elongation, perimeter, compactness, solidity, shape factor, and roundness.
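The texture test of step 51 can be sketched with a pure-NumPy gray-level co-occurrence matrix for a single offset; the 8-level quantisation and the homogeneity statistic are illustrative assumptions, not prescribed by the patent:

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Normalised co-occurrence counts of level pairs at offset (dy, dx)."""
    q = (gray.astype(float) / 256 * levels).astype(int)  # quantise
    mat = np.zeros((levels, levels))
    h, w = q.shape
    src = q[: h - dy, : w - dx]
    dst = q[dy:, dx:]
    for a, b in zip(src.ravel(), dst.ravel()):
        mat[a, b] += 1
    return mat / mat.sum()

def homogeneity(p):
    """GLCM homogeneity: mass near the diagonal means smooth texture."""
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + np.abs(i - j)))

flat = np.full((16, 16), 100, dtype=np.uint8)        # smooth "water"
checker = (np.indices((16, 16)).sum(0) % 2 * 200).astype(np.uint8)  # "raft"
```

Smooth water concentrates co-occurrence mass on the diagonal (homogeneity near 1), whereas raft-like textures put mass off the diagonal, so a homogeneity threshold can separate the two.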

Compared with the prior art, the method has the following beneficial effects:

First, the proposed method uses an improved constrained energy minimization algorithm to enhance the target ground objects of the aquaculture area while weakening the background spectral information, and, after obtaining the preliminary extraction result, removes interfering ground objects in a customized way with a gray-level co-occurrence texture matrix or an object-oriented method. Even in aquaculture regions where the "different objects, same spectrum" phenomenon is pronounced, the method still effectively overcomes background interference and yields high-precision extraction results.

Second, the application exploits the spectral features of ground objects to extract the aquaculture-area objects, performs threshold segmentation with the Otsu-computed threshold and a single-band threshold to extract part of the target region, and removes interfering objects in a customized way according to the texture and geometric features of the ground objects in the aquaculture area, further improving extraction accuracy.

Third, the method converts the constrained energy minimization algorithm into an unconstrained optimization model via the Lagrange multiplier method and, using the error function and the evolution formula, drives the computation error of each spectrum to converge to zero, improving classification accuracy.

In summary, combining the practical performance of current satellite remote sensing imagery with the actual needs of offshore aquaculture areas, the invention constructs a multi-feature analysis method that fuses the spectral features of ground objects in remote sensing images, threshold methods, and the gray-level co-occurrence texture matrix, and uses it to achieve precise extraction of aquaculture areas.

Brief Description of the Drawings

Figure 1 is a flow diagram of the method of the invention;

Figure 2 is a remote sensing image of the raft culture area of Zhanjiang Bay;

Figure 3 shows the extraction result for the Zhanjiang Bay raft culture area obtained with the method;

Figure 4 is a remote sensing image of the culture-pond area of Zhanjiang Bay;

Figure 5 shows the extraction result for the Zhanjiang Bay culture-pond area obtained with the method.

Detailed Description

To enable those of ordinary skill in the art to better understand the technical solution of the present invention, it is further described below with reference to the accompanying drawings and embodiments.

As can be seen from Figures 1-5, the method for extracting remote sensing images of nearshore aquaculture areas based on multi-feature and spectral fusion comprises the following steps:

Step 1: Input an original remote sensing image.

Step 2: Preprocess the input remote sensing image: separate sea from land in the image of the aquaculture region to be extracted, and crop and retain that region; construct characteristic spectra from the ground-object features of the processed image, screen sensitive bands from the characteristic spectral bands and the original image bands with the Bhattacharyya distance method, and build the target ground-object feature set from the resulting characteristic spectra.
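The band-screening idea in step 2 can be sketched with the Bhattacharyya distance between the target-class and background-class samples of each candidate band, assuming 1-D Gaussian class statistics (the sample values below are invented for illustration):

```python
import numpy as np

def bhattacharyya(x, y):
    """Bhattacharyya distance between two 1-D Gaussian-like samples."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    return (0.25 * (mx - my) ** 2 / (vx + vy)
            + 0.5 * np.log((vx + vy) / (2.0 * np.sqrt(vx * vy))))

rng = np.random.default_rng(3)
target_a = rng.normal(0.30, 0.02, 200)   # band A: classes well separated
backgr_a = rng.normal(0.10, 0.02, 200)
target_b = rng.normal(0.21, 0.05, 200)   # band B: classes overlap heavily
backgr_b = rng.normal(0.20, 0.05, 200)
```

Bands are ranked by this distance and the top-scoring (most separable) bands are kept as the "sensitive" bands of the feature set; here band A would be selected over band B.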

Step 3: Construct a finite impulse response (FIR) linear filter; using the constrained energy minimization algorithm based on a gradient-integral recurrent neural network, select the spectral data of target ground-object pixels and use the FIR filter to suppress the image background information while enhancing the target spectral data, obtaining an enhanced remote sensing image.

Step 4: On the enhanced remote sensing image, perform a preliminary extraction of the target ground objects with the Otsu algorithm and a single-band threshold, obtaining a preliminarily extracted remote sensing image.

Step 5: On the preliminarily extracted image, based on ground-object texture and geometric features, remove interfering ground objects of the aquaculture area in a customized way with the object-oriented method or the gray-level co-occurrence texture matrix.

Step 6: Output the final remote sensing image of the extracted aquaculture area.

Further, the specific operations of steps 3, 4 and 5 follow steps 31-39, 41-42 and 51-52 exactly as set out above; in step 42, the method exploits the fact that in a given single band the gray values of some interfering objects are lower than those of the target ground objects.

Example

1. Test procedure

First, the original remote sensing images shown in Figures 2 and 4 were acquired by the sensor and preprocessed, retaining the aquaculture region to be extracted. Feature indices were then constructed using the NDAI feature index, the suspended-sediment difference index TSM, and the chlorophyll-a concentration index CHL; combined with the four original bands of the remote sensing image, the seven bands were screened with the Bhattacharyya distance method to build the spectral feature set of the aquaculture-area ground objects. The spectral feature set of the raft culture area consists mainly of the red band, the near-infrared band, and the CHL band, while that of the culture-pond area consists of the blue band, the CHL band, and the red band.

Next, the target-feature pixel values of the raft culture area and of the culture pond area were input separately, and the constrained energy minimization algorithm based on a gradient-integral recurrent neural network was applied, executing steps 31-39 to perform image enhancement and obtain the enhanced remote sensing images;
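For reference, the constrained energy minimization filter that steps 31-39 compute recursively also admits the well-known closed-form solution w = R⁻¹d / (dᵀR⁻¹d); a minimal numpy sketch (the small regularization term is an implementation convenience, not part of the patented method):

```python
import numpy as np

def cem_filter(image, d, eps=1e-6):
    """Closed-form CEM filter: w = R^-1 d / (d^T R^-1 d).

    image: (N, l) array of N pixel spectra with l bands.
    d: (l,) known target spectrum (the constraint vector).
    Returns the filter coefficients and the enhanced single-band output.
    """
    R = image.T @ image / image.shape[0]   # autocorrelation matrix of the image
    R += eps * np.eye(R.shape[0])          # regularize for numerical invertibility
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)              # enforces d^T w = 1
    return w, image @ w
```

The output satisfies the unit-gain constraint on the target spectrum while minimizing the average energy of the background response.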

After image enhancement, the raft culture area shows a clear difference between the gray values of raft-frame pixels and the nearby water body: the raft frames are effectively separated from the surrounding water, and their edge contours are effectively enhanced. In the enhanced culture pond area, the pond water differs clearly from the seawater and is easy to distinguish.

Then, the target features were preliminarily extracted. First, threshold segmentation was applied to the enhanced remote sensing image using the threshold computed by the Otsu algorithm, and part of the features were extracted as the roughly extracted target-feature region. After Otsu thresholding, considerable seawater still remained within the raft-frame region of the raft culture area, and a large number of small artificial features remained in the culture pond area. Next, exploiting the property that some interfering objects have lower single-band gray values than the target feature, the spectral values at the same positions as the Otsu result were extracted on a single band of the image, and threshold segmentation of the single-band gray values removed part of the interfering objects. For the raft culture area, this threshold segmentation was performed on the blue band; for the culture pond area, on the green band.
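The Otsu stage can be sketched with a histogram-based implementation that maximizes the between-class variance (the bin count is an illustrative choice):

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(gray, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    mu = np.cumsum(p * centers)       # cumulative first moment
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # ignore degenerate splits
    return centers[np.argmax(sigma_b)]
```

Pixels above the returned threshold form the roughly extracted target region.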

Finally, interfering objects were removed in a customized way according to the texture and geometric features of the aquaculture-area features. For example, since the texture of the target features in the raft culture area differs markedly from that of the water body, the mean feature of the gray-level co-occurrence texture matrix was used to remove the residual water in the raft culture area. The culture ponds are mostly rectangular or square, whereas rivers are noticeably curved, so the geometric features of rivers were used to remove them. For the raft culture area, the gray-level co-occurrence matrix mean features were generated mainly from the green/red band ratio and the near-infrared/blue band ratio, while the culture pond area was extracted using the Rect_Fit and Elongation attributes of the object-oriented method. Because the final extraction result contains many scattered small speckles, the present invention applies an existing speckle-grouping method to filter the classification result and remove them.

Finally, Figures 3 and 5 show the result maps extracted by this method.

2. Result analysis

Since the present invention adopts stratified random sampling, validation samples are built from field sampling or from the visual interpretation of higher-spatial-resolution imagery, and the confusion matrix method is applied to verify the accuracy of the extraction results. Accordingly, with the visual interpretation results from Google Earth as reference, the attributes of each sample point were assigned in ENVI, and a confusion matrix and accuracy evaluation were constructed.

As shown in the accuracy table for the aquaculture areas (Table 1), for the extraction of the raft culture area from the GF-1 image, the proposed method makes full use of the spectral and texture features of the ground objects, effectively overcomes the influence of highly turbid water, and yields clear raft-frame outlines with an overall accuracy of 98.74%. For the culture pond area in the ZY-3 image, the multi-feature analysis method exploits the geometric differences between rivers and culture ponds to achieve high-precision pond extraction, with an overall accuracy of 96.74%.
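The confusion-matrix verification can be sketched as follows; the sample labels below are illustrative, and the kappa coefficient is included as a common companion metric to overall accuracy:

```python
import numpy as np

def accuracy_report(truth, pred, n_classes):
    """Confusion matrix, overall accuracy and kappa from validation samples.

    truth, pred: sequences of integer class labels in [0, n_classes).
    """
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth, pred):
        cm[t, p] += 1                                # rows: truth, cols: prediction
    n = cm.sum()
    po = np.trace(cm) / n                            # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return cm, po, kappa
```

Running this over the stratified validation samples reproduces the kind of accuracy figures reported in Table 1.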

Table 1 Accuracy metrics for the aquaculture areas

Figure BDA0002773569050000131

In summary, for either of the above aquaculture types, the method of the present invention has a clear comparative advantage in overcoming the "same spectrum, different objects" and "same object, different spectra" problems. After the original remote sensing image is processed by the method of the present invention, the extraction result is clear and the accuracy is high, so the method can serve as a preferred algorithm for extracting aquaculture areas from high-resolution remote sensing images.

Matters not described in detail in this specification belong to the prior art known to those skilled in the art. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some technical features. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (3)

1. A method for extracting nearshore aquaculture areas from remote sensing images based on multi-feature and spectral fusion, characterized by comprising the following steps:
step 1: inputting an original remote sensing image;
step 2: performing image preprocessing on the input remote sensing image, extracting characteristic spectra with a characteristic index method according to the ground features in the processed image, and constructing a target ground-feature set from the obtained characteristic spectra;
step 3: constructing a finite impulse response linear filter, inputting the spectral data of the target ground-feature pixels on the basis of a constrained energy minimization algorithm of a gradient-integral recurrent neural network, and enhancing the target ground-feature spectral data with the finite impulse response linear filter to obtain an enhanced remote sensing image;
step 4: performing preliminary extraction of the target ground features on the enhanced remote sensing image by using the Otsu algorithm and a single-band threshold to obtain a preliminarily extracted remote sensing image of the target ground features;
step 5: performing, on the preliminarily extracted remote sensing image, customized elimination of interfering ground features in the aquaculture area by an object-oriented method or a gray-level co-occurrence texture matrix, based on the texture features and geometric features of the ground features;
step 6: outputting the final remote sensing image extracted for the aquaculture area;
the specific operation steps of the step 3 comprise:
step 31: constructing a finite impulse response linear filter according to the prior information of the known target pixel spectrum;
step 32: the constrained energy minimization algorithm is expressed as a linearly constrained optimization mathematical model:

min_w  w^T R w    subject to    d^T w = 1    (1)

wherein w denotes the filter coefficient vector, and R is the autocorrelation matrix of the remote sensing image:

R = (1/N) · Σ_{i=1}^{N} r_i r_i^T    (2)

d denotes the constraint condition vector (the known target spectrum), which satisfies the constraint d^T w = 1;
step 33: under the constraint condition vector, when the filter receives the input r_i, the output y_i satisfies:

y_i = w^T r_i = r_i^T w    (3)

then the average output energy corresponding to the remote sensing image pixels r_1, r_2, ..., r_N is:

E = (1/N) · Σ_{i=1}^{N} y_i^2 = w^T R w    (4)
wherein {r_1, r_2, ..., r_N} are the pixel vectors carrying the spectral information of the remote sensing image, N is the total number of pixels in the image, and each pixel r_i = [r_i1, r_i2, ..., r_il]^T is an l-dimensional column vector, where l is the number of bands of the image and 1 ≤ i ≤ N;
step 34: converting equation (1) into an unconstrained optimization mathematical model by the Lagrange multiplier method:

F(w) = w^T R w + λ (d^T w − 1)    (5)

wherein λ is the Lagrange multiplier;
step 35: converting equation (5) into the mathematical model of a linear equation:

G s(t) = b    (6)

wherein the coefficient matrix G = [2R, d; d^T, 0] ∈ R^{(l+1)×(l+1)}; b = [0, ..., 0, 1]^T ∈ R^{l+1} is the coefficient vector; s(t) = [w(t), λ(t)]^T ∈ R^{l+1} is the vector to be solved; w(t) = [w_1(t), w_2(t), ..., w_l(t)]^T is the l-dimensional vector of filter coefficients; and λ(t) ∈ R is the Lagrange multiplier;
step 36: the error function of equation (6) is defined as:

e(t) = G s(t) − b    (7)

step 37: according to equation (7), the integration-enhanced gradient recurrence equation is constructed as:

ds(t)/dt = −γ G^T ( e(t) + κ ∫_0^t e(τ) dτ )    (8)

wherein γ > 0 and κ > 0 are design parameters;
step 38: performing the recursive calculation according to equation (8) until the calculated error is smaller than the allowable error, obtaining the filter output coefficients w(t);
step 39: performing inversion according to the obtained filter output coefficients, and outputting the enhanced remote sensing image.
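For illustration only, a possible Euler discretization of the recurrence in steps 36-38 is sketched below; the integral-feedback form and the gains gamma and kappa are assumptions made for this sketch, not values prescribed by the claim:

```python
import numpy as np

def grnn_solve(G, b, gamma=0.5, kappa=0.5, dt=0.01, steps=20000, tol=1e-8):
    """Euler-discretized gradient recurrent network for G s = b.

    Integrates ds/dt = -gamma * G^T (e + kappa * integral(e)) with the
    error e(t) = G s(t) - b; iteration stops when ||e|| drops below tol.
    Converges to the solution of G s = b when G is nonsingular.
    """
    s = np.zeros_like(b, dtype=float)
    acc = np.zeros_like(s)                     # running integral of the error
    for _ in range(steps):
        e = G @ s - b
        if np.linalg.norm(e) < tol:            # allowable-error stopping rule
            break
        acc += e * dt
        s -= dt * gamma * (G.T @ (e + kappa * acc))
    return s
```

The first l components of the converged s give the filter coefficients w used in step 39.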
2. The near-shore aquaculture area remote sensing image extraction method based on multi-feature and spectrum fusion of claim 1, characterized in that the specific operation steps of step 4 comprise:
step 41: for the enhanced remote sensing image, calculating an optimal threshold for ground-feature extraction with the Otsu algorithm, performing threshold segmentation of the remote sensing image according to the obtained threshold, and extracting part of the ground features to obtain the Otsu threshold extraction result;
step 42: and extracting a spectral value with the same position as the Otsu threshold extraction result on a single waveband of the remote sensing image, performing threshold segmentation on the single waveband gray value, and removing part of interference objects to obtain a single waveband threshold extraction result.
3. The near-shore aquaculture area remote sensing image extraction method based on multi-feature and spectrum fusion of claim 1, characterized in that the specific operation steps of step 5 comprise:
step 51: establishing ratio bands between the bands of the remote sensing image based on the texture features of the target ground feature, selecting sensitive bands by the Bhattacharyya distance method, establishing a gray-level co-occurrence texture matrix on the basis of the sensitive bands, setting a threshold for target extraction, and eliminating interfering objects;
step 52: based on the spatial attributes of the target ground features, extracting the target according to the area, elongation, perimeter, compactness, solidity, shape factor, and roundness features of the ground features, and removing the interfering objects.
CN202011257631.3A 2020-11-12 2020-11-12 Remote sensing image extraction method for nearshore aquaculture area based on multi-feature and spectral fusion Expired - Fee Related CN112287871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011257631.3A CN112287871B (en) 2020-11-12 2020-11-12 Remote sensing image extraction method for nearshore aquaculture area based on multi-feature and spectral fusion


Publications (2)

Publication Number Publication Date
CN112287871A CN112287871A (en) 2021-01-29
CN112287871B true CN112287871B (en) 2023-01-17

Family

ID=74398866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011257631.3A Expired - Fee Related CN112287871B (en) 2020-11-12 2020-11-12 Remote sensing image extraction method for nearshore aquaculture area based on multi-feature and spectral fusion

Country Status (1)

Country Link
CN (1) CN112287871B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989940B (en) * 2021-02-08 2023-08-01 国家海洋环境监测中心 Raft culture area extraction method based on high-resolution third satellite SAR image
CN112907486B (en) * 2021-03-18 2022-12-09 国家海洋信息中心 A remote sensing image color correction method based on deep learning and color mapping
CN113378679B (en) * 2021-06-01 2024-05-14 大连海事大学 Coastal aquaculture pond extraction method based on improved geometric features and feature-preserving sampling
CN113538559B (en) * 2021-07-02 2022-02-18 宁波大学 Extraction method of offshore aquaculture raft extraction index based on hyperspectral remote sensing image
CN113837123A (en) * 2021-09-28 2021-12-24 大连海事大学 Mid-resolution remote sensing image offshore culture area extraction method based on spectral-spatial information combination
CN113920445B (en) * 2021-10-08 2025-06-03 自然资源部第一海洋研究所 Sea surface oil spill detection method based on decision fusion of multi-kernel classification model
CN114241336B (en) * 2021-12-30 2022-09-20 河南祥宇工程勘察设计有限公司 River and lake water area right-determining demarcation method based on dynamic low-resolution remote sensing image
CN114612387B (en) * 2022-02-16 2023-02-10 珠江水利委员会珠江水利科学研究院 Remote sensing image fusion method, system, equipment and medium based on characteristic threshold
CN114693608B (en) * 2022-03-11 2025-09-09 北京市农林科学院智能装备技术研究中心 Target freshness detection method and system
CN115546656B (en) * 2022-09-14 2024-10-01 山东科技大学 A method for extracting aquaculture areas from remote sensing images based on deep learning
CN116452901B (en) * 2023-06-19 2023-09-15 中国科学院海洋研究所 Automatic extraction method of marine breeding areas from remote sensing images based on deep learning
CN117237770B (en) * 2023-08-29 2024-12-27 珠江水利委员会珠江水利科学研究院 Remote sensing image fusion method, system, equipment and medium based on exponent power function
CN117011555B (en) * 2023-10-07 2023-12-01 广东海洋大学 A mangrove ecological detection method based on remote sensing image recognition
CN117095299B (en) * 2023-10-18 2024-01-26 浙江省测绘科学技术研究院 Grain crop extraction method, system, equipment and medium for crushing cultivation area
CN118097562B (en) * 2024-04-18 2024-07-05 广东海洋大学 Remote monitoring method for seaweed proliferation condition
CN119992110A (en) * 2025-04-08 2025-05-13 广东海洋大学 Aquaculture cage detection method based on dynamic learning network with adaptive variable parameters

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102622607A (en) * 2012-02-24 2012-08-01 河海大学 Remote sensing image classification method based on multi-feature fusion
CN109840496A (en) * 2019-01-29 2019-06-04 青岛大学 Aquaculture area hierarchical classification extracting method, device, storage medium and electronic equipment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8280111B2 (en) * 2010-02-17 2012-10-02 The Boeing Company Advanced background estimation technique and circuit for a hyper-spectral target detection method
CN108875659B (en) * 2018-06-26 2022-04-22 上海海事大学 A method for identifying aquaculture areas on charts based on multispectral remote sensing images
CN110135479A (en) * 2019-04-29 2019-08-16 中国地质大学(武汉) Hyperspectral image target detection method and system based on random forest measure learning

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102622607A (en) * 2012-02-24 2012-08-01 河海大学 Remote sensing image classification method based on multi-feature fusion
CN109840496A (en) * 2019-01-29 2019-06-04 青岛大学 Aquaculture area hierarchical classification extracting method, device, storage medium and electronic equipment

Non-Patent Citations (1)

Title
Research on Target Detection Methods for Remote Sensing Images Based on Background Suppression; Cui Zhaobin; China Master's Theses Full-text Database, Information Science and Technology; 20180415 (No. 04); I140-1075 *

Also Published As

Publication number Publication date
CN112287871A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN112287871B (en) Remote sensing image extraction method for nearshore aquaculture area based on multi-feature and spectral fusion
CN109190538B (en) A method for analyzing the evolution of the delta coastal zone of a silt-rich river based on remote sensing technology
Hou et al. Marine floating raft aquaculture extraction of hyperspectral remote sensing images based decision tree algorithm
CN110059758B (en) Remote sensing image culture pond detection method based on semantic segmentation
CN110414488A (en) Remote sensing monitoring method of cyanobacterial blooms based on planktonic algae index and deep learning
CN114724049A (en) Inland culture pond water surface identification method based on high-resolution remote sensing image data
CN109635765B (en) A method for automatic extraction of shallow coral reef remote sensing information
CN111339959A (en) Extraction method of offshore floating raft aquaculture area based on SAR and optical image fusion
CN114821343A (en) A fast and accurate extraction method of mangrove remote sensing based on cloud platform
CN113538559B (en) Extraction method of offshore aquaculture raft extraction index based on hyperspectral remote sensing image
CN116645593B (en) Remote sensing methods and devices for monitoring seagrass bed distribution
CN115761493A (en) Water body extraction method based on combined water body index frequency
CN113837123A (en) Mid-resolution remote sensing image offshore culture area extraction method based on spectral-spatial information combination
CN112001641A (en) Scallop culture area suitability remote sensing evaluation system
CN117409313A (en) A method to construct the phenological decline period index of Spartina alterniflora based on Sentinel-2 optical images
CN114037902B (en) An inversion method for extracting and identifying suspended sediment in Porphyra yezoensis cultivation area
CN112037244A (en) Landsat-8 Image Culture Pond Extraction Method for Combined Index and Contour Indicator SLIC
CN114119618B (en) Inland salt lake artemia strip remote sensing extraction method based on deep learning
CN113096114B (en) A remote sensing extraction method for high-resolution urban water patches combining morphology and index
CN119295946A (en) Method and system for identifying blue algae bloom information in narrow and long rivers based on remote sensing data
CN114724035A (en) An early detection method of algal blooms based on remote sensing technology
CN114881984A (en) Detection method and device for rice processing precision, electronic equipment and medium
CN118537737B (en) A high-resolution algal bloom recognition method based on deep learning
CN107862280A (en) A kind of storm surge disaster appraisal procedure based on unmanned aerial vehicle remote sensing images
CN117804988A (en) A method for inverting total suspended matter concentration in the coastal waters near nuclear power plants

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230117