CN119919822A - Urban remote sensing image vegetation cover recognition method and system based on deep learning
- Publication number: CN119919822A (application CN202510407671.8A)
- Authority: CN (China)
- Legal status: Granted
Classifications: Image Processing; Image Analysis
Abstract
The embodiments of the application relate to the technical field of artificial intelligence and disclose a method and system for identifying vegetation coverage in urban remote sensing images based on deep learning. The method first acquires a thermal remote sensing image and a visible light remote sensing image of an urban target area, and uses features of the visible light image to accurately segment the thermal image, distinguishing high-radiation areas (buildings or roads) from low-radiation areas (potential vegetation areas). Multi-temporal thermal radiation change analysis then screens out, from the low-radiation area, a specific thermal area whose thermal radiation changes markedly over time. Finally, the coverage area of that specific thermal area is checked by area matching against the building and water coverage information in an urban model network, so that the vegetation coverage area can be judged accurately. Through multi-source data complementarity, dynamic feature mining and spatial constraint verification, the method effectively overcomes the low accuracy of single-data-source identification in the prior art and improves the accuracy of vegetation coverage identification.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a system for identifying vegetation coverage of urban remote sensing images based on deep learning.
Background
Urban vegetation coverage monitoring is an important foundation for urban ecological assessment, heat island effect analysis and sustainable development planning. Existing urban vegetation coverage detection methods generally rely on the color characteristics of visible light remote sensing images, but face multiple challenges in complex urban scenes. Green building materials widely used in cities (such as artificial lawns and painted roofs) resemble natural vegetation in color and texture in the visible bands, so traditional image segmentation algorithms have a high misjudgment rate. Shadows cast by dense building clusters alter the apparent color of vegetated surfaces, and differences in solar elevation angle across periods cause strong fluctuations in the illumination and reflection characteristics of the same vegetation region. As a result, prior-art urban vegetation coverage detection methods have low accuracy.
Disclosure of Invention
The invention mainly aims to provide a deep learning-based urban remote sensing image vegetation coverage recognition method and system, and aims to solve the technical problem of low accuracy of an urban vegetation coverage detection method in the prior art.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a method for identifying vegetation coverage of an urban remote sensing image based on deep learning, where the method includes:
acquiring a thermal remote sensing image and a visible light remote sensing image of an urban target area, respectively, to obtain a first target remote sensing image and a second target remote sensing image;
performing image segmentation on the first target remote sensing image according to the second target remote sensing image to obtain a first thermal area and a second thermal area, where the heat radiation intensity of the first thermal area is greater than that of the second thermal area;
analyzing the variation amplitude of the heat radiation intensity of the second thermal area over different time periods, and segmenting the second thermal area according to that variation amplitude to obtain a second main thermal area and a second secondary thermal area, where the variation amplitude of the heat radiation intensity of the second main thermal area is greater than that of the second secondary thermal area;
inputting the coverage area of the second main thermal area into a pre-established urban model network for area matching verification, where the urban model network comprises building and water area coverage information;
and, if the area matching verification passes, judging the second main thermal area to be a vegetation coverage area.
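The five steps above can be outlined in code. The following is a minimal sketch, not the claimed implementation: the function name, the median-based splitting thresholds and the particular confidence definition are illustrative assumptions.

```python
import numpy as np

def identify_vegetation(thermal_stack, visible_img, city_mask, conf_threshold=0.7):
    """Sketch of steps S100-S500. thermal_stack: (T, H, W) heat radiation
    intensity at T time points; visible_img stands in for the second target
    image (unused in this simplified sketch); city_mask: True where the
    urban model network maps buildings or water."""
    mean_thermal = thermal_stack.mean(axis=0)
    # S200: split into high-radiation (first) and low-radiation (second) thermal areas
    second_area = mean_thermal < np.median(mean_thermal)
    # S300: within the low-radiation area, keep pixels with large temporal variation
    amplitude = thermal_stack.max(axis=0) - thermal_stack.min(axis=0)
    second_main = second_area & (amplitude > np.median(amplitude[second_area]))
    # S400: area-matching check against the city model (low overlap with buildings/water)
    overlap = np.logical_and(second_main, city_mask).sum()
    confidence = 1.0 - overlap / max(second_main.sum(), 1)
    # S500: accept as vegetation coverage only if the check passes
    return second_main if confidence >= conf_threshold else np.zeros_like(second_main)
```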
In one possible implementation manner, the image segmentation of the first target remote sensing image according to the second target remote sensing image to obtain a first thermal area and a second thermal area includes:
Performing image pre-segmentation on the first target remote sensing image to obtain a first to-be-segmented area and a second to-be-segmented area;
And correcting boundary lines of the first to-be-segmented area and the second to-be-segmented area according to the texture features of the second target remote sensing image to obtain a first thermal area and a second thermal area.
In one possible implementation, the image pre-segmentation uses a U-Net based deep learning model trained from historical remote sensing image data labeled with thermal zone boundaries.
In a possible implementation manner, the correcting, according to the texture feature of the second target remote sensing image, the boundary line between the first to-be-segmented area and the second to-be-segmented area to obtain a first thermal area and a second thermal area includes:
obtaining texture feature variation gradients at boundary lines of the first region to be segmented and the second region to be segmented;
and under the condition that the gradient of the texture feature change is smaller than a gradient threshold value, moving and correcting the boundary line of the first to-be-segmented area and the second to-be-segmented area towards the direction of the second to-be-segmented area to obtain a first thermal area and a second thermal area.
In one possible implementation manner, the moving the boundary lines of the first to-be-segmented area and the second to-be-segmented area towards the second to-be-segmented area for correction to obtain a first thermal area and a second thermal area includes:
Detecting the texture feature change gradient at the current boundary line in real time in the process of moving the boundary line;
And stopping the movement of the boundary line and taking the current latest boundary line as the boundary of the first thermal area and the second thermal area under the condition that the texture feature change gradient at the current boundary line is equal to a preset value or meets a preset condition.
In one possible implementation, before determining the second main thermal area as a vegetation coverage area, the method further includes:
performing sub-pixel decomposition on the mixed pixels in the second main thermal area, and extracting component ratios using an end-member spectral library;
if the vegetation component ratio exceeds a preset ratio, retaining the area as a vegetation coverage area, and otherwise marking it as an area to be verified;
and performing a secondary judgment on the area to be verified through the second target remote sensing image.
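The sub-pixel decomposition can be illustrated with a simple linear spectral unmixing sketch. This is an assumption-laden illustration: the patent does not specify a solver, a fully constrained least-squares method would normally be used with a real end-member spectral library, and the 0.5 preset ratio here is a placeholder.

```python
import numpy as np

def unmix_pixel(pixel_spectrum, endmembers):
    """Linear spectral unmixing sketch for the mixed-pixel decomposition:
    solve pixel ~= endmembers @ fractions by least squares, then clip and
    renormalise so the fractions sum to 1. endmembers: (bands, members)
    end-member spectral library."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    total = fractions.sum()
    return fractions / total if total > 0 else fractions

def keep_if_vegetation(fractions, veg_index, min_ratio=0.5):
    """Retain the pixel as vegetation cover if the vegetation component
    ratio exceeds the preset ratio, otherwise flag it for re-verification.
    min_ratio is an illustrative placeholder."""
    return "vegetation" if fractions[veg_index] > min_ratio else "to_verify"
```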
In a possible implementation manner, the performing, by using the second target remote sensing image, the secondary determination on the to-be-verified area includes:
performing multispectral analysis on the visible light remote sensing image of the area to be verified, and extracting the red-band reflectivity R_red and the near-infrared-band reflectivity R_nir;
inputting the red-band reflectivity R_red and the near-infrared-band reflectivity R_nir into a vegetation index model to obtain a vegetation index value N, where the vegetation index model satisfies N = (R_nir - R_red) / (R_nir + R_red);
if the vegetation index value N ≥ T, marking the area as a candidate vegetation area, and otherwise marking it as a non-vegetation area, where T is a dynamic threshold set to 0.4 in summer and 0.3 in winter.
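Assuming the vegetation index model takes the standard NDVI form, which is consistent with the symbols R_red and R_nir and the seasonal thresholds above, the secondary judgment can be sketched as:

```python
def vegetation_index(r_red, r_nir):
    """Vegetation index N from red-band and near-infrared-band reflectance,
    in the standard NDVI form assumed above."""
    return (r_nir - r_red) / (r_nir + r_red)

def secondary_judgment(r_red, r_nir, season="summer"):
    """Seasonal dynamic threshold T: 0.4 in summer, 0.3 in winter."""
    t = 0.4 if season == "summer" else 0.3
    n = vegetation_index(r_red, r_nir)
    return "candidate_vegetation" if n >= t else "non_vegetation"
```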
In one possible implementation manner, inputting the coverage area of the second main thermal area into a pre-established urban model network for area matching verification includes:
calculating the overlapping area between the coverage area of the second main thermal area and the coverage areas of buildings and water in the urban model network;
determining a confidence according to the overlapping area and the coverage area of the second main thermal area;
and, if the confidence is greater than or equal to a first confidence threshold, determining that the area matching verification passes, and judging the second main thermal area to be a vegetation coverage area.
In one possible implementation, after determining the confidence according to the overlapping area and the coverage area of the second main thermal area, the method further includes:
determining that the area matching verification fails when the confidence is greater than or equal to a second confidence threshold and less than the first confidence threshold, and judging the second main thermal area to be a low-confidence vegetation coverage area, where the second confidence threshold is less than the first confidence threshold;
and triggering a manual verification signal for the low-confidence vegetation coverage area, where the manual verification signal is used to remind a user to perform manual detection and identification.
In a second aspect, the embodiment of the application also provides a vegetation coverage identification system, which comprises a memory and a processor, wherein the memory is used for storing program codes, and the processor is used for calling the program codes to execute the method according to the first aspect.
Compared with the prior art, the deep-learning-based urban remote sensing image vegetation coverage recognition method provided by the embodiments of the application first acquires a thermal remote sensing image and a visible light remote sensing image of an urban target area, uses features of the visible light image to accurately segment the thermal image, and distinguishes high-radiation areas (buildings or roads) from low-radiation areas (potential vegetation areas). Multi-temporal thermal radiation change analysis then screens out, from the low-radiation area, a specific thermal area whose thermal radiation changes markedly over time. Finally, the coverage area of that specific thermal area is checked by area matching against the building and water coverage information in the urban model network, so that the vegetation coverage area can be judged accurately. Through multi-source data complementarity, dynamic feature mining and spatial constraint verification, the method effectively overcomes the low accuracy of single-data-source identification in the prior art, improves the accuracy of vegetation coverage identification, and provides strong support for urban ecological environment monitoring and management.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for identifying vegetation coverage of urban remote sensing images based on deep learning according to some embodiments of the application;
FIG. 2 is a flowchart illustrating a step S200 of a deep learning-based urban remote sensing image vegetation coverage recognition method according to some embodiments of the present application;
FIG. 3 is a flowchart illustrating a step S220 of a deep learning-based urban remote sensing image vegetation coverage recognition method according to some embodiments of the present application;
FIG. 4 is a schematic diagram illustrating a segmentation of a second target remote sensing image according to some embodiments of the present application;
FIG. 5 is a schematic hardware architecture of a vegetation coverage identification system according to some embodiments of the application.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front and rear) used in the embodiments of the present invention are merely for explaining the relative positional relationship, movement conditions and the like between the components in a certain specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly.
Furthermore, the description of "first," "second," etc. in this disclosure is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions between the embodiments can be mutually combined, but are necessary to be based on the realization that a person of ordinary skill in the art can realize, and when the combination of the technical solutions contradicts or cannot be realized, the combination of the technical solutions is not considered to exist and is not within the protection scope required by the invention.
Urban vegetation coverage monitoring is an important foundation for urban ecological assessment, heat island effect analysis and sustainable development planning. Existing urban vegetation coverage detection methods generally rely on the color characteristics of visible light remote sensing images, but face multiple challenges in complex urban scenes. Green building materials widely used in cities (such as artificial lawns and painted roofs) resemble natural vegetation in color and texture in the visible bands, so traditional image segmentation algorithms have a high misjudgment rate. Shadows cast by dense building clusters alter the apparent color of vegetated surfaces, and differences in solar elevation angle across periods cause strong fluctuations in the illumination and reflection characteristics of the same vegetation region. As a result, prior-art urban vegetation coverage detection methods have low accuracy.
As shown in figs. 1-3, the following description takes, as an example, the vegetation coverage recognition system performing the deep-learning-based urban remote sensing image vegetation coverage recognition method. Although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown or described here. Referring to fig. 1, the method includes the following steps S100-S500:
Step S100, respectively acquiring a thermal remote sensing image and a visible light remote sensing image of an urban target area to obtain a first target remote sensing image and a second target remote sensing image;
In particular, there are many ways of obtaining thermal remote sensing images and visible light remote sensing images of urban target areas. For example, when a thermal remote sensing image of a city target area is acquired, a remote sensing platform carrying a thermal infrared sensor (such as Landsat 8 TIRS, ASTER or an unmanned aerial vehicle thermal imaging module) can be called, scanning is performed above the target area to acquire surface thermal radiation data, and then the thermal radiation energy data is converted into a digital signal through the sensor to generate a thermal image. When the visible light remote sensing image of the urban target area is obtained, a remote sensing platform (such as a high-resolution satellite or an unmanned aerial vehicle) carrying an optical sensor can be called to obtain the multispectral image of the target area, and red light, green light, blue light, near infrared wave bands and the like are covered. In the embodiment of the application, the acquisition time difference of the thermal remote sensing image and the visible light remote sensing image is less than or equal to 1 hour, so that the influence of the phase difference on the subsequent analysis is reduced.
In an embodiment, after the target remote sensing image is obtained, a terrain correction step may be added to further improve the accuracy of the target remote sensing image. For example, a Digital Elevation Model (DEM) of a target area can be firstly obtained, gradient, slope direction and terrain shielding coefficients are calculated, then a regression model of heat radiation intensity and terrain factors is established, radiation difference between a sunny side and a shady side in a mountain city is eliminated, and finally normalized heat radiation intensity after correction is generated, and a thermal distribution remote sensing image map irrelevant to the terrain is generated for subsequent segmentation. Thus, the problem of radiation distortion caused by gradient can be solved.
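The terrain-correction idea, a regression of heat radiation intensity against terrain factors followed by removal of the terrain-explained component, can be sketched as follows. The specific factor set and the least-squares regression form are illustrative assumptions; the embodiment does not fix a regression model.

```python
import numpy as np

def terrain_correct(thermal, slope, aspect_cos, shading):
    """Regress thermal intensity on terrain factors derived from the DEM,
    subtract the terrain-explained component so sunny and shaded slopes
    become comparable, and normalise the result to [0, 1] for subsequent
    segmentation. All inputs are 2-D arrays of the same shape; variable
    names are illustrative."""
    X = np.column_stack([
        np.ones(thermal.size),   # intercept
        slope.ravel(),           # gradient (slope) from the DEM
        aspect_cos.ravel(),      # aspect term (e.g. cosine vs. sun azimuth)
        shading.ravel(),         # terrain shielding coefficient
    ])
    y = thermal.ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    terrain_component = X[:, 1:] @ coef[1:]      # exclude the intercept
    corrected = y - terrain_component            # remove the terrain effect
    denom = np.ptp(corrected)
    corrected = (corrected - corrected.min()) / (denom if denom > 1e-9 else 1.0)
    return corrected.reshape(thermal.shape)
```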
Step S200, performing image segmentation on the first target remote sensing image according to the second target remote sensing image to obtain a first thermal area and a second thermal area, where the heat radiation intensity of the first thermal area is greater than that of the second thermal area;
It will be appreciated that the first target remote sensing image includes thermal radiation intensity information of different regions, and the second target remote sensing image includes color, texture and other optical information of different regions.
It should be noted that, the heat radiation of the relevant areas such as buildings or roads is greatly affected by the illumination, the heat radiation intensity is large, and the heat radiation of the relevant areas such as vegetation or water areas is less affected by the illumination, the heat radiation intensity is small. Therefore, the first target remote sensing image can be divided into the first thermal area and the second thermal area with obvious difference of heat radiation intensity through the heat radiation intensity information, for example, the heat radiation intensity of the first thermal area is more than 2 times of that of the second thermal area, and thus, buildings, roads, vegetation or water areas can be accurately distinguished.
In this way, the embodiment of the application performs image segmentation on the first target remote sensing image to obtain the first thermal area and the second thermal area, so that the high-radiation area (building or road) and the low-radiation area (potential vegetation area) can be distinguished.
In order to further improve the accuracy of the image segmentation of the first target remote sensing image, in an embodiment, the step S200 of performing image segmentation on the first target remote sensing image according to the second target remote sensing image to obtain a first thermal area and a second thermal area includes:
S210, performing image pre-segmentation on the first target remote sensing image to obtain a first region to be segmented and a second region to be segmented;
s220, correcting boundary lines of the first to-be-segmented area and the second to-be-segmented area according to texture features of the second target remote sensing image to obtain a first thermal area and a second thermal area.
Specifically, the remote sensing image (the first target remote sensing image) may be first pre-segmented by using a U-Net based deep learning model, which is trained by historical remote sensing image data labeled with thermal zone boundaries. And then correcting boundary lines of the first to-be-segmented area and the second to-be-segmented area according to texture features of the second target remote sensing image to obtain a first thermal area and a second thermal area.
Adopting a U-Net-based deep learning model for image pre-segmentation, and then correcting the boundary lines of the pre-segmentation result according to the texture features of the second target remote sensing image, has a remarkable effect in improving the accuracy of remote sensing image segmentation. The approach not only generates high-precision segmentation results automatically, but also adapts to complex scenes and improves the robustness and reliability of the segmentation.
For example, the preprocessed thermal image is input into a trained U-Net model, and a probability map (probability that each pixel belongs to a high temperature region) is output. A binary mask is generated through threshold segmentation (threshold value is 0.5), and is primarily divided into a first region to be segmented (high-temperature region) and a second region to be segmented (low-temperature region).
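The mask-generation step after the U-Net forward pass can be sketched as follows; the trained model itself is assumed, and only the 0.5-thresholding of its probability map into the two pre-segmentation regions is shown.

```python
import numpy as np

def pre_segment(prob_map, threshold=0.5):
    """Threshold the U-Net probability map (probability that each pixel
    belongs to the high-temperature region) into the first (high-temperature)
    and second (low-temperature) regions to be segmented."""
    first_area = prob_map >= threshold   # binary mask of the high-temperature region
    second_area = ~first_area            # complement: low-temperature region
    return first_area, second_area
```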
It will be appreciated that, according to the Stefan-Boltzmann law, the radiant energy of an object is proportional to its emissivity and to the fourth power of its temperature. Emissivity measures the radiating capability of an object's surface and ranges from 0 to 1; the higher the emissivity, the stronger the radiated energy. The emissivity of a glass curtain wall is typically low (lower than that of vegetation), which means that at the same true temperature the radiant energy of glass is lower than that of vegetation, resulting in lower radiation values in thermal infrared images. Glass structures such as building curtain walls may therefore exhibit low radiation values in thermal infrared images, similar to the thermal characteristics of vegetation. To address the misjudgment of glass curtain walls caused by this similarity between their heat radiation characteristics and those of vegetation, the embodiment of the application introduces the visible light remote sensing image to correct the boundary line between the first and second to-be-segmented areas, further improving the accuracy of segmenting the first target remote sensing image.
In an embodiment, the step S220 of correcting the boundary lines of the first to-be-segmented region and the second to-be-segmented region according to the texture features of the second target remote sensing image to obtain a first thermal region and a second thermal region includes:
S221, obtaining texture feature change gradients at boundary lines of the first to-be-segmented area and the second to-be-segmented area;
and S222, under the condition that the gradient of the texture feature change is smaller than a gradient threshold value, moving and correcting the boundary line of the first to-be-segmented area and the second to-be-segmented area towards the direction of the second to-be-segmented area to obtain a first thermal area and a second thermal area.
Specifically, the texture-feature change gradient at the boundary line between the first and second to-be-segmented areas is obtained first. When this gradient is smaller than the gradient threshold, the boundary is indicated to lie on a glass feature (the texture of a glass surface is uniform, so its texture-feature change gradient is small). In that case, the boundary line between the two areas is moved towards the second to-be-segmented area, i.e. the range of the second (low-temperature) area is narrowed, so that curtain wall glass is separated from the potential vegetation area and the segmentation accuracy of the first target remote sensing image is improved. While the boundary line is being moved, the texture-feature change gradient at the current boundary line is detected in real time; when that gradient equals a preset value or meets a preset condition (for example, approaches the preset value), the movement stops and the current boundary line is taken as the boundary between the first thermal area and the second thermal area.
For example, as shown in fig. 4, S1 is the first to-be-segmented area and S2 is the second to-be-segmented area. Suppose the preliminary pre-segmentation boundary line is L1, and the visible-light texture features indicate that L1 lies on curtain wall glass, i.e. the second to-be-segmented area S2 contains a curtain-wall-glass region. That region must then be removed from S2, so the boundary line is moved towards the second to-be-segmented area. When the boundary reaches position L2, the visible-light texture features at the boundary satisfy the predetermined requirement, and L2 is the more accurate segmentation boundary, ensuring that the second to-be-segmented area contains only vegetation or water.
Step S300, analyzing the variation amplitude of the heat radiation intensity of the second thermal area over different time periods, and segmenting the second thermal area according to that variation amplitude to obtain a second main thermal area and a second secondary thermal area, where the variation amplitude of the heat radiation intensity of the second main thermal area is greater than that of the second secondary thermal area;
It will be appreciated that after the first thermal zone and the second thermal zone are separated, the second thermal zone may be a vegetation zone or a water zone because the heat radiation intensity of both the vegetation zone and the water zone is low. Based on the above, in the embodiment of the application, the second thermal area is further subjected to image segmentation according to the change amplitude of the heat radiation intensity to obtain the second main thermal area and the second secondary thermal area.
It should be noted that, since the specific heat capacity of water is relatively large, the temperature change is relatively slow, that is, the variation amplitude of the heat radiation intensity of water is relatively small compared with that of vegetation. Therefore, in the embodiment of the application, the second thermal area is subjected to image segmentation according to the change amplitude of the heat radiation intensity to obtain the second main thermal area and the second secondary thermal area, the second main thermal area with larger change amplitude of the heat radiation intensity is a vegetation area, and the second secondary thermal area with smaller change amplitude of the heat radiation intensity is a water area.
Specifically, the embodiment of the application may first collect heat radiation intensity data of the second thermal area for different time periods (e.g., multiple time points within a day, or the same time period on several consecutive days). These data may be obtained from the historical or real-time data of the remote sensing platform. The variation amplitude of the heat radiation intensity of the second thermal area over these time periods is then analyzed; it may be measured by calculating the difference, or the percentage change, in heat radiation intensity between adjacent time periods, for example by computing the average intensity of each period and then the difference or percentage change between the averages of adjacent periods. Segmentation of the second thermal area by variation amplitude may then be achieved by setting an amplitude threshold: regions whose variation amplitude exceeds the threshold form the second main thermal area, and regions whose amplitude is less than or equal to the threshold form the second secondary thermal area.
Therefore, through the steps, the image of the second thermal area can be effectively segmented according to the change amplitude of the thermal radiation intensity, so that the vegetation coverage area can be accurately identified.
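Step S300 can be sketched as follows, taking the max-min range over time as the variation amplitude (one of the measures described above); the amplitude threshold is an illustrative input.

```python
import numpy as np

def split_by_amplitude(thermal_series, amp_threshold):
    """thermal_series: (T, H, W) heat radiation intensity of the second
    thermal area over T time periods. Pixels whose temporal variation
    amplitude exceeds the threshold form the second main thermal area
    (vegetation-like); the rest form the second secondary thermal area
    (water-like, damped by water's large specific heat capacity)."""
    amplitude = thermal_series.max(axis=0) - thermal_series.min(axis=0)
    main = amplitude > amp_threshold
    return main, ~main
```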
Step S400, inputting the coverage area of the second main thermal area into a pre-established urban model network for area matching verification, where the urban model network comprises building and water area coverage information;
Step S500, judging the second main thermal area to be a vegetation coverage area if the area matching verification passes.
In an embodiment, step S400, inputting the coverage area of the second main thermal area into a pre-established urban model network for area matching verification, includes: calculating the overlapping area between the coverage area of the second main thermal area and the coverage areas of buildings and water in the urban model network; determining a confidence according to the overlapping area and the coverage area of the second main thermal area; and, when the confidence is greater than or equal to a first confidence threshold, determining that the area matching verification passes and judging the second main thermal area to be a vegetation coverage area.
The urban model network containing building and water area coverage information can be established in advance, before the area matching verification is performed. It can be obtained by integrating multiple data sources such as remote sensing images, Geographic Information System (GIS) data and urban planning data. The spatial resolution and coordinate system of the urban model network are matched to the coverage of the second main thermal area so that the area matching verification is accurate. The urban model network comprises at least building and water area coverage information.
During the area matching verification, spatial overlay analysis is performed on the coverage area of the second main thermal area and the urban model network. First, the overlapping area with the coverage areas of buildings and water in the urban model network is calculated; this spatial overlay analysis can be realized through spatial analysis tools in GIS software, such as intersection and union operations. The confidence is then calculated according to the overlapping area and the coverage area of the second main thermal area. The confidence is defined as a function of the ratio of the overlapping area to the coverage area of the second main thermal area, or is measured using other suitable statistical indicators. A first confidence threshold is set for judging whether the area matching verification passes; this threshold may be set empirically based on actual demand and historical data. If the calculated confidence is greater than or equal to the first confidence threshold, the area matching verification is determined to pass, and the second main thermal area is judged to be a vegetation coverage area. This is because a passing second main thermal area has only a small overlap with the coverage of buildings and water in the urban model network, indicating that the area is more likely to be vegetation coverage. If the confidence is less than the first confidence threshold, the area matching verification fails. In this case, further inspection and analysis may be performed on the second main thermal area, such as collecting more remote sensing data, adjusting the urban model network, or triggering a manual verification signal for the low-confidence vegetation coverage area, where the manual verification signal is used to remind the user to perform manual detection and identification, so as to determine the coverage type of the area more accurately.
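On rasterized data, the overlap calculation and confidence check described above can be sketched as follows. The embodiment does not fix the exact confidence function, so this sketch assumes confidence = 1 - (overlap area / zone area); the masks and the 0.8 threshold are invented for illustration.

```python
import numpy as np

def match_confidence(zone_mask, blocked_mask):
    """zone_mask: boolean raster of the second main thermal area;
    blocked_mask: boolean raster of building + water coverage from the
    urban model network. Confidence is assumed here to be
    1 - (overlap area / zone area); other definitions are possible."""
    overlap = np.logical_and(zone_mask, blocked_mask).sum()
    return 1.0 - overlap / zone_mask.sum()

# Candidate zone: a 6x6 block (36 pixels)
zone = np.zeros((10, 10), dtype=bool)
zone[2:8, 2:8] = True
# Building/water mask overlapping the zone on a 2x2 corner (4 pixels)
blocked = np.zeros((10, 10), dtype=bool)
blocked[6:10, 6:10] = True

conf = match_confidence(zone, blocked)   # 1 - 4/36
passed = conf >= 0.8                     # illustrative first confidence threshold
```

With no building or water overlap at all, the confidence is 1.0 and the check trivially passes; as the blocked fraction grows, the confidence falls toward 0.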
In other embodiments, the urban model network may be dynamically updated by accessing a real-time change data stream of the Geographic Information System (GIS): when the GIS is updated, the urban model network is automatically updated in real time to mark newly built buildings or water areas, and when the conflict area between the model network and the remote sensing image exceeds a threshold, an update request for the urban model network is triggered to guard against automatic update failures. In this way, the efficiency of the area matching verification can be improved.
In another embodiment, to further improve the accuracy of the vegetation coverage determination, the method further comprises, before the second main thermal area is judged to be a vegetation coverage area: performing sub-pixel decomposition on the mixed pixels in the second main thermal area and extracting the component proportions by using an end-member spectral library; if the vegetation component proportion exceeds a preset ratio, retaining the area as a vegetation coverage area, and otherwise marking it as an area to be verified; and performing a secondary judgment on the area to be verified through the second target remote sensing image.
It will be appreciated that the second main thermal area denotes the area where the thermal radiation intensity varies widely, and this feature is present not only in vegetation areas but also in bare soil areas (exposed areas not covered by vegetation). Therefore, in the embodiment of the application, before the second main thermal area is judged to be a vegetation coverage area, sub-pixel decomposition is first performed on the mixed pixels in the second main thermal area and the component proportions are extracted by using the end-member spectral library; if the vegetation component proportion exceeds the preset ratio, the area is retained as a vegetation coverage area, otherwise it is marked as an area to be verified, and a secondary judgment is then performed on the area to be verified by using the second target remote sensing image.
Specifically, sub-pixel decomposition is first performed on the mixed pixels (i.e., pixels that simultaneously contain multiple surface coverage types) within the second main thermal area. Sub-pixel decomposition is a technique for decomposing a mixed pixel into smaller components, each representing a single surface coverage type. The decomposition is carried out using an end-member spectral library, which contains the spectral features of various surface coverage types (e.g., vegetation, water, soil, buildings). The proportion of each component in the mixed pixel can be estimated by comparing the spectral features of the mixed pixel with those in the end-member spectral library. The vegetation component proportion is then extracted on the basis of the sub-pixel decomposition; this can be achieved by calculating the contribution of the vegetation spectral features to the mixed-pixel spectrum. A preset ratio is set for judging whether the vegetation component is dominant in the mixed pixel; this preset ratio can be set empirically based on actual demand and historical data. If the vegetation component proportion exceeds the preset ratio, the area is retained as a vegetation coverage area, because the primary surface coverage type of the area is vegetation. If the vegetation component proportion does not exceed the preset ratio, the area is marked as an area to be verified, because it may contain other surface coverage types or its vegetation coverage may not be dense enough, requiring a further decision. Finally, a secondary judgment is performed on the area to be verified by using the second target remote sensing image, which may come from a different data source, a different acquisition time, or a different spectral band than the initial remote sensing image.
The surface coverage type of the area to be verified is further judged by analyzing information such as the spectral features and texture features in the second target remote sensing image. The area to be verified is finally classified according to the result of the secondary judgment as either a vegetation coverage area or another surface coverage type.
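As one illustration of the sub-pixel decomposition step, the sketch below unmixes a pixel against a small hypothetical end-member library using unconstrained least squares followed by a clip-and-renormalise step. The embodiment does not name a specific unmixing algorithm (fully constrained least squares is a common alternative), and the spectra and the 0.5 preset ratio are invented for the example.

```python
import numpy as np

# Hypothetical 4-band end-member spectra (columns: vegetation, soil, building)
endmembers = np.array([
    [0.05, 0.20, 0.30],   # red
    [0.50, 0.25, 0.40],   # near-infrared
    [0.08, 0.22, 0.28],   # green
    [0.04, 0.18, 0.25],   # blue
])

def unmix(pixel, E):
    """Estimate component fractions by unconstrained least squares, then
    clip negatives and renormalise so the fractions sum to 1 (a simple
    stand-in for fully constrained spectral unmixing)."""
    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# Mixed pixel simulated as 70 % vegetation + 30 % soil
pixel = 0.7 * endmembers[:, 0] + 0.3 * endmembers[:, 1]
fractions = unmix(pixel, endmembers)
is_vegetation = fractions[0] > 0.5   # illustrative preset vegetation ratio
```

Because the simulated pixel is an exact mixture and the library columns are linearly independent, the least-squares solution recovers the 0.7/0.3 split; real pixels carry noise, which is why the preset-ratio check and the secondary judgment exist.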
Thus, through the above steps, the vegetation coverage area within the second main thermal area can be determined more accurately. The sub-pixel decomposition and the end-member spectral library improve the accuracy of mixed-pixel decomposition, and the secondary judgment further reduces the possibility of misjudgment, thereby improving the accuracy and reliability of vegetation coverage identification.
In an embodiment, the secondary judgment of the area to be verified through the second target remote sensing image comprises: performing multispectral analysis on the visible light remote sensing image of the area to be verified, and extracting the red-band reflectivity Rred and the near-infrared-band reflectivity Rnir; and inputting the red-band reflectivity Rred and the near-infrared-band reflectivity Rnir into a vegetation index model to obtain a vegetation index value N, wherein the vegetation index model satisfies the following expression: N = (Rnir - Rred) / (Rnir + Rred).
If the vegetation index value N ≥ T, the area to be verified is marked as a candidate vegetation area; otherwise it is marked as a non-vegetation area, where T is a dynamic threshold set to 0.4 in summer and 0.3 in winter.
It can be appreciated that chlorophyll in vegetation strongly absorbs in the red band (Rred), while the mesophyll cell structure is highly reflective in the near-infrared band (Rnir), so that the reflectivity difference of vegetation between these two bands is significant. Through the design of a normalized vegetation index model, the vegetation coverage situation can be quantitatively characterized: the larger the vegetation index value N, the higher the likelihood of vegetation coverage.
It is noted that vegetation growth is affected by the seasons. In summer, vegetation is luxuriant and the difference between red-light absorption and near-infrared reflection is more pronounced, so a higher threshold (0.4) is used for screening; in winter, vegetation withers and its spectral features weaken, so the threshold is lowered (0.3) to adapt to the seasonal change, thereby improving identification accuracy. In other seasons, such as spring and autumn, the threshold can be set according to the actual situation.
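The vegetation index computation and seasonal thresholding described above can be sketched as follows. The summer (0.4) and winter (0.3) thresholds come from the embodiment; the 0.35 fallback for spring and autumn is an assumption, since the embodiment leaves those thresholds open.

```python
def ndvi(r_red, r_nir):
    """Normalized difference vegetation index: N = (Rnir - Rred) / (Rnir + Rred)."""
    return (r_nir - r_red) / (r_nir + r_red)

# Dynamic threshold T from the embodiment; the spring/autumn value is assumed
SEASON_THRESHOLD = {"summer": 0.4, "winter": 0.3}

def classify(r_red, r_nir, season="summer"):
    """Mark the area to be verified as candidate vegetation when N >= T."""
    n = ndvi(r_red, r_nir)
    t = SEASON_THRESHOLD.get(season, 0.35)
    return "candidate vegetation" if n >= t else "non-vegetation"

# Healthy vegetation: low red reflectance, high near-infrared reflectance
label = classify(r_red=0.08, r_nir=0.45, season="summer")
# N = (0.45 - 0.08) / (0.45 + 0.08) ≈ 0.70, above the summer threshold
```

A surface with nearly flat red/NIR reflectance (e.g. concrete) yields N close to 0 and is rejected under either seasonal threshold.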
Based on the above, the urban remote sensing image vegetation coverage recognition method based on deep learning provided by the embodiments of the application first acquires the thermal remote sensing image and the visible light remote sensing image of the urban target area, and accurately segments the thermal image by utilizing the features of the visible light image to distinguish high-radiation areas (buildings or roads) from low-radiation areas (potential vegetation areas). Through multi-temporal thermal radiation change analysis, a specific thermal area with obvious thermal radiation dynamics is then further screened out of the low-radiation area. Finally, area matching verification is performed between the coverage area of the specific thermal area and the building and water coverage information in the urban model network, so that the vegetation coverage area is accurately determined. In this way, the problem of low accuracy when judging and identifying from a single data source in the prior art is effectively solved through multi-source data complementarity, dynamic feature mining, and spatial constraint verification; the accuracy of vegetation coverage identification is improved, and powerful support is provided for urban ecological environment monitoring and management.
As shown in fig. 5, fig. 5 is a schematic hardware structure diagram of a vegetation coverage recognition system according to some embodiments of the present application, where the vegetation coverage recognition system provided by the embodiments of the present application includes a memory 1000 and a processor 2000, where the memory 1000 is configured to store computer readable instructions, and the processor 2000 is configured to invoke the computer readable instructions to execute the urban remote sensing image vegetation coverage recognition method based on deep learning as described above.
The processor 2000 is configured to provide computing and control capabilities to control the vegetation coverage recognition system to perform corresponding tasks, for example, to perform the urban remote sensing image vegetation coverage recognition method based on deep learning in any one of the method embodiments, the method comprising: respectively acquiring a thermal remote sensing image and a visible light remote sensing image of an urban target area to obtain a first target remote sensing image and a second target remote sensing image; performing image segmentation on the first target remote sensing image according to the second target remote sensing image to obtain a first thermal area and a second thermal area, wherein the thermal radiation intensity of the first thermal area is greater than that of the second thermal area; analyzing the variation amplitude of the thermal radiation intensity of the second thermal area over different time periods, and performing image segmentation on the second thermal area according to the variation amplitude to obtain a second main thermal area and a second secondary thermal area, wherein the variation amplitude of the thermal radiation intensity of the second main thermal area is greater than that of the second secondary thermal area; inputting the coverage area of the second main thermal area into a pre-established urban model network for area matching verification, wherein the urban model network comprises building and water area coverage information; and judging the second main thermal area to be a vegetation coverage area when the area matching verification passes.
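The end-to-end method recited above can be outlined, under simplifying assumptions, as a raster pipeline. The thresholds and synthetic data are illustrative, max-minus-min is only one plausible choice for the "variation amplitude", and the visible-light boundary refinement of the segmentation step is omitted for brevity.

```python
import numpy as np

def identify_vegetation(thermal_stack, city_mask,
                        intensity_thr, change_thr, conf_thr=0.9):
    """Sketch of the claimed pipeline on rasters (all thresholds illustrative).
    thermal_stack: (T, H, W) thermal radiation intensities over T time periods;
    city_mask: boolean building/water coverage from the urban model network."""
    # Segmentation step: the low-radiation zone is the second thermal area
    second_thermal = thermal_stack.mean(axis=0) < intensity_thr
    # Change analysis: keep pixels whose radiation varies strongly over time
    amplitude = thermal_stack.max(axis=0) - thermal_stack.min(axis=0)
    second_main = second_thermal & (amplitude > change_thr)
    # Area matching verification against buildings/water
    area = second_main.sum()
    overlap = (second_main & city_mask).sum()
    confidence = 1.0 - overlap / area if area else 0.0
    return second_main if confidence >= conf_thr else None

# Synthetic scene: left half hot and static, right half cool but varying
stack = np.zeros((2, 4, 4))
stack[:, :, :2] = 10.0                        # hot, no temporal change
stack[0, :, 2:], stack[1, :, 2:] = 1.0, 4.0   # cool, amplitude 3
result = identify_vegetation(stack, np.zeros((4, 4), dtype=bool),
                             intensity_thr=5.0, change_thr=2.0)
```

On this synthetic scene the right half is both low-radiation and strongly varying, conflicts with no building or water coverage, and is therefore returned as the vegetation mask.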
The processor 2000 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a hardware chip, or any combination thereof; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory 1000 is used as a non-transitory computer readable storage medium, and can be used to store a non-transitory software program, a non-transitory computer executable program, and a module, such as a program instruction/module corresponding to the method for identifying vegetation coverage of urban remote sensing images based on deep learning in the embodiment of the application. The processor 2000 may implement the urban remote sensing image vegetation coverage recognition method based on deep learning in any of the above method embodiments by running non-transitory software programs, instructions and modules stored in the memory 1000.
Specifically, the memory 1000 may include volatile memory (VM), such as random access memory (RAM); it may also include non-volatile memory (NVM), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or another non-transitory solid-state storage device; and it may also include a combination of the above types of memory.
In summary, the vegetation coverage recognition system of the present application adopts the technical scheme of any one of the above embodiments of the urban remote sensing image vegetation coverage recognition method based on deep learning, and therefore has at least the corresponding beneficial effects, which are not repeated here.
The embodiment of the application also provides a computer readable storage medium, such as a memory comprising program code, where the program code can be executed by a processor to complete the urban remote sensing image vegetation coverage recognition method based on deep learning in the above embodiments. For example, the computer readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Embodiments of the present application also provide a computer program product comprising one or more program codes stored in a computer-readable storage medium. The processor of the vegetation coverage recognition system reads the program code from the computer readable storage medium, and the processor executes the program code to complete the steps of the urban remote sensing image vegetation coverage recognition method based on deep learning provided in the above embodiment.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
It should be noted that the above-described apparatus embodiments are merely illustrative: the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program may include processes implementing the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random-access Memory (Random Access Memory, RAM), or the like.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structural changes made by the description of the present invention and the accompanying drawings or direct/indirect application in other related technical fields are included in the scope of the invention.
Claims (10)
1. The urban remote sensing image vegetation coverage identification method based on deep learning is characterized by comprising the following steps of:
Respectively acquiring a thermal remote sensing image and a visible light remote sensing image of an urban target area to obtain a first target remote sensing image and a second target remote sensing image;
image segmentation is carried out on the first target remote sensing image according to the second target remote sensing image to obtain a first thermal area and a second thermal area, wherein the heat radiation intensity of the first thermal area is larger than that of the second thermal area;
Analyzing the heat radiation intensity variation amplitude of the second thermal area in different time periods, and carrying out image segmentation on the second thermal area according to the heat radiation intensity variation amplitude to obtain a second main thermal area and a second secondary thermal area, wherein the heat radiation intensity variation amplitude of the second main thermal area is larger than that of the second secondary thermal area;
inputting the coverage area of the second main thermal area into a pre-established urban model network for area matching verification, wherein the urban model network comprises building and water area coverage information;
and judging the second main thermal area to be a vegetation coverage area under the condition that the area matching verification is passed.
2. The method for identifying vegetation coverage of urban remote sensing images based on deep learning as set forth in claim 1, wherein the image segmentation of the first target remote sensing image according to the second target remote sensing image to obtain a first thermal area and a second thermal area comprises:
Performing image pre-segmentation on the first target remote sensing image to obtain a first to-be-segmented area and a second to-be-segmented area;
And correcting boundary lines of the first to-be-segmented area and the second to-be-segmented area according to the texture features of the second target remote sensing image to obtain a first thermal area and a second thermal area.
3. The method for identifying vegetation coverage of urban remote sensing images based on deep learning according to claim 2, wherein the image pre-segmentation adopts a deep learning model based on U-Net, and the deep learning model is trained through historical remote sensing image data marked with thermal area boundaries.
4. The method of claim 2, wherein the correcting the boundary line between the first region to be segmented and the second region to be segmented according to the texture feature of the second target remote sensing image to obtain the first thermal region and the second thermal region comprises:
obtaining texture feature variation gradients at boundary lines of the first region to be segmented and the second region to be segmented;
and under the condition that the gradient of the texture feature change is smaller than a gradient threshold value, moving and correcting the boundary line of the first to-be-segmented area and the second to-be-segmented area towards the direction of the second to-be-segmented area to obtain a first thermal area and a second thermal area.
5. The method of claim 4, wherein moving the boundary lines of the first and second regions to be segmented toward the second region to be segmented to obtain the first and second thermal regions comprises:
Detecting the texture feature change gradient at the current boundary line in real time in the process of moving the boundary line;
And stopping the movement of the boundary line and taking the current latest boundary line as the boundary of the first thermal area and the second thermal area under the condition that the texture feature change gradient at the current boundary line is equal to a preset value or meets a preset condition.
6. The deep learning-based urban remote sensing image vegetation coverage recognition method of claim 1, further comprising, prior to judging the second main thermal area as a vegetation coverage area:
performing sub-pixel decomposition on the mixed pixels in the second main thermal area, and extracting the component proportions by using an end-member spectral library;
if the vegetation component proportion exceeds a preset ratio, retaining the area as a vegetation coverage area, and otherwise marking it as an area to be verified;
and performing a secondary judgment on the area to be verified through the second target remote sensing image.
7. The deep learning-based urban remote sensing image vegetation coverage recognition method of claim 6, wherein the performing of the secondary judgment on the area to be verified through the second target remote sensing image comprises:
performing multispectral analysis on the visible light remote sensing image of the area to be verified, and extracting the red-band reflectivity Rred and the near-infrared-band reflectivity Rnir;
inputting the red-band reflectivity Rred and the near-infrared-band reflectivity Rnir into a vegetation index model to obtain a vegetation index value N, wherein the vegetation index model satisfies the following expression: N = (Rnir - Rred) / (Rnir + Rred);
and if the vegetation index value N ≥ T, marking the area to be verified as a candidate vegetation area, and otherwise marking it as a non-vegetation area, where T is a dynamic threshold set to 0.4 in summer and 0.3 in winter.
8. The method for identifying vegetation coverage of urban remote sensing images based on deep learning as claimed in claim 1, wherein the step of inputting the coverage of the second main thermal area into a pre-established urban model network for area matching verification comprises the steps of:
calculating the overlapping area between the coverage area of the second main thermal area and the coverage areas of buildings and water in the urban model network;
determining a confidence according to the overlapping area and the coverage area of the second main thermal area;
and under the condition that the confidence is greater than or equal to a first confidence threshold, determining that the area matching verification passes, and judging the second main thermal area to be a vegetation coverage area.
9. The deep learning-based urban remote sensing image vegetation coverage recognition method of claim 8, further comprising, after determining the confidence according to the overlapping area and the coverage area of the second main thermal area:
determining that the area matching verification fails when the confidence is greater than or equal to a second confidence threshold and less than the first confidence threshold, and judging the second main thermal area to be a low-confidence vegetation coverage area, wherein the second confidence threshold is less than the first confidence threshold;
Triggering a manual check signal for the low-credibility vegetation coverage area, wherein the manual check signal is used for reminding a user to perform manual detection and identification.
10. A vegetation coverage identification system comprising a memory for storing program code and a processor for invoking the program code to perform the method of any of claims 1 to 9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510407671.8A CN119919822B (en) | 2025-04-02 | 2025-04-02 | Urban remote sensing image vegetation cover recognition method and system based on deep learning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119919822A true CN119919822A (en) | 2025-05-02 |
| CN119919822B CN119919822B (en) | 2025-06-27 |
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510407671.8A Active CN119919822B (en) | 2025-04-02 | 2025-04-02 | Urban remote sensing image vegetation cover recognition method and system based on deep learning |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119919822B (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107389036A (en) * | 2017-08-02 | 2017-11-24 | 珠江水利委员会珠江水利科学研究院 | A kind of large spatial scale vegetation coverage computational methods of combination unmanned plane image |
| CN109101955A (en) * | 2018-09-12 | 2018-12-28 | 北京英视睿达科技有限公司 | Industrial heat anomaly area recognizing method based on Multi-sensor satellite remote sensing |
| KR102136893B1 (en) * | 2019-12-16 | 2020-07-22 | 대한민국 | Apparatus for analyzing heat distribution change of downtown according to appeasement policy of urban heat island effect and method thereof |
| CN114612804A (en) * | 2022-01-28 | 2022-06-10 | 广东省科学院广州地理研究所 | Vegetation detection method, device and equipment based on UAV remote sensing image |
| CN119128743A (en) * | 2024-08-15 | 2024-12-13 | 中国农业科学院农田灌溉研究所 | A decision-making method and system for multi-source information fusion |
Non-Patent Citations (2)
| Title |
|---|
| XUE, W: "Retrieval of Vegetation Indices and Vegetation Fraction in Highly Compact Urban Areas: A 3D Radiative Transfer Approach", REMOTE SENS, 3 January 2025 (2025-01-03), pages 1 - 23 * |
| YUAN, Hao: "Remote-sensing-supported construction and application of a non-photosynthetic vegetation coverage extraction model", China Master's Theses Full-text Database, Information Science and Technology, 15 March 2025 (2025-03-15), pages 1 - 80 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||