
CN119992238A - Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion - Google Patents

Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion Download PDF

Info

Publication number
CN119992238A
CN119992238A
Authority
CN
China
Prior art keywords
target
detection
branch
hyperspectral
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510472960.6A
Other languages
Chinese (zh)
Other versions
CN119992238B (en)
Inventor
廉鹏飞
林渤然
王辉
黄宇轩
刘奎
邱源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai aerospace computer technology research institute
East China Normal University
Original Assignee
Shanghai aerospace computer technology research institute
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai aerospace computer technology research institute, East China Normal University filed Critical Shanghai aerospace computer technology research institute
Priority to CN202510472960.6A priority Critical patent/CN119992238B/en
Publication of CN119992238A publication Critical patent/CN119992238A/en
Application granted granted Critical
Publication of CN119992238B publication Critical patent/CN119992238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Analysis (AREA)

Abstract


The present invention provides a method and system for hyperspectral target detection and identification based on the fusion of semantic and spatial-spectral features, including: step 1: extracting abnormal targets in hyperspectral images with an unsupervised anomaly detection algorithm, performing rapid coarse detection of targets through a constrained energy minimization operator, and eliminating false alarms based on inter-frame motion characteristic comparison; step 2: using a dual-stream convolutional neural network to perform fine detection on the coarse detection results, obtaining the spatial-spectral feature information of the target, and predicting the confidence interval range of the target with a cubic long short-term memory network to realize dynamic tracking; step 3: using a support vector machine trained on historical spectra to classify the detected targets and identify the category to which they belong. Through a multi-level target detection design, the present invention performs high-confidence detection of different types of targets in the absence of a reference target spectrum, providing an effective approach for space-based detection and early warning.

Description

Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion
Technical Field
The invention relates to the field of hyperspectral remote sensing image target detection methods and systems, and in particular to a hyperspectral target detection and identification method and system based on semantic and spatial-spectral feature fusion.
Background
A hyperspectral remote sensing image records the geometric, radiometric and spectral information of a scene as a three-dimensional data cube. By finely subdividing the spectrum, hyperspectral technology obtains the spectral curve of every pixel; different substances have different characteristic spectral lines, which serve as their fingerprints. Hyperspectral detection techniques therefore enable refined identification of terrestrial, marine and aerial targets.
However, the data volume of an entire hyperspectral image scene is huge, and the image can only be exploited after being downlinked to the ground and undergoing a large amount of manual processing, so information about motion-sensitive targets such as ships and airplanes cannot be obtained in real time. There is therefore a need for a hyperspectral target detection and identification method, system and electronic device based on semantic and spatial-spectral feature fusion that tracks, detects and classifies such targets with high confidence.
Patent application CN116958807A discloses a hyperspectral image target detection method based on momentum contrastive learning and a Transformer network. It designs a Transformer-based encoder and a momentum encoder for extracting spectral features in the hyperspectral target detection task. To capture the long-range dependence and self-similarity of the spectrum without ignoring local spectral detail, the spectral feature extraction encoder and the momentum encoder attend to local detail through overlapped spectral-block feature mapping and interaction-token feed-forward layers. Spectral discrimination is then learned in an unsupervised momentum contrastive manner, in which a queue and a slowly momentum-updated encoder provide a sufficient number of consistent negative sample features to help the model learn better representations. Finally, the detection result obtained from two cosine-similarity computations is nonlinearly stretched by exponential and normalization operations to suppress the background. However, this patent cannot completely solve the existing technical problems and cannot meet the needs of the present invention.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a hyperspectral target detection and identification method and system based on semantic and spatial-spectral feature fusion.
The hyperspectral target detection and identification method based on semantic and spatial-spectral feature fusion provided by the invention comprises the following steps:
Step 1, extracting an abnormal target in a hyperspectral image based on an unsupervised anomaly detection algorithm, performing rapid coarse detection of the target through a constrained energy minimization operator, and eliminating false alarms based on inter-frame motion characteristic comparison;
Step 2, performing fine detection on the coarse detection result by adopting a dual-stream convolutional neural network, acquiring the spatial-spectral feature information of the target, and predicting the confidence interval range of the target with a cubic long short-term memory network to realize dynamic tracking;
Step 3, classifying the detected targets by using a support vector machine trained on historical spectra, and judging the category to which they belong.
Preferably, the unsupervised anomaly detection algorithm in step 1 adopts an RX detection operator, whose expression is:
δ_RX(x) = (x − μ)ᵀ Σ⁻¹ (x − μ)
where x is any pixel vector in the image, μ is the sample mean vector, and Σ is the sample covariance matrix of the image;
false alarms are eliminated by computing the energy distribution of the target along the directions of the small eigenvalues of the covariance matrix and applying a threshold on the target motion distance between adjacent frames.
Preferably, the dual-stream convolutional neural network in step 2 includes an upper branch and a lower branch, each branch has one input, 9 convolutional layers are used in each branch to extract the rich spectral information of the input pixel, and the convolution operation is implemented with one-dimensional convolutional layers;
convolutional layers with a kernel stride of 2 replace the pooling layers of the network so that spectral features are preserved to the greatest extent; all features extracted by the stride-2 convolutional layers are added, through different average pooling layers, to the features extracted by the last layer, and the final feature of each branch is then obtained through an AVG pooling layer and a fully connected layer;
in the dual-stream convolutional neural network, let d denote a target prior pixel, t a target pixel and b a background pixel; according to the training sample construction, the input of the upper branch is always d, and the label of a training sample is 1 when the input of the lower branch is t and 0 when it is b; the final features of the two branches, obtained through multiple convolution operations, pooling operations and one fully connected operation, are recorded as F1 and F2 and then combined as F = [F1, F2];
finally, the output of the dual-stream convolutional neural network is obtained through the last fully connected layer and a Sigmoid function.
Preferably, the cubic long short-term memory network in step 2 consists of a spatial branch, a temporal branch and an output branch; its input includes the longitude and latitude, speed, acceleration and historical track of the target, and its output is the predicted target position at the next moment; the target motion distance between adjacent frames is d = V·cosθ / (r·f), and the confidence interval is defined as the 10-pixel neighborhood of the position in the previous frame, where V is the target speed, θ is the orbital tilt angle, r is the spatial resolution of the video hyperspectral camera imaging, and f is the video frame rate.
Preferably, the support vector machine in step 3 adopts a soft-margin optimization model and trains a classifier on historical spectral data to distinguish aircraft, ships and other target categories, with the loss function:
min_{w,b} (1/2)·‖w‖² + C·Σ_{i=1}^{N} max(0, 1 − y_i(wᵀx_i + b))
where w is the weight vector, b is the bias term, C is the regularization parameter, and x_i and y_i are the feature vector and the corresponding label of the i-th sample, respectively.
The hyperspectral target detection and identification system based on semantic and spatial-spectral feature fusion provided by the invention comprises:
The coarse detection module is used for extracting an abnormal target in the hyperspectral image based on an unsupervised abnormal detection algorithm, carrying out rapid coarse detection on the target through a constraint energy minimization operator, and eliminating false alarms based on inter-frame motion characteristic comparison;
The fine detection module is used for carrying out fine detection on the coarse detection result by adopting a double-flow convolutional neural network, acquiring the space-spectrum characteristic information of the target, and combining with a cubic long-short-term memory network to predict the confidence interval range of the target so as to realize dynamic tracking;
and the classification module is used for classifying the detection targets by using a support vector machine based on historical spectrum training and judging the category to which the detection targets belong.
Preferably, the unsupervised anomaly detection algorithm in the coarse detection module adopts an RX detection operator, whose expression is:
δ_RX(x) = (x − μ)ᵀ Σ⁻¹ (x − μ)
where x is any pixel vector in the image, μ is the sample mean vector, and Σ is the sample covariance matrix of the image;
false alarms are eliminated by computing the energy distribution of the target along the directions of the small eigenvalues of the covariance matrix and applying a threshold on the target motion distance between adjacent frames.
Preferably, the dual-stream convolutional neural network in the fine detection module includes an upper branch and a lower branch, each branch has one input, 9 convolutional layers are used in each branch to extract the rich spectral information of the input pixel, and the convolution operation is implemented with one-dimensional convolutional layers;
convolutional layers with a kernel stride of 2 replace the pooling layers of the network so that spectral features are preserved to the greatest extent; all features extracted by the stride-2 convolutional layers are added, through different average pooling layers, to the features extracted by the last layer, and the final feature of each branch is then obtained through an AVG pooling layer and a fully connected layer;
in the dual-stream convolutional neural network, let d denote a target prior pixel, t a target pixel and b a background pixel; according to the training sample construction, the input of the upper branch is always d, and the label of a training sample is 1 when the input of the lower branch is t and 0 when it is b; the final features of the two branches, obtained through multiple convolution operations, pooling operations and one fully connected operation, are recorded as F1 and F2 and then combined as F = [F1, F2];
finally, the output of the dual-stream convolutional neural network is obtained through the last fully connected layer and a Sigmoid function.
Preferably, the cubic long short-term memory network of the fine detection module consists of a spatial branch, a temporal branch and an output branch; its input includes the longitude and latitude, speed, acceleration and historical track of the target, and its output is the predicted target position at the next moment; the target motion distance between adjacent frames is d = V·cosθ / (r·f), and the confidence interval is defined as the 10-pixel neighborhood of the position in the previous frame, where V is the target speed, θ is the orbital tilt angle, r is the spatial resolution of the video hyperspectral camera imaging, and f is the video frame rate.
Preferably, the support vector machine in the classification module adopts a soft-margin optimization model and trains a classifier on historical spectral data to distinguish aircraft, ships and other target categories, with the loss function:
min_{w,b} (1/2)·‖w‖² + C·Σ_{i=1}^{N} max(0, 1 − y_i(wᵀx_i + b))
where w is the weight vector, b is the bias term, C is the regularization parameter, and x_i and y_i are the feature vector and the corresponding label of the i-th sample, respectively.
Compared with the prior art, the invention has the following beneficial effects:
The invention provides a hyperspectral target detection and recognition method based on semantic and spatial-spectral feature fusion. It constructs a semantic segmentation network model in the spatial domain of the hyperspectral image, designs an adaptive spatial-spectral joint optimization model, and builds a spatial-spectral joint cascade detector, thereby realizing pixel-level spatial-domain target detection in hyperspectral images, improving target detection accuracy, reducing the false alarm rate, and solving the problem that the track of a time-sensitive aerial target is difficult to capture.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a hyperspectral target detection and recognition method based on semantic and spatial-spectral feature fusion;
FIG. 2 is a flow chart of anomaly detection based on an unsupervised method of the present invention;
FIGS. 3a and 3b are a flow chart and a result chart, respectively, of the improved constrained energy minimization target detection of the present invention;
FIG. 4 is a block diagram of a dual-stream convolutional neural network of the present invention;
FIGS. 5a and 5b are a flow chart and a result chart, respectively, of target classification recognition by a support vector machine according to the present invention;
FIG. 6 is a block diagram of a hyperspectral target detection and recognition system based on semantic and spatial-spectral feature fusion according to the present invention;
Fig. 7 is a structural diagram of a hyperspectral target detection and identification electronic device based on semantic and spatial-spectral feature fusion.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
Examples
Referring to fig. 1, the present embodiment provides a hyperspectral target detection and identification method based on semantic and spatial-spectral feature fusion, which mainly includes:
step 1, extracting an abnormal target in a hyperspectral image through anomaly detection based on an unsupervised method, performing rapid coarse detection of the target through a CEM operator, and comparing inter-frame information to eliminate false alarms;
step 2, performing fine detection through a dual-stream convolutional neural network to obtain target-related information, and using a cubic long short-term memory network to determine a confidence interval range for tracking and detecting the target;
step 3, classifying the detected targets by using an SVM trained on historical spectra, and judging which category the detected targets belong to.
Further, referring to fig. 2, the present embodiment provides a flowchart based on unsupervised method anomaly detection.
The anomaly detection method based on the unsupervised approach adopts the RX detection operator, whose form is:
δ_RX(x) = (x − μ)ᵀ Σ⁻¹ (x − μ)
where x is any pixel vector in the image, μ is the sample mean vector, and Σ is the sample covariance matrix of the image. The quadratic form (x − μ)ᵀ Σ⁻¹ (x − μ) has the same form as the Mahalanobis distance, and the RX algorithm can essentially be regarded as the inverse process of principal component analysis. Principal component analysis compresses most of the meaningful image information from the original feature space into a space spanned by a few uncorrelated principal components. Objects that occur with very low probability in the image (small objects, outliers) are not contained in these principal components; instead, they are more likely to appear along the eigenvector directions corresponding to the small eigenvalues of the covariance matrix Σ. The RX algorithm computes δ_RX(x) for every pixel: if an outlier exists in the image, its energy is small and largely concentrated along the directions of the small eigenvalues of Σ, and the smaller the eigenvalue, the larger the value of δ_RX(x), so abnormal objects in the image can be detected effectively.
The RX detection operator can detect targets such as airplanes and ships in a hyperspectral image as anomalies, but other abnormal points in the image are often output as noise. Anomaly detection therefore provides coarse target detection when the target spectrum is unknown, and further fine detection is needed to eliminate noise and reduce the false alarm rate.
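As a concrete illustration of the RX coarse-detection step, the NumPy sketch below computes the RX statistic for every pixel of a hyperspectral cube; the array shapes, the use of a pseudo-inverse and the percentile threshold are illustrative assumptions rather than details given in the patent.

```python
import numpy as np

def rx_detector(cube: np.ndarray) -> np.ndarray:
    """RX anomaly detector.

    cube : hyperspectral image of shape (H, W, B) with B spectral bands.
    Returns an (H, W) map of RX scores; large values indicate anomalies.
    """
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(np.float64)      # N x B sample matrix
    mu = pixels.mean(axis=0)                             # sample mean vector
    centered = pixels - mu
    cov = np.cov(centered, rowvar=False)                 # B x B sample covariance
    cov_inv = np.linalg.pinv(cov)                        # pseudo-inverse for stability
    # Mahalanobis-like quadratic form (x - mu)^T Sigma^-1 (x - mu) per pixel
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(h, w)

if __name__ == "__main__":
    cube = np.random.rand(64, 64, 30)                    # synthetic placeholder cube
    rx_map = rx_detector(cube)
    # Keep the top 1% of scores as candidate anomalous targets (illustrative threshold).
    candidates = rx_map > np.percentile(rx_map, 99)
    print(candidates.sum(), "candidate anomalous pixels")
```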
Further, referring to FIG. 3a, the present embodiment provides a flow chart for improved constrained energy minimized target detection.
Let S = {r_1, r_2, …, r_N} be the set of all observed samples, where r_i is any sample pixel vector, N is the number of pixels, L is the number of bands of the image, and d is the spectrum of the target of interest. The purpose of CEM is to design an FIR linear filter w = (w_1, w_2, …, w_L)ᵀ that minimizes the filtered output energy under the constraint dᵀw = 1:
min_w wᵀRw  subject to  dᵀw = 1,  where R = (1/N)·Σ_{i=1}^{N} r_i r_iᵀ is the sample autocorrelation matrix.
The solution of the above problem is the CEM operator w*:
w* = R⁻¹d / (dᵀR⁻¹d)
Applying the CEM operator to each pixel of the image yields the distribution of the target d in the image, and the target is thereby detected.
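The CEM operator can be sketched in NumPy as follows; the diagonal loading term added for numerical stability is an implementation assumption, not part of the patent.

```python
import numpy as np

def cem_detector(cube: np.ndarray, target_spectrum: np.ndarray,
                 loading: float = 1e-6) -> np.ndarray:
    """Constrained energy minimization (CEM) detector.

    cube            : hyperspectral image of shape (H, W, B).
    target_spectrum : target spectrum d of shape (B,).
    Returns an (H, W) map of filter outputs w*^T r for every pixel r.
    """
    h, w, b = cube.shape
    r = cube.reshape(-1, b).astype(np.float64)           # N x B observation matrix
    d = target_spectrum.astype(np.float64)
    # Sample autocorrelation matrix R = (1/N) * sum_i r_i r_i^T
    R = r.T @ r / r.shape[0]
    R += loading * np.eye(b)                             # diagonal loading (assumption)
    R_inv_d = np.linalg.solve(R, d)
    w_star = R_inv_d / (d @ R_inv_d)                     # w* = R^-1 d / (d^T R^-1 d)
    return (r @ w_star).reshape(h, w)
```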
Let V be the target moving speed (m/s), θ the aircraft flight tilt angle, r the spatial resolution (m) of the video hyperspectral camera imaging, f the video frame rate (fps), and W the video swath width (m), as shown in Table 1.
From the above information, the motion speed of the target pixel on the image plane, v (pixel/s), can be calculated as:
v = V·cosθ / r
The motion distance of the target between adjacent frames, d (pixels), is:
d = v / f = V·cosθ / (r·f)
The time the target stays in the field of view, T (s), is:
T = W / (V·cosθ)
The number of frames during which the target stays in the field of view, N_f, is:
N_f = T·f
TABLE 1 motion characteristics of aircraft and watercraft reflected on images
It is calculated that the target remains in the field of view for at least 12 s, while the hyperspectral video satellite images at 5 frames per second, so the target appears continuously over at least 60 frames. A conservative estimate of the distance a moving target covers between two adjacent frames is less than 20 pixels. Thus, the target will typically appear in the vicinity of its position in the previous frame (within a 20-pixel neighborhood).
Therefore, in the anomaly detection of two consecutive frames, any target detected in the previous frame that no longer appears within 20 pixels of its position in the next frame can be considered a false alarm.
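A minimal sketch of this inter-frame false-alarm rejection is given below; detections are represented by their (row, col) centroids, the 20-pixel radius follows the estimate above, and the helper functions and the reconstructed d = V·cosθ/(r·f) relation are assumptions.

```python
import numpy as np

def image_plane_motion(v_mps: float, tilt_rad: float,
                       resolution_m: float, frame_rate: float) -> float:
    """Approximate target displacement (pixels) between adjacent frames,
    assuming d = V*cos(theta) / (r*f) as reconstructed from the text."""
    return v_mps * np.cos(tilt_rad) / (resolution_m * frame_rate)

def reject_false_alarms(prev_detections, curr_detections, radius_px: float = 20.0):
    """Keep only previous-frame detections that reappear within `radius_px`
    pixels in the current frame; the rest are treated as false alarms.

    prev_detections, curr_detections : arrays of (row, col) centroids, shape (K, 2).
    """
    prev = np.asarray(prev_detections, dtype=np.float64).reshape(-1, 2)
    curr = np.asarray(curr_detections, dtype=np.float64).reshape(-1, 2)
    if prev.size == 0 or curr.size == 0:
        return np.empty((0, 2))
    # Pairwise distances between previous and current detections.
    dists = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
    confirmed = dists.min(axis=1) <= radius_px
    return prev[confirmed]
```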
Further, referring to FIG. 3b, the present embodiment provides a result graph of improved constrained energy minimized target detection.
Further, referring to fig. 4, the present embodiment provides a structure diagram of a dual-flow convolutional neural network.
Before training, a mixed pixel selection strategy based on sparse representation and classification is proposed: typical background samples are selected from the hyperspectral image, and enough target samples are then generated from these typical background samples and the target prior. During training, training samples (positive samples with label 1 constructed from the target prior and a target sample, and negative samples with label 0 constructed from the target prior and a background sample) are fed into the designed dual-stream convolutional network to learn discrimination capability. During testing, test samples (consisting of the target prior and the pixel under detection) are classified by the trained dual-stream convolutional network, and the network outputs constitute the final detection result.
The dual-stream convolutional neural network comprises two branches, an upper branch and a lower branch, and each branch has one input. In each branch, 9 convolutional layers are used to extract the rich spectral information of the input pixel. The convolution operation is implemented with one-dimensional convolutional layers, each followed by a ReLU layer. Considering that pooling layers may cause loss of spectral information when the spectral dimension needs to be reduced, convolutional layers with a kernel stride of 2 are used instead of pooling layers in the network. To preserve spectral features to the maximum extent, all features extracted by the stride-2 convolutional layers are added, through different average pooling layers, to the features extracted by the last layer. The final feature of each branch is then obtained through an AVG pooling layer and a fully connected layer.
In the network of the present invention, let d denote a target prior pixel, t a target pixel and b a background pixel. Following the training sample construction above, the input of the upper branch is always d; when the input of the lower branch is t the label of the training sample is 1, and when the input of the lower branch is b the label is 0. After several convolution operations, several pooling operations and one fully connected operation, the final features of the two branches are obtained, recorded as F1 and F2, and then combined as F = [F1, F2].
Finally, the output of the network is obtained through the last full-connection layer and a Sigmoid function.
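To make the branch structure concrete, the following PyTorch sketch reflects the description above (nine 1-D convolutions per branch, stride-2 convolutions in place of pooling, average-pooled skip fusion, AVG pooling plus a fully connected layer per branch, concatenation, final fully connected layer and Sigmoid). The channel widths, kernel sizes, placement of the stride-2 layers and feature dimensions are not specified in the text and are assumptions here.

```python
import torch
import torch.nn as nn

class BranchCNN(nn.Module):
    """One branch: nine 1-D conv layers; stride-2 convs replace pooling.
    Channel widths, kernel sizes and stride placement are illustrative assumptions."""

    def __init__(self, bands: int, width: int = 32):
        super().__init__()
        strides = [1, 2, 1, 2, 1, 2, 1, 1, 1]                # 9 conv layers, 3 with stride 2
        in_ch = 1
        self.convs = nn.ModuleList()
        for s in strides:
            self.convs.append(nn.Sequential(
                nn.Conv1d(in_ch, width, kernel_size=3, stride=s, padding=1),
                nn.ReLU(inplace=True)))
            in_ch = width
        self.pool = nn.AdaptiveAvgPool1d(1)                  # AVG pooling layer
        self.fc = nn.Linear(width, 64)

    def forward(self, x):                                    # x: (N, bands)
        h = x.unsqueeze(1)                                   # (N, 1, bands)
        skip = 0
        for conv in self.convs:
            h = conv(h)
            if conv[0].stride[0] == 2:
                # Average-pooled features of stride-2 layers, added to the
                # last-layer features (skip-style fusion described in the text).
                skip = skip + self.pool(h).squeeze(-1)
        feat = self.pool(h).squeeze(-1) + skip
        return self.fc(feat)                                 # final branch feature

class TwoStreamDetector(nn.Module):
    def __init__(self, bands: int):
        super().__init__()
        self.upper = BranchCNN(bands)                        # always fed the target prior d
        self.lower = BranchCNN(bands)                        # fed a target (t) or background (b) pixel
        self.head = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, prior, pixel):
        f1, f2 = self.upper(prior), self.lower(pixel)
        return self.head(torch.cat([f1, f2], dim=1))         # F = [F1, F2] -> FC -> Sigmoid
```

A forward pass on a batch of (target prior, test pixel) spectral pairs returns a score in (0, 1) that serves as the detection output for the tested pixel.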
There are two loss functions in the proposed network. The first is the binary cross-entropy (BCE) loss:
L_BCE = −(1/B)·Σ_{i=1}^{B} [ y_i·log(p_i) + (1 − y_i)·log(1 − p_i) ]
where B is the batch size, y_i is the label of the i-th training sample, and p_i is the corresponding output of the Sigmoid function. The other is the ICS loss, proposed to improve the separability of target and background: if the inputs of the two branches belong to the same class (label 1), the distance between the extracted features should be minimized; otherwise they belong to different classes and the distance should be maximized. Here F1 denotes the feature extracted by the upper branch and F2 the feature extracted by the lower branch.
The final loss function is the sum of the ICS loss and the BCE loss:
L = L_BCE + L_ICS
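A corresponding training-loss sketch is given below; the BCE term follows the formula above, while the exact form of the ICS loss is not reproduced in the extracted text, so a contrastive-style distance term with a margin `m` is used here purely as an illustrative stand-in.

```python
import torch
import torch.nn.functional as F

def bce_loss(pred: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy over the batch; `pred` is the Sigmoid output."""
    return F.binary_cross_entropy(pred.squeeze(-1), label.float())

def ics_loss(f1: torch.Tensor, f2: torch.Tensor, label: torch.Tensor,
             margin: float = 1.0) -> torch.Tensor:
    """Illustrative stand-in for the ICS loss: pull same-class branch features
    together (label 1) and push different-class features apart (label 0).
    The contrastive form and the margin value are assumptions, not taken from the patent."""
    dist = torch.norm(f1 - f2, dim=1)
    same = label.float() * dist.pow(2)
    diff = (1.0 - label.float()) * torch.clamp(margin - dist, min=0.0).pow(2)
    return (same + diff).mean()

def total_loss(pred, f1, f2, label):
    return bce_loss(pred, label) + ics_loss(f1, f2, label)   # L = L_BCE + L_ICS
```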
A cubic long-short-term memory network is a new structure developed based on LSTM, consisting of three branches, a spatial branch for capturing moving objects, a temporal branch for processing the motion, and an output branch for combining the first two branches to generate a predicted frame.
The spatial branches flow along the z-axis (spatial axis), where convolution is responsible for capturing and analyzing moving objects. The spatial state is generated by the branch carrying spatial layout information about the moving object.
The time branches flow along the x-axis (time axis), and convolution aims to obtain and process motion. A temporal state is generated by the branch, which contains motion information.
The output branch generates intermediate or final predicted frames along the y-axis (output axis) based on the predicted motion provided by the temporal branch and the motion object information provided by the spatial branch.
Processing temporal and spatial information separately may result in better predictions, and this separation reduces the prediction burden of the network. Stacking a plurality of CubicLSTM units along the spatial and output axes forms a two-dimensional network, which can be further extended along the time axis into a three-dimensional network (CubicRNN); with three layers stacked spatially, the information of the tracked target becomes more prominent and better spatial information is obtained.
The motion characteristics of high maneuvering targets in the air are divided into two types, namely short-time motion characteristics and long-time motion characteristics. Short-time motion characteristics include longitude and latitude, speed, acceleration and the like of a target, and the characteristics can change greatly in a short period. The long-term task characteristics comprise historical motion tracks of the target, and compared with the short-term motion characteristics, the long-term task characteristics are relatively stable in the motion process of the target. In order to make the target tracking more accurate, it is necessary to use both the short-time motion characteristics and the long-time task characteristics of the target. Therefore, a CubicLSTM network is adopted, and the short-time motion characteristic and the long-time task characteristic of the target are taken as the input of the network together, so that the state of the target at the next moment is predicted, and the confidence range of the target is calculated. The input and output variables of the CubicLSTM network employed are defined as follows:
TABLE 2 input output relationship
After the predicted target position at the next moment is obtained through the CubicLSTM network, the confidence interval of the target must be determined according to its motion characteristics, and the target is then detected within this range to obtain its real position, thereby realizing tracking. The confidence interval range is analyzed in detail below based on the characteristics of the target motion.
Let V be the target speed (m/s), θ the orbital tilt angle, r the spatial resolution (m) of the video hyperspectral camera imaging, f the video frame rate (fps), and W the video swath width (m).
From the above information, the motion speed of the target pixel on the image plane, v (pixel/s), can be calculated as:
v = V·cosθ / r
The target motion distance between adjacent frames, d (pixels), is:
d = v / f = V·cosθ / (r·f)
The time the target stays in the field of view, T (s), is:
T = W / (V·cosθ)
The number of frames during which the target stays in the field of view, N_f, is:
N_f = T·f
It can be calculated that the distance the moving target covers between two adjacent frames is smaller than 2.5 pixels, and a conservative estimate is smaller than 10 pixels. Thus, the target will typically appear within a 10-pixel neighborhood of its position in the previous frame, and the target is repeatedly detected in this neighborhood to capture its position in the next frame.
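The neighborhood search that turns the predicted position into a refined detection can be sketched as follows; the (H, W) score map is assumed to come from the fine-detection network, and the 10-pixel radius follows the analysis above.

```python
import numpy as np

def track_in_confidence_region(score_map: np.ndarray,
                               predicted_pos: tuple,
                               radius_px: int = 10):
    """Search the 10-pixel neighborhood of the predicted position for the
    highest detection score and return its coordinates.

    score_map     : (H, W) detection scores from the fine-detection network.
    predicted_pos : (row, col) position predicted for the next frame.
    """
    h, w = score_map.shape
    r0, c0 = int(round(predicted_pos[0])), int(round(predicted_pos[1]))
    r_min, r_max = max(0, r0 - radius_px), min(h, r0 + radius_px + 1)
    c_min, c_max = max(0, c0 - radius_px), min(w, c0 + radius_px + 1)
    window = score_map[r_min:r_max, c_min:c_max]
    dr, dc = np.unravel_index(np.argmax(window), window.shape)
    return r_min + dr, c_min + dc                            # refined target position
```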
Further, referring to fig. 5a, the present embodiment provides a target classification recognition flowchart of a support vector machine.
Using a hard-margin SVM on a linearly inseparable problem produces classification errors, so a new optimization problem can be constructed by introducing a loss function on top of margin maximization. Given input data x_i and learning targets y_i ∈ {−1, +1}, the optimization problem of the soft-margin SVM with the hinge loss function is expressed as:
min_{w,b} (1/2)·‖w‖² + C·Σ_{i=1}^{N} max(0, 1 − y_i(wᵀx_i + b))
where w is the weight vector, b is the bias term, C is the regularization parameter, N is the number of samples, and x_i and y_i are the feature vector and the corresponding label of the i-th sample, respectively.
The above formulation shows that the soft-margin SVM is an L2-regularized classifier in which max(0, 1 − y_i(wᵀx_i + b)) is the hinge loss function.
After training on historical spectra, the SVM classifier can classify the targets in the fine detection result and distinguish airplanes, ships and other categories.
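Training such a classifier on a historical spectral library can be sketched with scikit-learn's linear soft-margin SVM; the regularization constant C = 1.0, the class names and the synthetic data shapes are illustrative assumptions, not values from the patent.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Historical spectral library: each row is a spectrum, labels name the class.
# Shapes and class names are illustrative placeholders.
X_train = np.random.rand(200, 30)                    # 200 historical spectra, 30 bands
y_train = np.random.choice(["aircraft", "ship", "other"], size=200)

# Soft-margin SVM with hinge loss: min 1/2 ||w||^2 + C * sum(hinge); C is assumed.
clf = LinearSVC(C=1.0, loss="hinge", dual=True)
clf.fit(X_train, y_train)

# Classify spectra of the fine-detection result (placeholder test data).
X_detected = np.random.rand(5, 30)
print(clf.predict(X_detected))
```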
Further, referring to fig. 5b, the present embodiment provides a target classification recognition result diagram of the support vector machine.
Further, referring to fig. 6, the present embodiment provides a hyperspectral target detection and identification system based on semantic and spatial-spectral feature fusion, which mainly includes:
The coarse detection module is used for inputting the hyperspectral image to be detected, detecting an abnormal target in the hyperspectral image based on an abnormal detection RX operator, performing rapid coarse detection on the target through a CEM operator, and eliminating false alarms by comparing the inter-frame information.
The fine detection module is used for performing fine detection through a dual-stream convolutional neural network, acquiring target-related information, and determining a confidence interval range with a cubic long short-term memory network to track the detected target.
The classification module classifies the detected targets by using an SVM trained on historical spectra and judges which category the detected targets belong to.
Further, referring to FIG. 7, the present embodiment provides an electronic device mainly comprising at least one memory and at least one processor, wherein the at least one memory stores instructions that, when executed by the at least one processor, perform a hyperspectral target detection and recognition method based on semantic and spatial-spectral feature fusion according to an exemplary embodiment of the present disclosure.
By way of example, the electronic device may be a PC, a tablet device, a personal digital assistant, a smart phone, or another device capable of executing the above instructions. The electronic device need not be a single device; it may be any apparatus or aggregate of circuits capable of executing the above instructions (or instruction set), alone or in combination. The electronic device may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In an electronic device, a processor may include a Central Processing Unit (CPU), a Graphics Processor (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor may execute instructions or code stored in the memory, wherein the memory may also store data. The instructions and data may also be transmitted and received over a network via a network interface device, which may employ any known transmission protocol.
The memory may be integrated with the processor, for example, RAM or flash memory disposed within an integrated circuit microprocessor or the like. In addition, the memory may include a stand-alone device, such as an external disk drive, a storage array, or any other storage device usable by a database system. The memory and the processor may be operatively coupled or may communicate with each other, for example, through an I/O port, a network connection, etc., such that the processor is able to read files stored in the memory.
In addition, the electronic device may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device may be connected to each other via a bus and/or a network.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus and their respective modules as pure computer-readable program code, the method steps can be logically programmed so that the systems, apparatus and their respective modules are realized as logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, apparatus and their respective modules provided by the invention can be regarded as hardware components, and the modules included therein for realizing various programs can be regarded as structures within the hardware components; modules for realizing various functions can be regarded both as software programs implementing the method and as structures within the hardware components.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the application. The embodiments of the application and the features of the embodiments may be combined with each other arbitrarily without conflict.

Claims (10)

1. A hyperspectral target detection and identification method based on semantic and spatial-spectral feature fusion, characterized by comprising the following steps:
Step 1, extracting an abnormal target in a hyperspectral image based on an unsupervised anomaly detection algorithm, performing rapid coarse detection of the target through a constrained energy minimization operator, and eliminating false alarms based on inter-frame motion characteristic comparison;
Step 2, performing fine detection on the coarse detection result by adopting a dual-stream convolutional neural network, acquiring the spatial-spectral feature information of the target, and predicting the confidence interval range of the target with a cubic long short-term memory network to realize dynamic tracking;
Step 3, classifying the detected targets by using a support vector machine trained on historical spectra, and judging the category to which they belong.
2. The hyperspectral target detection and recognition method based on semantic and spatial-spectral feature fusion according to claim 1, wherein the unsupervised anomaly detection algorithm in step 1 adopts an RX detection operator, whose expression is:
δ_RX(x) = (x − μ)ᵀ Σ⁻¹ (x − μ)
where x is any pixel vector in the image, μ is the sample mean vector, and Σ is the sample covariance matrix of the image;
false alarms are eliminated by computing the energy distribution of the target along the directions of the small eigenvalues of the covariance matrix and applying a threshold on the target motion distance between adjacent frames.
3. The hyperspectral target detection and recognition method based on semantic and spatial-spectral feature fusion according to claim 1, wherein the dual-stream convolutional neural network in step 2 includes an upper branch and a lower branch, each branch has one input, 9 convolutional layers are used in each branch to extract the rich spectral information of the input pixel, and the convolution operation is implemented with one-dimensional convolutional layers;
convolutional layers with a kernel stride of 2 replace the pooling layers of the network so that spectral features are preserved to the greatest extent; all features extracted by the stride-2 convolutional layers are added, through different average pooling layers, to the features extracted by the last layer, and the final feature of each branch is then obtained through an AVG pooling layer and a fully connected layer;
in the dual-stream convolutional neural network, let d denote a target prior pixel, t a target pixel and b a background pixel; according to the training sample construction, the input of the upper branch is always d, and the label of a training sample is 1 when the input of the lower branch is t and 0 when it is b; the final features of the two branches, obtained through multiple convolution operations, pooling operations and one fully connected operation, are recorded as F1 and F2 and then combined as F = [F1, F2];
finally, the output of the dual-stream convolutional neural network is obtained through the last fully connected layer and a Sigmoid function.
4. The hyperspectral target detection and recognition method based on semantic and spatial-spectral feature fusion according to claim 1, wherein the cubic long short-term memory network in step 2 consists of a spatial branch, a temporal branch and an output branch; its input includes the longitude and latitude, speed, acceleration and historical track of the target, and its output is the predicted target position at the next moment; the target motion distance between adjacent frames is d = V·cosθ / (r·f), and the confidence interval is defined as the 10-pixel neighborhood of the position in the previous frame, where V is the target speed, θ is the orbital tilt angle, r is the spatial resolution of the video hyperspectral camera imaging, and f is the video frame rate.
5. The hyperspectral target detection and recognition method based on semantic and spatial-spectral feature fusion according to claim 1, wherein the support vector machine in step 3 adopts a soft-margin optimization model and trains a classifier on historical spectral data to distinguish aircraft, ships and other target categories, with the loss function:
min_{w,b} (1/2)·‖w‖² + C·Σ_{i=1}^{N} max(0, 1 − y_i(wᵀx_i + b))
where w is the weight vector, b is the bias term, C is the regularization parameter, and x_i and y_i are the feature vector and the corresponding label of the i-th sample, respectively.
6. A hyperspectral target detection and recognition system based on semantic and spatial-spectral feature fusion, characterized by comprising:
The coarse detection module is used for extracting an abnormal target in the hyperspectral image based on an unsupervised abnormal detection algorithm, carrying out rapid coarse detection on the target through a constraint energy minimization operator, and eliminating false alarms based on inter-frame motion characteristic comparison;
The fine detection module is used for carrying out fine detection on the coarse detection result by adopting a double-flow convolutional neural network, acquiring the space-spectrum characteristic information of the target, and combining with a cubic long-short-term memory network to predict the confidence interval range of the target so as to realize dynamic tracking;
and the classification module is used for classifying the detection targets by using a support vector machine based on historical spectrum training and judging the category to which the detection targets belong.
7. The hyperspectral target detection and recognition system based on semantic and spatial-spectral feature fusion according to claim 6, wherein the unsupervised anomaly detection algorithm in the coarse detection module adopts an RX detection operator, whose expression is:
δ_RX(x) = (x − μ)ᵀ Σ⁻¹ (x − μ)
where x is any pixel vector in the image, μ is the sample mean vector, and Σ is the sample covariance matrix of the image;
false alarms are eliminated by computing the energy distribution of the target along the directions of the small eigenvalues of the covariance matrix and applying a threshold on the target motion distance between adjacent frames.
8. The hyperspectral target detection and recognition system based on semantic and spatial-spectral feature fusion according to claim 6, wherein the dual-stream convolutional neural network in the fine detection module includes an upper branch and a lower branch, each branch has one input, 9 convolutional layers are used in each branch to extract the rich spectral information of the input pixel, and the convolution operation is implemented with one-dimensional convolutional layers;
convolutional layers with a kernel stride of 2 replace the pooling layers of the network so that spectral features are preserved to the greatest extent; all features extracted by the stride-2 convolutional layers are added, through different average pooling layers, to the features extracted by the last layer, and the final feature of each branch is then obtained through an AVG pooling layer and a fully connected layer;
in the dual-stream convolutional neural network, let d denote a target prior pixel, t a target pixel and b a background pixel; according to the training sample construction, the input of the upper branch is always d, and the label of a training sample is 1 when the input of the lower branch is t and 0 when it is b; the final features of the two branches, obtained through multiple convolution operations, pooling operations and one fully connected operation, are recorded as F1 and F2 and then combined as F = [F1, F2];
finally, the output of the dual-stream convolutional neural network is obtained through the last fully connected layer and a Sigmoid function.
9. The hyperspectral target detection and recognition system based on semantic and spatial-spectral feature fusion according to claim 6, wherein the cubic long short-term memory network of the fine detection module consists of a spatial branch, a temporal branch and an output branch; its input includes the longitude and latitude, speed, acceleration and historical track of the target, and its output is the predicted target position at the next moment; the target motion distance between adjacent frames is d = V·cosθ / (r·f), and the confidence interval is defined as the 10-pixel neighborhood of the position in the previous frame, where V is the target speed, θ is the orbital tilt angle, r is the spatial resolution of the video hyperspectral camera imaging, and f is the video frame rate.
10. The hyperspectral target detection and recognition system based on semantic and spatial-spectral feature fusion according to claim 6, wherein the support vector machine in the classification module adopts a soft-margin optimization model and trains a classifier on historical spectral data to distinguish aircraft, ships and other target categories, with the loss function:
min_{w,b} (1/2)·‖w‖² + C·Σ_{i=1}^{N} max(0, 1 − y_i(wᵀx_i + b))
where w is the weight vector, b is the bias term, C is the regularization parameter, and x_i and y_i are the feature vector and the corresponding label of the i-th sample, respectively.
CN202510472960.6A 2025-04-16 2025-04-16 Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion Active CN119992238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510472960.6A CN119992238B (en) 2025-04-16 2025-04-16 Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510472960.6A CN119992238B (en) 2025-04-16 2025-04-16 Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion

Publications (2)

Publication Number Publication Date
CN119992238A true CN119992238A (en) 2025-05-13
CN119992238B CN119992238B (en) 2025-07-01

Family

ID=95640274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510472960.6A Active CN119992238B (en) 2025-04-16 2025-04-16 Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion

Country Status (1)

Country Link
CN (1) CN119992238B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8897571B1 (en) * 2011-03-31 2014-11-25 Raytheon Company Detection of targets from hyperspectral imagery
CN110084159A (en) * 2019-04-15 2019-08-02 西安电子科技大学 Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint
CN111582298A (en) * 2020-03-18 2020-08-25 宁波送变电建设有限公司永耀科技分公司 Sensing abnormal data real-time detection method based on artificial intelligence
CN114937206A (en) * 2022-06-15 2022-08-23 西安电子科技大学 Target detection method in hyperspectral images based on transfer learning and semantic segmentation
CN118279747A (en) * 2024-04-28 2024-07-02 陕西聆图屿智能科技有限公司 Hyperspectral abnormal target detection method based on multi-scale analysis and variation self-encoder

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Cailing et al.: "A New Spatial-Spectral Joint Detection Algorithm for Hyperspectral Image Target Detection", Spectroscopy and Spectral Analysis, no. 04, 15 April 2016 (2016-04-15) *

Also Published As

Publication number Publication date
CN119992238B (en) 2025-07-01

Similar Documents

Publication Publication Date Title
Jiang et al. A semisupervised Siamese network for efficient change detection in heterogeneous remote sensing images
Zheng et al. Fast ship detection based on lightweight YOLOv5 network
Qu et al. Improved YOLOv5-based for small traffic sign detection under complex weather
Zhao et al. SAR ship detection based on end-to-end morphological feature pyramid network
KR102527642B1 (en) System and method for detecting small target based deep learning
CN113705375A (en) Visual perception device and method for ship navigation environment
Hao et al. Infrared small target detection with super-resolution and YOLO
Kong et al. Lightweight algorithm for multi-scale ship detection based on high-resolution SAR images
Li et al. Moderately dense adaptive feature fusion network for infrared small target detection
CN118711149A (en) Target tracking method, device and electronic equipment for small photoelectric turntable of unmanned vehicle
Raj J et al. Lightweight SAR ship detection and 16 class classification using novel deep learning algorithm with a hybrid preprocessing technique
Venkatesvara Rao et al. Real-time video object detection and classification using hybrid texture feature extraction
Idicula et al. A novel sarnede method for real-time ship detection from synthetic aperture radar image
CN113743185A (en) An optical remote sensing image aircraft detection method and device based on regional saliency guidance
Wang et al. A lightweight CNN for multi-source infrared ship detection from unmanned marine vehicles
Yang et al. Fatcnet: Feature adaptive transformer and CNN for infrared small target detection
CN116453014A (en) Multi-mode road scene target detection method based on images and events
Yang et al. Multicue contrastive self-supervised learning for change detection in remote sensing
Cai et al. HA-Net: a SAR image ship detector based on hybrid attention
Zhang et al. Rotationally unconstrained region proposals for ship target segmentation in optical remote sensing
Zhao et al. Forward vehicle detection based on deep convolution neural network
CN119992238B (en) Hyperspectral target detection and recognition method and system based on semantic and spatial-spectral feature fusion
Paramanandam et al. A review on deep learning techniques for saliency detection
Du et al. A multi-scale attention encoding and dynamic decoding network designed for short-term precipitation forecasting
Donadi et al. Improving generalization of synthetically trained sonar image descriptors for underwater place recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant