CN113057653B - Channel mixed convolution neural network-based motor electroencephalogram signal classification method - Google Patents
- Publication number
- CN113057653B (application CN202110294395.0A)
- Authority
- CN
- China
- Prior art keywords
- channel
- layer
- convolution
- mixed
- eeg signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A61B5/00: Measuring for diagnostic purposes; Identification of persons (A: Human necessities; A61: Medical or veterinary science; hygiene; A61B: Diagnosis; surgery; identification)
- A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203: Signal processing for noise prevention, reduction or removal
- A61B5/7225: Details of analogue processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
- A61B5/7235: Details of waveform analysis
- A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
The invention discloses a motor electroencephalogram signal classification method based on a channel-mixed convolutional neural network, which specifically comprises the following steps: a. channel-mixed convolution layer output: extracting the channel features of an EEG signal by decomposing the original single-channel input of the EEG signal into a multi-channel input with one input channel per EEG channel, so that each convolution kernel spans all channels and performs transverse convolution on the EEG signal within each channel to obtain the channel features; b. mixed-channel processing layer output: processing the channel features extracted by the channel-mixed convolution layer to obtain feature maps; c. aggregation: aggregating the feature maps obtained from the mixed-channel processing layer; d. classification: taking over the aggregated output and completing the final decoding classification. The invention decodes well and has the advantage of high decoding accuracy.
Description
Technical Field
The invention relates to the field of data analysis of electroencephalogram signals, in particular to a motor electroencephalogram signal classification method based on a channel mixed convolutional neural network.
Background
Electroencephalogram (EEG) signals are electrical signals generated by neurons in the cerebral cortex and can be divided into spontaneous and evoked electroencephalograms. Brain-computer interfaces (BCIs) can capture these electrical signals and correctly "translate" them into the instructions the user intends to execute, making it possible for the human brain to interact directly with external devices. BCI technology is widely applied, particularly in the medical field: patients suffering from neurological disorders or limb disabilities can carry out rehabilitation training or daily activities through devices equipped with brain-computer interfaces, and for the general population the electroencephalograms collected by BCIs can be analysed so that certain neurological diseases can be detected and intervention can be provided.
Motor commands are currently among the target instructions most widely designed for and decoded by BCIs, and their command information is contained in motor-imagery electroencephalogram signals. To extract motion-related information from EEG with a low signal-to-noise ratio and complete high-accuracy decoding tasks, a large number of feature-extraction and classification algorithms built with machine learning have been used to construct EEG decoding models. Convolutional neural networks (CNNs) have achieved great success in decoding EEG signals and integrate feature extraction and classification, but the enormous number of training parameters introduced by such models prevents further improvement of EEG decoding performance and challenges the interpretability of the decoding process.
Disclosure of Invention
The invention aims to provide a method for classifying motor electroencephalogram signals based on a channel-mixed convolutional neural network. The invention decodes well and has the advantage of high decoding accuracy.
In order to solve the above technical problem, the technical solution provided by the invention is as follows: a method for classifying motor electroencephalogram signals based on a channel-mixed convolutional neural network, which specifically comprises the following steps:
a. channel-mixed convolution layer output: extracting the channel features of the EEG signal by decomposing the original single-channel input of the EEG signal into a multi-channel input with one input channel per EEG channel, so that each convolution kernel spans all channels and performs transverse convolution on the EEG signal within each channel to obtain the channel features;
b. mixed-channel processing layer output: processing the channel features extracted by the channel-mixed convolution layer to obtain feature maps;
c. aggregation: aggregating the feature maps obtained from the mixed-channel processing layer;
d. classification: taking over the aggregated output and completing the final decoding classification.
In the above-mentioned method for classifying motor electroencephalogram signals based on a channel-mixed convolutional neural network, in step a the EEG signal is decomposed from the original single-channel input into a multi-channel input with one input channel per EEG channel: the input tensor of the EEG signal is converted from (channel = 1, height = C, width = T) into the input form (channel = C, height = 1, width = T).
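A minimal sketch (Python/PyTorch, not part of the patent text) of this re-shaping step; the channel count C, trial length T and batch size used here are illustrative assumptions.

```python
import torch

# Illustrative sizes (assumptions): C EEG channels, T time samples per trial
batch, C, T = 16, 22, 1125

# Conventional CNN input: one "image" channel, height = C, width = T
x = torch.randn(batch, 1, C, T)

# Channel-mixed input: every EEG channel becomes its own input channel,
# the height collapses to 1 and the width stays T
x_mixed = x.permute(0, 2, 1, 3)       # (batch, C, 1, T)
print(x_mixed.shape)                  # torch.Size([16, 22, 1, 1125])
```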
In the aforementioned method for classifying motor electroencephalogram signals based on a channel-mixed convolutional neural network, the channel-mixed convolution layer comprises 32 filters implemented as 2D convolutions, where the number of convolution kernels K1 is 32, the kernel size F1 is (1, 16), the stride S1 during the convolution operation is (1, 1), and the padding size P1 is (0, 0); the layer receives a tensor of the form (channel = C, height = 1, width = T), and the activation function is linear; the number of input channels of each convolution kernel is equal to the number of channels of the EEG signal.
In the aforementioned method for classifying motor electroencephalogram signals based on a channel-mixed convolutional neural network, in step b the mixed-channel processing layer adopts a depthwise separable convolution layer; the size F2 of each convolution kernel of the depthwise separable convolution layer is (1, 22), the depth multiplier D2 of the layer is 2, and a total of K2 = K1 × D2 convolution kernels participate in processing the channel features of the EEG signal, each of which is constrained by a kernel maximum-norm constraint.
In the aforementioned method for classifying a motor brain electrical signal based on a channel hybrid convolutional neural network, in step c, the aggregation sequentially includes the following network layer outputs:
BatchNorm layer: the BatchNorm layer normalizes its input;
activation function layer: the activation function layer adopts an ELU activation function and is used for improving the anti-noise capability and accelerating the training convergence;
average pooling layer: the kernel size F_P of the average pooling layer is (1, 180), used for down-sampling along the time dimension of the feature map; the stride S_P of the average pooling layer is (1, 30);
dropout layer: the probability p that a neuron node in the Dropout layer is closed is set to 0.5 for reducing the risk of overfitting.
In the above method for classifying motor electroencephalogram signals based on a channel-mixed convolutional neural network, the classification step is as follows: the aggregated output is spread to one dimension by a Flatten layer and fed into a fully connected layer of N_C = 4 neurons; the N_C outputs are then activated by the softmax function in an Activation layer, which constructs, for a given input X_i, the probability that it belongs to the class-k label ỹ_i, i.e. a mapping function f(X_i; θ^S) = p(ỹ_i = k | X_i), where θ is the parameter set to be trained in the whole network and S denotes the S-th subject;
thus, for a single X_i, N_C probability values are produced, and the class label with the highest probability value is taken as the final classification result for X_i, i.e. ŷ_i = argmax_k p(ỹ_i = k | X_i), while the corresponding sum of loss functions L(θ) = Σ_i l(ỹ_i, ŷ_i) is minimized, where l(·) denotes the per-trial classification loss; in this way the channel-mixed convolutional neural network is finally made to assign the highest probability value to the correct label.
Compared with the prior art, the invention has the following beneficial effects:
the channel mixed convolution neural network constructed by the invention comprises channel mixed convolution layer output, mixed channel processing layer output, aggregation and classification. When the channel mixed convolution layer outputs to extract the channel characteristics of the EEG signal, the EEG signal is decomposed from the original integral single-channel input into multi-channel input corresponding to each channel, each convolution kernel spans all the channels at the moment, and the EEG signal in each channel is subjected to transverse convolution to obtain the channel characteristics; the time domain layer and the space domain layer which are originally and independently arranged are combined into a whole, the representation form of the input CNN EEG original data is changed, and a receiving network matched with the CNN is constructed, so that the model has the capability of extracting relevant characteristics of the motion electroencephalogram in the time domain and the space domain under the original explicit stacking. The mixed channel processing layer can deeply process and integrate the mixed channel characteristics, then utilizes aggregation to aggregate the characteristic graphs obtained by processing the mixed channel processing layer, and finally is used for receiving the output of the aggregation layer through classification and finishing the final decoding classification. In summary, the present invention can perform excellent decoding and has an advantage of high decoding accuracy.
Drawings
FIG. 1 is a schematic diagram of a channel hybrid convolutional neural network constructed in an embodiment;
FIG. 2 is a confusion matrix obtained by a conventional machine learning algorithm FBCSP on a first data set;
FIG. 3 is a confusion matrix obtained by the conventional machine learning algorithm FBCSP on a second data set;
FIG. 4 is a confusion matrix obtained by a channel mixing convolution neural network constructed by the present invention on a first data set;
fig. 5 is a confusion matrix obtained by the channel hybrid convolutional neural network constructed by the present invention on the second data set.
Detailed Description
The present invention will be further described with reference to the following examples and drawings, but the present invention is not limited thereto.
Example: a channel-mixed convolutional neural network-based motor electroencephalogram signal classification method comprises constructing a channel-mixed convolutional neural network and classifying motor electroencephalogram signals with it. As shown in FIG. 1, the channel-mixed convolutional neural network comprises channel-mixed convolution layer output, mixed-channel processing layer output, aggregation and classification, and accordingly forms four layer blocks: a channel-mixed convolution layer, a mixed-channel processing layer, an aggregation layer and a classification layer.
The channel-mixed convolution layer is used for extracting the channel features of the EEG signal. The channel-mixed convolution layer comprises 32 filters implemented as 2D convolutions, where the number of convolution kernels K1 is 32, the kernel size F1 is (1, 16), the stride S1 during the convolution operation is (1, 1), and the padding size P1 is (0, 0); it receives a tensor of the form (C, 1, T), and the activation function is linear. The number of input channels of each convolution kernel is equal to the number of channels of the EEG signal. The mixed-channel convolutional neural network is constructed to handle the changed representation of the data fed into the CNN. The data fed to a CNN by the original algorithms is usually a tensor of shape (1, C, T) (the receiving form of a convolution kernel in a CNN being (channel, height, width)); the channel-mixed convolutional neural network instead accepts inputs of the form (C, 1, T), i.e. the input tensor of the EEG signal is converted from (channel = 1, height = C, width = T) into the input form (channel = C, height = 1, width = T). When the number of input channels of the convolution kernel equals the number of EEG channels, the EEG data is decomposed from the original single-channel input into a multi-channel input with one input channel per EEG channel, and each convolution kernel spans all channels and performs transverse convolution, i.e. time-domain convolution, on the EEG signal within each channel. Notably, since each convolution kernel has shape (1, t) with respect to the EEG data injected into each of its input channels, the filter bank implemented here by 2D convolution is equivalent to one implemented by 1D convolution. Compared with the original "time-domain layer", the "channel-mixed convolution layer" is not used only for extracting time-domain features: because the number of input channels of the convolution kernel equals the number of EEG channels, the multi-channel computation of the convolution kernel means that each kernel compresses the C time-domain feature maps of size (1, T'), generated by the time-domain convolution of the signals from the EEG channels on its input channels, into a single map. The essence of this compression is a linear mixing of the feature maps across the convolution kernel's channels, i.e. a linear superposition of the signals from the individual EEG channels. After the convolution, the EEG signal no longer retains its original channel form: the number of channels changes from C to 1, and the result can be regarded as a newly generated single-channel signal. This is exactly the same convolution effect as the "spatial layer"; however, the "channel-mixed convolution layer" does not suffer from the negative effects of prematurely losing the channel morphology, because the implicitly placed time-domain convolutions have already been performed beforehand, reducing the time-domain dimension of the EEG signal from T to T' within the individual channels.
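One way to realise the channel-mixed convolution layer described above is sketched below in PyTorch; this is an illustrative reading of the patent, and the channel count C = 22 and trial length T = 1125 are assumptions rather than values stated in the text.

```python
import torch
import torch.nn as nn

C, T = 22, 1125                          # assumed EEG channel count and trial length

# K1 = 32 filters, kernel F1 = (1, 16), stride S1 = (1, 1), padding P1 = (0, 0),
# linear activation (no nonlinearity applied after the convolution)
channel_mixed_conv = nn.Conv2d(in_channels=C, out_channels=32,
                               kernel_size=(1, 16), stride=(1, 1), padding=(0, 0))

x = torch.randn(4, C, 1, T)              # (batch, channel = C, height = 1, width = T)
feat = channel_mixed_conv(x)             # (4, 32, 1, T - 15): temporal maps that are
                                         # already linearly mixed across EEG channels,
                                         # because Conv2d sums over its input channels
print(feat.shape)
```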
At this point, channel-mixing convolution is again performed by a single convolution kernel to generate a new signal. This process in effect couples the time-domain features with the channel features; the generated signal is no longer meaningless, and the mixed signal contains rich time domain-channel features that establish a mapping relation between the two. Implicit stacking makes the whole network shallower than the original explicit stacking and reduces the large number of trainable parameters produced by separate time-domain and spatial-domain layers.
The mixed-channel processing layer is used for processing the channel features extracted by the channel-mixed convolution layer to obtain feature maps. For the time domain-channel features (channel features) extracted by the channel-mixed convolution layer, a certain mapping relation is assumed to hold between the two, so the structure introduced here should be able to capture and supplement that relation. Accordingly, the mixed-channel processing layer adopts a depthwise separable convolution layer: the size F2 of each convolution kernel is (1, 22), the depth multiplier D2 of the layer is 2, and a total of K2 = K1 × D2 convolution kernels participate in processing the channel features of the EEG signal, each of them constrained by a kernel maximum-norm constraint. These convolution kernels can process a single time domain-channel feature map of the previous layer both inside and out: they mine and supplement the mapping relation established between the time domain and the channels, while also spanning all feature maps to optimize and integrate the mapping relations between them.
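A minimal sketch of the mixed-channel processing layer as a depthwise convolution with depth multiplier 2, plus a kernel max-norm constraint; the max-norm limit of 1.0 is an assumption, since the patent only states that the constraint is applied.

```python
import torch
import torch.nn as nn

K1, D2 = 32, 2                            # feature maps from the previous layer, depth multiplier
depthwise = nn.Conv2d(in_channels=K1,
                      out_channels=K1 * D2,   # K2 = K1 x D2 = 64 kernels in total
                      kernel_size=(1, 22),    # F2 = (1, 22)
                      groups=K1)              # depthwise: each input map gets D2 own kernels

def apply_max_norm(conv: nn.Conv2d, max_norm: float = 1.0) -> None:
    """Kernel max-norm constraint: rescale any kernel whose norm exceeds max_norm."""
    with torch.no_grad():
        w = conv.weight                                    # (out, in/groups, 1, 22)
        norms = w.view(w.size(0), -1).norm(dim=1)          # one norm per kernel
        scale = (norms.clamp(max=max_norm) / (norms + 1e-8)).view(-1, 1, 1, 1)
        conv.weight.mul_(scale)

apply_max_norm(depthwise)                 # typically called after every optimizer step
```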
The aggregation is used for aggregating the feature maps obtained from the mixed-channel processing layer; the aggregation sequentially comprises the following network layer outputs: a BatchNorm layer, an activation function layer, an average pooling layer and a Dropout layer;
the BatchNorm layer is used for optimizing input;
the activation function layer adopts an ELU activation function:
f(x) = x for x ≥ 0; f(x) = e^x − 1 for x < 0, which is used to improve noise immunity and speed up training convergence;
the size of the average pooling layer F P Is (1, 180) for down-sampling along the time dimension of the feature map, the step size S of the average pooling layer P To (1, 30), such a parameter setting means that the pooling layer spans an EEG signal of 120ms duration as it moves laterally along the EEG signal profile, and triggers an average activation every time a signal of about 720ms duration passes, which can be used to further reduce the parameters that need to be fitted when training the model;
the probability p that a neuron node in the Dropout layer is closed is set to 0.5 for reducing the risk of overfitting.
The classification layer is used for taking over the output of the aggregation layer and completing the final decoding classification, and comprises a Flatten layer, a fully connected layer and an Activation layer;
in the classification step, the Fallten layer is used for spreading the output of the aggregation layer to one dimension to complete the transition from the convolution layer to the full-connection layer, and then N is put into the Fallten layer C N obtained from a fully connected layer of =4 neurons in the fully connected layer C Each output is activated by the Activation function softmax in the Activation layer, constructing the input X given i Tags belonging to class kOf the magnitude of the probability, i.e. a mapping functionWherein theta is a parameter set to be trained in the whole network; s is the S th subject;
thus for a single X i Will give N C Individual probability value, and then the class label with the highest corresponding probability value is taken as X i The final classification result, i.e.While minimizing the corresponding sum of loss functionsWherein
And finally, the channel mixed convolution neural network is enabled to assign the highest probability value to the correct label.
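A compact sketch of the classification block and the training objective as read from the description; the flattened feature size and batch size are assumptions, and the loss shown is the standard softmax cross-entropy, consistent with making the network assign the highest probability value to the correct label.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_C = 4                                   # four motor classes
classifier = nn.Sequential(
    nn.Flatten(),                         # spread the aggregated output to one dimension
    nn.LazyLinear(N_C),                   # fully connected layer with N_C = 4 neurons
)

feat = torch.randn(8, 64, 1, 31)          # assumed aggregated feature maps
logits = classifier(feat)                 # (8, 4)
probs = F.softmax(logits, dim=1)          # p(y = k | X_i; theta)
y_hat = probs.argmax(dim=1)               # class label with the highest probability

labels = torch.randint(0, N_C, (8,))      # dummy ground-truth labels
loss = F.cross_entropy(logits, labels, reduction="sum")   # summed per-trial loss to minimise
```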
In order to further clearly describe the channel-mixed convolutional neural network of this embodiment, Table 1 describes each functional block of the network, including the corresponding name, module and component parameters of each network layer. The channel-mixed convolutional neural network constructed in this embodiment introduces 32,064 and 17,472 trainable parameters in total for the motion-execution and motor-imagery data sets, respectively.
TABLE 1
In order to verify the effect of the invention, on two public brain-computer interface data sets, BCI-IV 2a and HGD, the invention is compared on the one hand with the classical traditional machine learning algorithm FBCSP, and on the other hand with existing mainstream CNN model algorithms, including Shallow_FbcspNet, EEGNet, C2CM, WaSF-ConvNet, Sinc-ShallowNet and CP-MixedNet. Decoding accuracy and the confusion matrix are used together to measure the overall classification performance of the models.
First, the conventional machine learning algorithm FBCSP is compared with the proposed channel-mixed convolutional neural network on the ME-EEG and MI-EEG data sets. FIGS. 2-5 show the confusion matrices obtained by the two methods on the different data sets, where the classification accuracy of each class in each matrix is computed across subjects. After completing feature engineering, the conventional FBCSP algorithm feeds its features into an rLDA classifier; on this basis, its classification accuracy for the left-hand and right-hand classes on the MI-EEG task is close to that of Channels-Mixing-ConvNet, but its recognition of the foot and tongue classes cannot maintain that level, whereas Channels-Mixing-ConvNet still achieves high classification accuracy for foot and tongue (see FIGS. 2 and 4). On ME-EEG, the performance of the conventional method is lower than that of the method proposed by the invention on all four classification tasks (see FIGS. 3 and 5).
Secondly, the overall decoding accuracy of the present invention and the other 6 most advanced CNN models on ME-EEG and MI-EEG datasets, respectively, is shown in table 2.
TABLE 2
Table 2 lists the overall classification performance across subjects (mean ± variance of decoding accuracy) on ME-EEG and MI-EEG for the seven models and algorithms FBCSP, Shallow_FbcspNet, EEGNet, C2CM, WaSF-ConvNet, Sinc-ShallowNet and CP-MixedNet. As can be seen from Table 2, the decoding accuracies of Shallow_FbcspNet, C2CM, Sinc-ShallowNet and the proposed Channels-Mixing-ConvNet all reach the 70%+ level on the MI-EEG data set, specifically 72.0% ± 13.9%, 74.4% ± 14.5%, 72.8% ± 12.9% and 74.9% ± 14.9%, respectively. The Channels-Mixing-ConvNet (channel-mixed convolutional neural network) proposed by the invention achieves the highest decoding accuracy of 74.9% ± 14.9%. The table also shows that the decoding performance of C2CM is very close to the proposed network model, with an accuracy difference of only 0.5%. The decoding accuracy of CP-MixedNet also reaches 73.2%, close to the proposed Channels-Mixing-ConvNet. The variant of CP-MixedNet trained with data augmentation reaches a decoding accuracy of 74.6%, surpassing most comparison models; although data augmentation improves its accuracy by 1.4% over the original, it is still slightly lower than the proposed Channels-Mixing-ConvNet. The applicant also compared the performance of these models on the ME-EEG data set: Shallow_FbcspNet, Sinc-ShallowNet, CP-MixedNet and its data-augmented variant still maintain a high performance level across data sets, with decoding accuracies of 93.9% ± 9.3%, 91.2% ± 9.1%, 93.0% and 93.7%, respectively. The channel-mixed convolutional neural network proposed by the invention is 1.3% higher than the best of these, the data-augmented CP-MixedNet, reaching 95.0% ± 7.3%. It can thus be noted that even with data augmentation, CP-MixedNet improves its decoding accuracy by only 0.7%, still well below the invention. Moreover, compared with the traditional machine learning algorithm FBCSP, the accuracy of the proposed model is improved by nearly 7%, demonstrating the advantage of high decoding accuracy.
Claims (3)
1. A method for classifying motor electroencephalogram signals based on a channel-mixed convolutional neural network, characterized in that the method specifically comprises the following steps:
a. channel-mixed convolution layer output: extracting the channel features of the EEG signal by decomposing the original single-channel input of the EEG signal into a multi-channel input with one input channel per EEG channel, enabling each convolution kernel to span all channels, and performing transverse convolution on the EEG signal within each channel to obtain the channel features;
b. mixed-channel processing layer output: processing the channel features extracted by the channel-mixed convolution layer to obtain feature maps;
c. aggregation: aggregating the feature maps obtained from the mixed-channel processing layer;
d. classification: taking over the aggregated output and completing the final decoding classification;
in step a, the original single-channel input of the EEG signal is decomposed into a multi-channel input with one input channel per EEG channel, the input tensor of the EEG signal being converted from (channel = 1, height = C, width = T) into the input form (channel = C, height = 1, width = T);
the channel-mixed convolution layer comprises 32 filters implemented as 2D convolutions, wherein the number of convolution kernels K1 is 32, the kernel size F1 is (1, 16), the stride S1 during the convolution operation is (1, 1), and the padding size P1 is (0, 0); the layer receives a tensor of the form (channel = C, height = 1, width = T), and the activation function is linear; the number of input channels of each convolution kernel is equal to the number of channels of the EEG signal; after the convolution is finished, the EEG signal no longer retains its original channel form, and the number of channels changes from C to 1;
in step b, the mixed-channel processing layer adopts a depthwise separable convolution layer; the size F2 of each convolution kernel of the depthwise separable convolution layer is (1, 22), the depth multiplier D2 of the depthwise separable convolution layer is 2, and a total of K2 = K1 × D2 convolution kernels participate in processing the channel features of the EEG signal, each of which is constrained by a kernel maximum-norm constraint.
2. The channel mixed convolutional neural network-based motor brain electrical signal classification method of claim 1, characterized in that: in step c, the aggregation sequentially comprises the following network layer outputs:
BatchNorm layer: the BatchNorm layer normalizes its input;
activation function layer: the activation function layer adopts an ELU activation function and is used for improving the anti-noise capability and accelerating the training convergence;
average pooling layer: the kernel size F_P of the average pooling layer is (1, 180), used for down-sampling along the time dimension of the feature map; the stride S_P of the average pooling layer is (1, 30);
dropout layer: the probability p that a neuron node in the Dropout layer is closed is set to 0.5 for reducing the risk of overfitting.
3. The channel-mixed convolutional neural network-based motor electroencephalogram signal classification method of claim 1, characterized in that the classification step is specifically as follows: the aggregated output is spread to one dimension using a Flatten layer and then fed into a fully connected layer of N_C = 4 neurons; the N_C outputs obtained from the fully connected layer are activated by the softmax function in an Activation layer, constructing, for a given input X_i, the probability that it belongs to the class-k label ỹ_i, i.e. a mapping function f(X_i; θ^S) = p(ỹ_i = k | X_i), where θ is the parameter set to be trained in the whole network and S denotes the S-th subject;
thus, for a single X_i, N_C probability values are produced, and the class label with the highest probability value is taken as the final classification result for X_i, i.e. ŷ_i = argmax_k p(ỹ_i = k | X_i), while the corresponding sum of loss functions L(θ) = Σ_i l(ỹ_i, ŷ_i) is minimized, where l(·) denotes the per-trial classification loss; finally, the channel-mixed convolutional neural network is thereby made to assign the highest probability value to the correct label.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110294395.0A CN113057653B (en) | 2021-03-19 | 2021-03-19 | Channel mixed convolution neural network-based motor electroencephalogram signal classification method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113057653A CN113057653A (en) | 2021-07-02 |
| CN113057653B true CN113057653B (en) | 2022-11-04 |
Family
ID=76562239
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110294395.0A Active CN113057653B (en) | 2021-03-19 | 2021-03-19 | Channel mixed convolution neural network-based motor electroencephalogram signal classification method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113057653B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110069958A (en) * | 2018-01-22 | 2019-07-30 | Beihang University | A kind of EEG signals method for quickly identifying of dense depth convolutional neural networks |
| CN110929581A (en) * | 2019-10-25 | 2020-03-27 | Chongqing University of Posts and Telecommunications | An EEG Signal Recognition Method Based on Spatio-temporal Feature Weighted Convolutional Neural Network |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4371699B2 (en) * | 2003-05-14 | 2009-11-25 | Japan Science and Technology Agency | Vital sign measurement system, vital sign measurement method, vital sign measurement program, and recording medium |
| TW201238562A (en) * | 2011-03-25 | 2012-10-01 | Univ Southern Taiwan | Brain wave control system and method |
| CN106963374A (en) * | 2017-04-14 | 2017-07-21 | Shandong University | A kind of brain electro-detection method and device based on S-transformation and deep belief network |
| CA3101371C (en) * | 2018-05-24 | 2024-01-02 | Health Tech Connex Inc. | Quantifying motor function using eeg signals |
| KR102318775B1 (en) | 2018-08-13 | 2021-10-28 | Korea Advanced Institute of Science and Technology (KAIST) | Method for Adaptive EEG signal processing using reinforcement learning and System Using the same |
| CN109215380B (en) * | 2018-10-17 | 2020-07-31 | Zhejiang University of Science and Technology | Effective parking space prediction method |
| CN109993808B (en) * | 2019-03-15 | 2020-11-10 | Zhejiang University | Dynamic double-tracing PET reconstruction method based on DSN |
| CN110867181B (en) * | 2019-09-29 | 2022-05-06 | Beijing University of Technology | Multi-target speech enhancement method based on joint estimation of SCNN and TCNN |
| CN112364989A (en) * | 2020-10-10 | 2021-02-12 | Tianjin University | Fast Fourier transform-based convolutional neural network acceleration design method |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| CN111012336B (en) | Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion | |
| Zhao et al. | Deep CNN model based on serial-parallel structure optimization for four-class motor imagery EEG classification | |
| CN113269048A (en) | Motor imagery electroencephalogram signal classification method based on deep learning and mixed noise data enhancement | |
| CN111931656B (en) | User independent motor imagery classification model training method based on transfer learning | |
| CN113052099B (en) | A SSVEP Classification Method Based on Convolutional Neural Networks | |
| CN115238796A (en) | Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM | |
| CN113180692A (en) | Electroencephalogram signal classification and identification method based on feature fusion and attention mechanism | |
| CN109144277B (en) | Method for constructing intelligent vehicle controlled by brain based on machine learning | |
| CN115251909B (en) | Method and device for evaluating hearing by electroencephalogram signals based on space-time convolutional neural network | |
| CN110175510A (en) | Multi-mode Mental imagery recognition methods based on brain function network characterization | |
| CN117113015B (en) | A method and device for recognizing electroencephalogram signals based on spatiotemporal deep learning | |
| CN113780134A (en) | An EEG decoding method for motor imagery based on ShuffleNetV2 network | |
| CN108364062B (en) | Construction method of deep learning model based on MEMD and its application in motor imagery | |
| CN119475109B (en) | A motor imagery EEG signal classification and recognition method and system based on data enhancement and multimodal feature fusion | |
| CN108470182B (en) | Brain-computer interface method for enhancing and identifying asymmetric electroencephalogram characteristics | |
| Li et al. | EEG signal processing based on genetic algorithm for extracting mixed features | |
| Luo et al. | MI-MBFT: Superior Motor Imagery Decoding of Raw EEG Data Based on a Multi-Branch and Fusion Transformer Framework | |
| CN119646623A (en) | A cross-subject EEG signal emotion recognition model training method based on spatial interaction features, a model training system and an emotion recognition method | |
| CN113057653B (en) | Channel mixed convolution neural network-based motor electroencephalogram signal classification method | |
| CN114847973A (en) | A Few-Sample Recognition Method Based on Brain-Computer Interface | |
| CN118839240A (en) | Electroencephalogram signal classification method based on three-dimensional attention depth separable convolution network | |
| CN118153626A (en) | Lightweight network model and construction method for surface electromyography signal gesture recognition | |
| CN116361700B (en) | Collaborative motor imagery decoding method and brain-computer system based on hypergraph representation | |
| CN112259228A (en) | Depression screening method by dynamic attention network non-negative matrix factorization | |
| CN111950366A (en) | A data-augmented convolutional neural network for motor imagery EEG classification |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |