Disclosure of Invention
In view of the above defects, the invention provides a micro-grid dynamic state estimation method and system considering data loss, aiming to solve the problems that the existing EKF-based grid state estimation method is highly dependent on an accurate mathematical model and is difficult to adapt to conditions of incomplete data, which reduces the accuracy and reliability of state estimation.
To achieve the purpose, the invention adopts the following technical scheme:
A micro-grid dynamic state estimation method considering data loss comprises the following steps:
Step S1, a micro-grid model and a Gilbert-Elliott model are established and combined, wherein the micro-grid model is used for simulating dynamic behaviors of a micro-grid, and the Gilbert-Elliott model is used for describing data loss behaviors in a communication network;
Step S2, acquiring historical measurement data of the micro-grid from a sensor and a data center of the micro-grid, and simulating and supplementing the historical measurement data of the micro-grid by utilizing a Gilbert-Elliott model to obtain a data set containing the historical measurement data and packet loss data;
Step S3, preprocessing data in a dataset containing historical measurement data and packet loss data to obtain a preprocessed dataset;
Step S4, constructing an LSTM-FNN model;
Step S5, training the LSTM-FNN model by using the preprocessed dataset to obtain a trained LSTM-FNN model;
and Step S6, inputting the data acquired in real time by the sensors of the micro-grid into the trained LSTM-FNN model to estimate the missing state caused by data loss, and outputting an estimated value of the missing state of the micro-grid.
Preferably, in step S1, the Gilbert-Elliott model describes transition probabilities between system states through a Markov chain state transition matrix to simulate the data loss process, wherein the mathematical expression of the Markov chain state transition matrix is as follows:

A = [[Pr(γ_{k+1}=0|γ_k=0), Pr(γ_{k+1}=1|γ_k=0)], [Pr(γ_{k+1}=0|γ_k=1), Pr(γ_{k+1}=1|γ_k=1)]] = [[1-p, p], [q, 1-q]]

where A denotes the Markov chain state transition matrix, Pr(γ_{k+1}=0|γ_k=0) = 1-p denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=0, p = Pr(γ_{k+1}=1|γ_k=0) denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=1, q = Pr(γ_{k+1}=0|γ_k=1) denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=0, and Pr(γ_{k+1}=1|γ_k=1) = 1-q denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=1.
Preferably, step S3 specifically comprises the following sub-steps: step S31, deleting repeated values and abnormal values in the dataset containing the historical measurement data and the packet loss data, and filling missing values in that dataset, to obtain a deleted and filled dataset; and step S32, normalizing the data in the deleted and filled dataset to obtain the preprocessed dataset.
Preferably, in step S5, the following substeps are specifically included:
Step S51, training LSTM1, FNN1 and FNN2 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the first-stage training, wherein LSTM1 denotes long short-term memory network 1, used for extracting temporal features under the condition of no data loss; FNN1 denotes feedforward neural network 1, used for deeper-level processing of the temporal features extracted by LSTM1; and FNN2 denotes feedforward neural network 2, used for extracting features from the output of FNN1;
Step S52, keeping the parameters of LSTM1, FNN1 and FNN2 obtained in the first-stage training unchanged, and training LSTM2, FNN3 and FNN4 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the second-stage training, wherein LSTM2 denotes long short-term memory network 2, used for extracting temporal features under the condition of data loss; FNN3 denotes feedforward neural network 3, used for deeper-level processing of the temporal features extracted by LSTM2; and FNN4 denotes feedforward neural network 4, used for extracting features from the output of FNN3;
and Step S53, keeping the parameters of LSTM2, FNN3 and FNN4 obtained in the second-stage training unchanged, building a random forest regressor at the output end of FNN4, training the random forest regressor with the output of FNN4, and optimizing the parameters of the random forest regressor by using RandomizedSearchCV and GridSearchCV, thereby completing the third-stage training.
Preferably, the method further comprises the following steps:
The optimal parameter set θ_M of the trained LSTM-FNN model is found by adjusting the parameter θ of the trained LSTM-FNN model so that the error between the predicted value and the actual value of the trained LSTM-FNN model is minimized, wherein the mathematical formula for the optimal parameter set θ_M of the trained LSTM-FNN model is as follows:

θ_M = arg min_θ ||f_nn(y_u, y_v; θ) - y_m||²

wherein θ_M represents the optimal parameter set of the trained LSTM-FNN model, f_nn(y_u, y_v) represents the output value of the trained LSTM-FNN model, i.e., the predicted value, and y_m represents the target value, i.e., the actual value that the trained LSTM-FNN model is expected to predict.
Another aspect of the present application provides a micro-grid dynamic state estimation system considering data loss, the system comprising:
The first building module is used for building a micro-grid model, wherein the micro-grid model is used for simulating the dynamic behavior of the micro-grid;
the second building module is used for building a Gilbert-Elliott model, wherein the Gilbert-Elliott model is used for describing data loss behaviors in a communication network;
the combining module is used for combining the micro-grid model and the Gilbert-Elliott model;
the acquisition module is used for acquiring historical measurement data of the micro-grid from the sensors and the data center of the micro-grid;
the simulation and supplementation module is used for simulating and supplementing historical measurement data of the micro-grid by using a Gilbert-Elliott model so as to obtain a data set containing the historical measurement data and packet loss data;
The preprocessing module is used for preprocessing data in a data set containing historical measurement data and packet loss data to obtain a preprocessed data set;
The building module is used for building an LSTM-FNN model;
the model training module is used for training the LSTM-FNN model by using the preprocessed data set to obtain a trained LSTM-FNN model;
The input module is used for inputting data acquired by the sensors of the micro-grid in real time into the trained LSTM-FNN model to estimate the missing state caused by data loss;
And the output module is used for outputting the estimated value of the missing state of the micro-grid.
Preferably, in the second building module, the Gilbert-Elliott model describes transition probabilities between system states through a Markov chain state transition matrix to simulate the data loss process, wherein the mathematical expression of the Markov chain state transition matrix is as follows:

A = [[Pr(γ_{k+1}=0|γ_k=0), Pr(γ_{k+1}=1|γ_k=0)], [Pr(γ_{k+1}=0|γ_k=1), Pr(γ_{k+1}=1|γ_k=1)]] = [[1-p, p], [q, 1-q]]

where A denotes the Markov chain state transition matrix, Pr(γ_{k+1}=0|γ_k=0) = 1-p denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=0, p = Pr(γ_{k+1}=1|γ_k=0) denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=1, q = Pr(γ_{k+1}=0|γ_k=1) denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=0, and Pr(γ_{k+1}=1|γ_k=1) = 1-q denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=1.
Preferably, the preprocessing module comprises a data deleting sub-module for deleting repeated values and abnormal values in the dataset containing the historical measurement data and the packet loss data, a data filling sub-module for filling missing values in that dataset, and a data normalizing sub-module for normalizing the data in the deleted and filled dataset to obtain the preprocessed dataset.
Preferably, the model training module comprises:
The first training sub-module is used for training LSTM1, FNN1 and FNN2 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the first-stage training, wherein LSTM1 denotes long short-term memory network 1, used for extracting temporal features under the condition of no data loss; FNN1 denotes feedforward neural network 1, used for deeper-level processing of the temporal features extracted by LSTM1; and FNN2 denotes feedforward neural network 2, used for extracting features from the output of FNN1;
The first parameter setting sub-module is used for keeping the parameters of the LSTM1, the FNN1 and the FNN2 which are trained in the first stage unchanged;
The second training sub-module is used for training LSTM2, FNN3 and FNN4 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the second-stage training, wherein LSTM2 denotes long short-term memory network 2, used for extracting temporal features under the condition of data loss; FNN3 denotes feedforward neural network 3, used for deeper-level processing of the temporal features extracted by LSTM2; and FNN4 denotes feedforward neural network 4, used for extracting features from the output of FNN3;
The second parameter setting submodule is used for keeping the parameters of LSTM2, FNN3 and FNN4 which are subjected to the second-stage training unchanged;
the construction submodule is used for constructing a random forest regressor at the output end of the FNN 4;
the third training submodule is used for training the random forest regressor through the output of the FNN 4;
The parameter optimization sub-module is used for optimizing the parameters of the random forest regressor by using RandomizedSearchCV and GridSearchCV.
Preferably, the system further comprises a model parameter optimization module configured to find the optimal parameter set θ_M of the trained LSTM-FNN model by adjusting the parameter θ of the trained LSTM-FNN model so that the error between the predicted value and the actual value of the trained LSTM-FNN model is minimized, wherein the mathematical formula for the optimal parameter set θ_M of the trained LSTM-FNN model is as follows:

θ_M = arg min_θ ||f_nn(y_u, y_v; θ) - y_m||²

wherein θ_M represents the optimal parameter set of the trained LSTM-FNN model, f_nn(y_u, y_v) represents the output value of the trained LSTM-FNN model, i.e., the predicted value, and y_m represents the target value, i.e., the actual value that the trained LSTM-FNN model is expected to predict.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
According to the scheme, a Gilbert-Elliott model and an LSTM-FNN model are established, the Gilbert-Elliott model is used for data simulation and supplementation to obtain a dataset containing historical measurement data and packet loss data, the dataset is used to train the LSTM-FNN model, and the trained LSTM-FNN model is used to estimate the missing state caused by data loss. Compared with the existing EKF-based grid state estimation method, the LSTM-FNN model in this scheme performs state estimation by learning patterns in the data, which reduces dependence on an accurate mathematical model and improves the adaptability and flexibility of the model. Meanwhile, the LSTM-FNN model can process a dataset containing packet loss data and can provide accurate state estimation under conditions of incomplete data, thereby improving the accuracy and reliability of the state estimation.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
A micro-grid dynamic state estimation method considering data loss comprises the following steps:
Step S1, a micro-grid model and a Gilbert-Elliott model are established and combined, wherein the micro-grid model is used for simulating dynamic behaviors of a micro-grid, and the Gilbert-Elliott model is used for describing data loss behaviors in a communication network;
Step S2, acquiring historical measurement data of the micro-grid from a sensor and a data center of the micro-grid, and simulating and supplementing the historical measurement data of the micro-grid by utilizing a Gilbert-Elliott model to obtain a data set containing the historical measurement data and packet loss data;
Step S3, preprocessing data in a dataset containing historical measurement data and packet loss data to obtain a preprocessed dataset;
Step S4, constructing an LSTM-FNN model;
Step S5, training the LSTM-FNN model by using the preprocessed dataset to obtain a trained LSTM-FNN model;
and Step S6, inputting the data acquired in real time by the sensors of the micro-grid into the trained LSTM-FNN model to estimate the missing state caused by data loss, and outputting an estimated value of the missing state of the micro-grid.
In the method for estimating the dynamic state of the micro-grid in consideration of data loss, as shown in fig. 1, the first step is to establish a micro-grid model and a Gilbert-Elliott model and combine the two models, wherein the micro-grid model is used for simulating the dynamic behavior of the micro-grid, and the Gilbert-Elliott model is used for describing the data loss behavior in a communication network. The Gilbert-Elliott model is a Markov chain model for describing data loss behavior in a communication network: it employs a Markov chain to describe data loss in communication and, by modeling data loss patterns, helps improve the accuracy and reliability of state estimation when communication is unstable. In addition, combining the micro-grid model and the Gilbert-Elliott model makes the state estimation more adaptable to complex environments. The second step is to acquire historical measurement data of the micro-grid from the sensors and the data center of the micro-grid, and to simulate and supplement the historical measurement data of the micro-grid by using the Gilbert-Elliott model to obtain a dataset containing the historical measurement data and packet loss data. The third step is to preprocess the data in the dataset containing the historical measurement data and the packet loss data to obtain a preprocessed dataset; in this embodiment, preprocessing the data in this dataset improves the quality of the dataset.
The fourth step is to construct an LSTM-FNN model. In this embodiment, the LSTM-FNN model combines a long short-term memory network (LSTM) and a feedforward neural network (FNN); this combination makes full use of the advantages of the LSTM in processing time-series data and the capability of the FNN in nonlinear mapping, so as to construct a neural network model suitable for dynamic state estimation of a micro-grid. The fifth step is to train the LSTM-FNN model by using the preprocessed dataset to obtain a trained LSTM-FNN model. In this embodiment, the LSTM-FNN model is trained with the preprocessed dataset so that it can handle the problem of data loss, thereby improving its prediction accuracy and generalization capability. Further, during training, the LSTM-FNN model receives the historical data sequence, including the observed states and the missing states, and takes into account the data loss pattern described by the Gilbert-Elliott model to learn how to minimize the prediction error between the estimated state and the real state, so that the LSTM-FNN model can accurately estimate the system state even in the face of substantial data loss. The sixth step is to input the data acquired in real time by the sensors of the micro-grid into the trained LSTM-FNN model to estimate the missing state caused by data loss and to output an estimated value of the missing state of the micro-grid.
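To make the LSTM-FNN combination concrete, the forward pass of one LSTM cell feeding two feedforward layers can be sketched in NumPy. This is a minimal illustrative sketch only: the layer sizes, the random initialization, and the class and function names are assumptions for illustration, not the architecture or dimensions specified by the invention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMFNN:
    """Minimal LSTM cell followed by two feedforward layers (forward pass only)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per LSTM gate: input (i), forget (f), output (o),
        # candidate cell state (c); each acts on [x; h].
        self.W = {g: rng.normal(0, 0.1, (n_hidden, n_in + n_hidden))
                  for g in "ifoc"}
        self.b = {g: np.zeros(n_hidden) for g in "ifoc"}
        # Two FNN layers mapping the last hidden state to the state estimate.
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_out, n_hidden))

    def forward(self, xs):
        h = c = np.zeros(self.b["i"].shape[0])
        for x in xs:                      # xs: sequence of input vectors
            z = np.concatenate([x, h])
            i = sigmoid(self.W["i"] @ z + self.b["i"])   # input gate
            f = sigmoid(self.W["f"] @ z + self.b["f"])   # forget gate
            o = sigmoid(self.W["o"] @ z + self.b["o"])   # output gate
            g = np.tanh(self.W["c"] @ z + self.b["c"])   # candidate state
            c = f * c + i * g             # update cell memory
            h = o * np.tanh(c)            # update hidden state
        h = np.tanh(self.W1 @ h)          # FNN layer: deeper feature processing
        return self.W2 @ h                # FNN layer: final state estimate
```

With trained weights in place of the random initialization, `forward` would map a window of recent measurements to an estimate of the missing state.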
According to the scheme, a Gilbert-Elliott model and an LSTM-FNN model are established, the Gilbert-Elliott model is utilized for data simulation and supplementation to obtain a data set containing historical measurement data and packet loss data, the data set is used for training the LSTM-FNN model, and the trained LSTM-FNN model is used for estimating the missing state caused by data loss. Compared with the existing EKF-based power grid state estimation method, the LSTM-FNN model in the scheme carries out state estimation by learning the mode in the data, so that dependence on an accurate mathematical model is reduced, and the adaptability and flexibility of the model are improved. Meanwhile, the LSTM-FNN model can process the data set containing the packet loss data, and can provide accurate state estimation under the condition of incomplete data, so that the accuracy and reliability of the state estimation are improved.
Preferably, in step S1, the Gilbert-Elliott model describes transition probabilities between system states through a Markov chain state transition matrix to simulate the data loss process, wherein the mathematical expression of the Markov chain state transition matrix is as follows:

A = [[Pr(γ_{k+1}=0|γ_k=0), Pr(γ_{k+1}=1|γ_k=0)], [Pr(γ_{k+1}=0|γ_k=1), Pr(γ_{k+1}=1|γ_k=1)]] = [[1-p, p], [q, 1-q]]

where A denotes the Markov chain state transition matrix, Pr(γ_{k+1}=0|γ_k=0) = 1-p denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=0, p = Pr(γ_{k+1}=1|γ_k=0) denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=1, q = Pr(γ_{k+1}=0|γ_k=1) denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=0, and Pr(γ_{k+1}=1|γ_k=1) = 1-q denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=1.
In this embodiment, the Gilbert-Elliott model consists of two states, namely a state in which a data packet is successfully transmitted and a state in which the data packet is lost, and transitions between these states are governed by the transition probabilities. Further, the parameters p and q in the mathematical expression of the Markov chain state transition matrix describe the transition characteristics of the system between different states, helping to analyze the dynamic behavior and stability of the system.
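The two-state loss process described above can be sketched as a short simulation. The helper name, the assumption that γ=1 marks a received packet and γ=0 a lost one, and the use of None for lost samples are illustrative choices, not details fixed by the specification:

```python
import random

def simulate_gilbert_elliott(measurements, p, q, seed=None):
    """Simulate Gilbert-Elliott packet loss over a measurement sequence.

    State gamma=1 means the packet arrives, gamma=0 means it is lost.
    p = Pr(gamma_{k+1}=1 | gamma_k=0) and q = Pr(gamma_{k+1}=0 | gamma_k=1),
    matching the transition matrix A = [[1-p, p], [q, 1-q]].
    Lost samples are replaced by None, mimicking missing sensor data.
    """
    rng = random.Random(seed)
    gamma = 1  # start in the "packet received" state
    received = []
    for z in measurements:
        received.append(z if gamma == 1 else None)
        # Advance the Markov chain by one step.
        if gamma == 1:
            gamma = 0 if rng.random() < q else 1
        else:
            gamma = 1 if rng.random() < p else 0
    return received
```

For example, with p = 0 and q = 1 the chain falls into the loss state after the first sample and never recovers, which is the worst-case burst the estimator must cope with.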
Preferably, step S3 specifically comprises the following sub-steps: step S31, deleting repeated values and abnormal values in the dataset containing the historical measurement data and the packet loss data, and filling missing values in that dataset, to obtain a deleted and filled dataset; and step S32, normalizing the data in the deleted and filled dataset to obtain the preprocessed dataset. In this embodiment, deleting the repeated and abnormal values, filling the missing values, and normalizing the data in the deleted and filled dataset improve the data quality and ensure the stability and reliability of the subsequent LSTM-FNN model training.
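A minimal sketch of the two preprocessing sub-steps on a univariate series follows; the 3-sigma outlier rule, the last-observation-carried-forward fill, and min-max normalization are common illustrative choices, since the specification does not fix the concrete rules:

```python
import statistics

def preprocess(samples):
    """Deduplicate, drop 3-sigma outliers, fill gaps, and min-max normalize.

    `samples` is a list of floats, with None marking lost packets.
    """
    # Sub-step S31a: remove consecutive duplicate readings (kept simple here).
    deduped = [s for i, s in enumerate(samples) if i == 0 or s != samples[i - 1]]

    # Sub-step S31b: flag values more than 3 standard deviations from the
    # mean as abnormal, turning them into missing values to be filled.
    observed = [s for s in deduped if s is not None]
    mu, sigma = statistics.mean(observed), statistics.pstdev(observed)
    cleaned = [s if s is None or sigma == 0 or abs(s - mu) <= 3 * sigma else None
               for s in deduped]

    # Sub-step S31c: fill missing values by carrying the last observation forward.
    filled, last = [], observed[0]
    for s in cleaned:
        last = s if s is not None else last
        filled.append(last)

    # Sub-step S32: min-max normalize to [0, 1].
    lo, hi = min(filled), max(filled)
    span = (hi - lo) or 1.0
    return [(s - lo) / span for s in filled]
```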
Preferably, in step S5, the method specifically comprises the following substeps:
Step S51, training LSTM1, FNN1 and FNN2 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the first-stage training, wherein LSTM1 denotes long short-term memory network 1, used for extracting temporal features under the condition of no data loss; FNN1 denotes feedforward neural network 1, used for deeper-level processing of the temporal features extracted by LSTM1; and FNN2 denotes feedforward neural network 2, used for extracting features from the output of FNN1;
Step S52, keeping the parameters of LSTM1, FNN1 and FNN2 obtained in the first-stage training unchanged, and training LSTM2, FNN3 and FNN4 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the second-stage training, wherein LSTM2 denotes long short-term memory network 2, used for extracting temporal features under the condition of data loss; FNN3 denotes feedforward neural network 3, used for deeper-level processing of the temporal features extracted by LSTM2; and FNN4 denotes feedforward neural network 4, used for extracting features from the output of FNN3;
and Step S53, keeping the parameters of LSTM2, FNN3 and FNN4 obtained in the second-stage training unchanged, building a random forest regressor at the output end of FNN4, training the random forest regressor with the output of FNN4, and optimizing the parameters of the random forest regressor by using RandomizedSearchCV and GridSearchCV, thereby completing the third-stage training.
In this embodiment, in step S51, data loss is ignored in the first-stage training, which means that LSTM1 receives true measurement values. Training LSTM1, FNN1 and FNN2 sequentially with the preprocessed dataset effectively reduces the prediction error of the LSTM-FNN model, improves its basic accuracy, simplifies the training complexity when facing non-ideal conditions such as data loss, and provides a firmer and more reliable basis for state estimation under data loss. In step S52, the parameters of LSTM1, FNN1 and FNN2 obtained in the first-stage training are kept unchanged, which helps ensure that these three networks retain the real physical characteristics of the micro-grid learned when no packet is lost. LSTM2, FNN3 and FNN4 are then trained sequentially with the preprocessed dataset to minimize the error between the state estimates and the true states under missing-data conditions. Because the second-stage training considers data loss, the adaptability and robustness of the LSTM-FNN model to data-loss environments are enhanced, so that the model can not only adjust its prediction results under incomplete data but also maintain, or even improve, its estimation accuracy under non-ideal conditions. In step S53, the parameters of LSTM2, FNN3 and FNN4 obtained in the second-stage training are kept unchanged, which helps ensure that these three networks retain the real physical characteristics of the micro-grid learned under packet loss. Training the random forest regressor on the output of FNN4 and optimizing its parameters with RandomizedSearchCV and GridSearchCV helps improve the generalization capability of the LSTM-FNN model.
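The freeze-then-train pattern of steps S51-S53 can be illustrated on a toy two-stage linear model: stage 1 fits the first weight on clean data, and stage 2 keeps it frozen while fitting the second weight on data with simulated loss. The learning rate, the synthetic data, and the single-parameter "layers" are illustrative assumptions, not the invention's networks:

```python
import numpy as np

def stage_train(x, y, w_frozen, steps=200, lr=0.1):
    """Gradient descent on w2 only; the stage-1 weight w_frozen stays fixed."""
    w2 = 0.0
    for _ in range(steps):
        pred = w2 * (w_frozen * x)           # frozen layer feeds trainable layer
        grad = 2 * np.mean((pred - y) * w_frozen * x)
        w2 -= lr * grad
    return w2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x                                   # ground truth: y = 2x

# Stage 1: fit the first "network" on clean data (closed form for the sketch).
w1 = float(np.sum(x * y) / np.sum(x * x))     # recovers the true slope 2.0

# Stage 2: freeze w1 and train the second stage on lossy data,
# simulated here by dropping about 30% of the samples.
mask = rng.random(200) > 0.3
w2 = stage_train(x[mask], y[mask], w_frozen=w1)
```

Because w1 is frozen at 2.0, the second stage converges to w2 near 1.0 so that the composed model w2 * w1 * x still matches y; the same division of labor lets the invention's second-stage networks specialize in the loss pattern without disturbing what the first stage learned.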
Further, RandomizedSearchCV is a hyperparameter optimization method in the Scikit-learn library; by randomly sampling the parameter space, it is particularly suitable for quickly exploring high-dimensional or large parameter spaces. GridSearchCV exhaustively evaluates all possible parameter combinations, providing a more precise optimization scheme for smaller parameter spaces.
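The contrast between the two search strategies can be sketched with a hand-rolled equivalent over a toy scoring function (Scikit-learn's RandomizedSearchCV and GridSearchCV wrap this logic with cross-validation on top); the parameter grid and the score function are illustrative assumptions:

```python
import itertools
import random

def score(params):
    """Toy validation score that peaks at n_estimators=100, max_depth=8."""
    return -((params["n_estimators"] - 100) ** 2 + (params["max_depth"] - 8) ** 2)

grid = {"n_estimators": [50, 100, 200, 400], "max_depth": [4, 8, 16]}

def grid_search(grid):
    """Exhaustively score every combination (4 x 3 = 12 here)."""
    keys = list(grid)
    combos = [dict(zip(keys, vals)) for vals in itertools.product(*grid.values())]
    return max(combos, key=score)

def random_search(grid, n_iter, seed=0):
    """Score only a fixed budget of random draws; scales to huge spaces."""
    rng = random.Random(seed)
    draws = [{k: rng.choice(v) for k, v in grid.items()} for _ in range(n_iter)]
    return max(draws, key=score)

best = grid_search(grid)               # exact optimum within the grid
approx = random_search(grid, n_iter=8) # cheap approximation of it
```

The grid search is guaranteed to find the best combination in the grid at the cost of evaluating all of them, while the randomized search trades that guarantee for a fixed evaluation budget, which is why the scheme uses it to narrow the space before a finer grid search.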
Preferably, the method further comprises the following steps:
The optimal parameter set θ_M of the trained LSTM-FNN model is found by adjusting the parameter θ of the trained LSTM-FNN model so that the error between the predicted value and the actual value of the trained LSTM-FNN model is minimized, wherein the mathematical formula for the optimal parameter set θ_M of the trained LSTM-FNN model is as follows:

θ_M = arg min_θ ||f_nn(y_u, y_v; θ) - y_m||²

wherein θ_M represents the optimal parameter set of the trained LSTM-FNN model, f_nn(y_u, y_v) represents the output value of the trained LSTM-FNN model, i.e., the predicted value, and y_m represents the target value, i.e., the actual value that the trained LSTM-FNN model is expected to predict.
In this embodiment, the optimal parameter set θ M is found by adjusting the parameter θ of the trained LSTM-FNN model, so that the error between the predicted value and the actual value of the trained LSTM-FNN model is minimized, which is beneficial to improving the prediction performance of the trained LSTM-FNN model.
Another aspect of the present application provides a micro-grid dynamic state estimation system considering data loss, the system comprising:
The first building module is used for building a micro-grid model, wherein the micro-grid model is used for simulating the dynamic behavior of the micro-grid;
the second building module is used for building a Gilbert-Elliott model, wherein the Gilbert-Elliott model is used for describing data loss behaviors in a communication network;
the combining module is used for combining the micro-grid model and the Gilbert-Elliott model;
the acquisition module is used for acquiring historical measurement data of the micro-grid from the sensors and the data center of the micro-grid;
the simulation and supplementation module is used for simulating and supplementing historical measurement data of the micro-grid by using a Gilbert-Elliott model so as to obtain a data set containing the historical measurement data and packet loss data;
The preprocessing module is used for preprocessing data in a data set containing historical measurement data and packet loss data to obtain a preprocessed data set;
The building module is used for building an LSTM-FNN model;
the model training module is used for training the LSTM-FNN model by using the preprocessed data set to obtain a trained LSTM-FNN model;
The input module is used for inputting data acquired by the sensors of the micro-grid in real time into the trained LSTM-FNN model to estimate the missing state caused by data loss;
And the output module is used for outputting the estimated value of the missing state of the micro-grid.
According to the micro-grid dynamic state estimation system considering data loss, through the mutual coordination of the first building module, the second building module, the combining module, the acquisition module, the simulation and supplement module, the preprocessing module, the building module, the model training module, the input module and the output module, the estimation of the missing state of the micro-grid caused by the data loss is realized. Compared with the existing EKF-based power grid state estimation method, the LSTM-FNN model in the scheme carries out state estimation by learning the mode in the data, so that dependence on an accurate mathematical model is reduced, and the adaptability and flexibility of the model are improved. Meanwhile, the LSTM-FNN model can process the data set containing the packet loss data, and can provide accurate state estimation under the condition of incomplete data, so that the accuracy and reliability of the state estimation are improved.
Preferably, in the second building module, the Gilbert-Elliott model describes transition probabilities between system states through a Markov chain state transition matrix to simulate the data loss process, wherein the mathematical expression of the Markov chain state transition matrix is as follows:

A = [[Pr(γ_{k+1}=0|γ_k=0), Pr(γ_{k+1}=1|γ_k=0)], [Pr(γ_{k+1}=0|γ_k=1), Pr(γ_{k+1}=1|γ_k=1)]] = [[1-p, p], [q, 1-q]]

where A denotes the Markov chain state transition matrix, Pr(γ_{k+1}=0|γ_k=0) = 1-p denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=0, p = Pr(γ_{k+1}=1|γ_k=0) denotes the probability of transition from system state γ_k=0 to system state γ_{k+1}=1, q = Pr(γ_{k+1}=0|γ_k=1) denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=0, and Pr(γ_{k+1}=1|γ_k=1) = 1-q denotes the probability of transition from system state γ_k=1 to system state γ_{k+1}=1.
In this embodiment, the Gilbert-Elliott model consists of two states, namely a state in which a data packet is successfully transmitted and a state in which the data packet is lost, and transitions between these states are governed by the transition probabilities.
Preferably, the preprocessing module comprises a data deleting sub-module for deleting repeated values and abnormal values in the dataset containing the historical measurement data and the packet loss data, a data filling sub-module for filling missing values in that dataset, and a data normalizing sub-module for normalizing the data in the deleted and filled dataset to obtain the preprocessed dataset. In this embodiment, providing the data deleting sub-module, the data filling sub-module and the data normalizing sub-module improves the data quality and ensures the stability and reliability of the subsequent LSTM-FNN model training.
Preferably, the model training module includes:
The first training sub-module is used for training LSTM1, FNN1 and FNN2 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the first-stage training, wherein LSTM1 denotes long short-term memory network 1, used for extracting temporal features under the condition of no data loss; FNN1 denotes feedforward neural network 1, used for deeper-level processing of the temporal features extracted by LSTM1; and FNN2 denotes feedforward neural network 2, used for extracting features from the output of FNN1;
The first parameter setting sub-module is used for keeping the parameters of the LSTM1, the FNN1 and the FNN2 which are trained in the first stage unchanged;
The second training sub-module is used for training LSTM2, FNN3 and FNN4 in the LSTM-FNN model sequentially by using the preprocessed dataset to complete the second-stage training, wherein LSTM2 denotes long short-term memory network 2, used for extracting temporal features under the condition of data loss; FNN3 denotes feedforward neural network 3, used for deeper-level processing of the temporal features extracted by LSTM2; and FNN4 denotes feedforward neural network 4, used for extracting features from the output of FNN3;
The second parameter setting submodule is used for keeping the parameters of LSTM2, FNN3 and FNN4 which are subjected to the second-stage training unchanged;
the construction submodule is used for constructing a random forest regressor at the output end of the FNN 4;
the third training submodule is used for training the random forest regressor through the output of the FNN 4;
The parameter optimization sub-module is used for optimizing the parameters of the random forest regressor by using RandomizedSearchCV and GridSearchCV.
In this embodiment, providing the first training sub-module effectively reduces the prediction error of the LSTM-FNN model, thereby improving its basic accuracy. Providing the first parameter setting sub-module ensures that the three networks LSTM1, FNN1 and FNN2 retain the real physical characteristics of the micro-grid learned when no packet is lost. Providing the second training sub-module enhances the adaptability and robustness of the LSTM-FNN model to data-loss environments, so that the model can not only adjust its prediction results under incomplete data but also maintain, or even improve, its estimation accuracy under non-ideal conditions. Providing the second parameter setting sub-module ensures that the three networks LSTM2, FNN3 and FNN4 retain the real physical characteristics of the micro-grid learned under packet loss. Providing the construction sub-module, the third training sub-module and the parameter optimization sub-module improves the generalization capability of the LSTM-FNN model.
Preferably, the system further comprises a model parameter optimization module configured to find the optimal parameter set θ_M of the trained LSTM-FNN model by adjusting the parameter θ of the trained LSTM-FNN model so that the error between the predicted value and the actual value of the trained LSTM-FNN model is minimized, wherein the mathematical formula for the optimal parameter set θ_M of the trained LSTM-FNN model is as follows:

θ_M = arg min_θ ||f_nn(y_u, y_v; θ) - y_m||²

wherein θ_M represents the optimal parameter set of the trained LSTM-FNN model, f_nn(y_u, y_v) represents the output value of the trained LSTM-FNN model, i.e., the predicted value, and y_m represents the target value, i.e., the actual value that the trained LSTM-FNN model is expected to predict.
In this embodiment, providing the model parameter optimization module improves the prediction performance of the trained LSTM-FNN model.
Furthermore, functional units in various embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that changes, modifications, substitutions and variations of the above embodiments may be made by those skilled in the art within the scope of the invention.