CN118485973A - Substation safety operation monitoring method and system based on OpenCV - Google Patents
- Publication number
- CN118485973A (application CN202410948694.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- constructing
- adaptive
- scale
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/092—Reinforcement learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses an OpenCV-based substation safety operation monitoring method and system. Pre-installed video cameras collect video images of a substation operation area; video frames are extracted and semantically segmented to obtain a segmented image set. Image features are extracted for each segmented image. A preconfigured adaptive Monte Carlo forest module then selects a corresponding OpenCV processing module for each segmented image, yielding an optimized image data set. Based on that data set, a preconfigured adaptive multi-scale space-time graph module produces spatio-temporal data comprising spatio-temporal behavior patterns, interaction information and trend analysis data, from which abnormal behaviors and abnormal states are identified and output. The scheme improves the accuracy and efficiency of safety monitoring, reduces system resource consumption, enhances the generalization of model configuration, and lowers the cost of use.
Description
Technical Field
The invention relates to safety monitoring technology, and in particular to an OpenCV-based substation safety operation monitoring method and system.
Background
Substations are key nodes of the power system, and their safe operation directly affects the stability and reliability of the power grid. With the rapid development of smart grids, the degree of substation automation continues to rise, yet manual operation and automated monitoring remain indispensable. In a complex power environment, operators face multiple safety risks such as high-voltage electric shock and equipment misoperation. Establishing an efficient and reliable substation safety operation monitoring system therefore has real practical significance: it can monitor the operation process in real time, discover potential hazards promptly, provide data support for safety management, and improve both operating efficiency and the level of safety.
Currently, research on substation safety operation monitoring has made some progress. Traditional approaches rely mainly on fixed cameras and manual monitoring, and suffer from limited coverage, poor real-time performance and similar problems. With the development of computer vision, intelligent monitoring systems based on image analysis have become a research hotspot. Some researchers have proposed deep-learning-based human behavior recognition methods capable of automatically detecting illegal operations; infrared thermal imaging has also been used to monitor equipment temperature anomalies and improve fault early-warning capability. Most of this work, however, remains at the laboratory stage, and its use in a real substation environment still presents challenges.
Although existing research has achieved some results, several key technical problems in substation safety operation monitoring remain to be solved. First, substation environments are complex: working conditions differ greatly between substations and illumination varies widely, so ensuring image quality and analysis accuracy under different weather and time conditions is a challenge. Second, existing abnormal-behavior recognition algorithms are mostly based on predefined rules, struggle to cope with complex and changing job scenes, and lack adaptive capability. In addition, substation equipment is highly varied; accurate identification and state monitoring across different equipment types is technically difficult, and the generalization of existing models falls short, requiring retraining and reconfiguration for each deployment at high cost.
Further research and innovation are therefore needed.
Disclosure of Invention
The invention aims to provide a substation safety operation monitoring method and system based on OpenCV, which aim to solve one of the problems existing in the prior art.
According to one aspect of the application, the substation safety operation monitoring method based on OpenCV is realized by at least two OpenCV modules, and the method comprises the following steps:
Step S1, acquiring video images of a substation operation area through a pre-configured video camera, extracting video frames, and carrying out semantic segmentation on the images to obtain a segmented image set;
Step S2, extracting image features for each segmented image in the segmented image set, wherein the image features comprise content features, quality features and associated features;
Step S3, based on the image features, calling a preconfigured adaptive Monte Carlo forest module to select a corresponding OpenCV processing module for each segmented image, and processing to obtain an optimized image data set;
Step S4, based on the optimized image data set, calling a preconfigured adaptive multi-scale space-time graph module for processing, and acquiring spatio-temporal data, wherein the spatio-temporal data comprises spatio-temporal behavior patterns, interaction information and trend analysis data; based on the spatio-temporal data, abnormal behaviors and abnormal states are identified and output.
According to another aspect of the present application, there is also provided an OpenCV-based substation safety operation monitoring system, including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor which, when executed, implement the OpenCV-based substation safety operation monitoring method according to any of the above aspects.
The method and system have the beneficial effects of improving the accuracy and efficiency of safety monitoring, reducing system resource consumption, enhancing the generalization of model configuration, and lowering the cost of use. They address the problem that traditional safety monitoring systems must be retrained and reconfigured for every type of scene, and even for every customer environment. The multiple OpenCV modules suit different scenes and handle images with different characteristics; they can be customized elastically and extended as circumstances require. Meanwhile, following the idea of random forests, the modules are invoked in combination: each picture is routed to the one or more OpenCV modules that process it best, which improves recognition and provides a foundation for subsequent analysis.
Drawings
Fig. 1 is a flowchart of the present invention.
Fig. 2 is a flowchart of step S1 of the present invention.
Fig. 3 is a flowchart of step S2 of the present invention.
Fig. 4 is a flowchart of step S3 of the present invention.
Fig. 5 is a flowchart of step S4 of the present invention.
Detailed Description
As shown in fig. 1, according to an aspect of the present application, the OpenCV-based substation security operation monitoring method is implemented by at least two OpenCV modules, and the method includes the steps of:
Step S1, acquiring video images of a substation operation area through a pre-configured video camera, extracting video frames, and carrying out semantic segmentation on the images to obtain a segmented image set;
Step S2, extracting image features for each segmented image in the segmented image set, wherein the image features comprise content features, quality features and associated features;
Step S3, based on the image features, calling a preconfigured adaptive Monte Carlo forest module to select a corresponding OpenCV processing module for each segmented image, and processing to obtain an optimized image data set;
Step S4, based on the optimized image data set, calling a preconfigured adaptive multi-scale space-time graph module for processing, and acquiring spatio-temporal data, wherein the spatio-temporal data comprises spatio-temporal behavior patterns, interaction information and trend analysis data; based on the spatio-temporal data, abnormal behaviors and abnormal states are identified and output.
In the embodiment, a video image of a substation operation area is acquired through a pre-configured video camera, and semantic segmentation is performed to obtain a segmented image set. The image acquisition and segmentation technology of OpenCV is utilized, and the image acquisition quality problem under the complex environment of the transformer substation is solved. The semantic segmentation technology can effectively distinguish operators, equipment and background, and lays a foundation for subsequent analysis. The extracted image features include content features, quality features, and associated features. The multi-dimensional feature extraction method fully utilizes the image processing capability of OpenCV, and can comprehensively capture various attributes of the image. The content features reflect visual information of the images, the quality features evaluate the definition and availability of the images, and the associated features reflect the space-time relationship between the images. The comprehensive feature extraction method remarkably improves the accuracy and the robustness of subsequent analysis. Through the adaptive Monte Carlo forest module, the system can dynamically select the most appropriate OpenCV processing module for each segmented image. The self-adaptive processing strategy improves the flexibility and the efficiency of the system, can automatically adjust the processing method according to different scenes and image characteristics, and solves the problem that the traditional fixed processing flow is difficult to cope with changing environments. And processing by using an adaptive multi-scale space-time diagram module to obtain space-time data including space-time behavior patterns, interaction information and trend analysis data. 
The multi-scale analysis method can capture short-, medium- and long-term behavior patterns and trends simultaneously, improving the system's predictive capability and the accuracy of anomaly detection. Based on the resulting spatio-temporal data, the system can identify abnormal behaviors and abnormal states in time and output early warnings. This directly raises the safety management level of the substation: hazards can be found and prevented before danger occurs.
In some embodiments, the OpenCV module includes:
- Edge enhancement module: emphasizes edges using a modified Canny algorithm.
- Low-illumination enhancement module: combines adaptive histogram equalization with a deep-learning denoising network.
- Motion blur correction module: processes motion-blurred images with a blind deconvolution algorithm.
- Color correction module: processes color-cast images with a color constancy algorithm.
- Noise suppression module: handles various types of image noise, such as Gaussian noise and salt-and-pepper noise.
- Contrast enhancement module: suited to scenes with uneven illumination or low contrast.
- Image defogging module: processes images captured in fog or haze.
- Halo removal module: handles halation caused by strong light sources, such as night lighting.
- Shadow elimination module: reduces or removes shadows cast by equipment and structures.
- Perspective correction module: corrects image distortion caused by camera angle.
- Super-resolution module: improves the quality of low-resolution images.
- Image stabilization module: reduces blur caused by camera shake.
- Thermal imaging enhancement module: optimizes images captured by thermal cameras.
- Night vision enhancement module: enhances image quality in low-light conditions.
- Smoke removal module: handles image disturbance caused by smoke.
- Specular reflection suppression module: reduces strong reflections from metal surfaces and the like.
- Dynamic range compression module: processes high-dynamic-range scenes, such as images containing both bright and dark areas.
- Image fusion module: combines different types of sensor data, such as fusing visible-light and infrared images.
- Locally adaptive binarization module: used for text or meter-reading extraction against complex backgrounds.
- Geometric correction module: corrects image distortion caused by wide-angle lenses.
- Color enhancement module: corrects color distortion due to weather or lighting conditions.
- Edge-preserving smoothing module: reduces noise while preserving important edge information.
- Motion detection module: identifies moving objects in the image for security monitoring.
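The module-selection idea can be sketched as a registry of interchangeable processing functions, with new modules added per deployment. The sketch below is a minimal illustration, not the patent's implementation: two plain NumPy stand-ins replace actual OpenCV calls, and the dispatch rule on the image's intensity standard deviation is a hypothetical example (the patent leaves selection to the Monte Carlo forest module of step S3).

```python
import numpy as np

def contrast_stretch(img):
    """Contrast enhancement stand-in: rescale intensities to [0, 255]."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return img.copy()
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def mean_denoise(img):
    """Noise suppression stand-in: 3x3 mean filter."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return (out / 9.0).astype(np.uint8)

# Elastic module registry: entries can be added as scenes require.
MODULES = {"contrast": contrast_stretch, "denoise": mean_denoise}

def dispatch(img, quality):
    """Route the image to one module from a quality feature
    (hypothetical rule: low contrast -> contrast enhancement)."""
    name = "contrast" if quality["std"] < 30 else "denoise"
    return name, MODULES[name](img)

img = np.full((4, 4), 100, dtype=np.uint8)
img[1:3, 1:3] = 120  # low-contrast patch
name, out = dispatch(img, {"std": float(img.std())})
```

A real deployment would register wrappers around the OpenCV routines named above (Canny, CLAHE, deconvolution, and so on) under the same interface.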
As shown in fig. 2, according to an aspect of the present application, the step S1 specifically includes:
Step S11, acquiring image data in the visible, infrared and ultraviolet bands through a preconfigured multispectral camera, and adding a time stamp to each frame to obtain a time-synchronized video frame data set;
Step S12, reading the time-synchronized video frame data set, denoising each video frame in turn with a non-local means method, then correcting and optimizing its contrast with an adaptive gamma method, to obtain a preliminarily processed video frame set;
Step S13, reading the preliminarily processed video frame set, invoking an adaptive optical flow method to compute the motion between adjacent video frames, and eliminating repeated redundant frames according to the result, to obtain a filtered video frame set;
Step S14, reading the filtered video frame set, segmenting each video frame with a graph cut method and optimizing the segmentation boundaries to obtain a segmented video frame data set, performing segmentation quality evaluation, and outputting the segmented image set.
In this embodiment, the system can simultaneously acquire image data in the visible, infrared and ultraviolet bands by using a preconfigured multispectral camera. The multispectral acquisition method expands the perception capability of the system and can capture information which cannot be acquired by a single wave band. For example, the infrared band may detect device temperature anomalies, the ultraviolet band may identify corona discharge, and the visible light provides high resolution scene details. By adding the time stamp to each frame of image, the accurate time synchronization of multi-band data is realized, and a foundation is laid for the subsequent multi-mode data fusion.
The non-local means method is used for denoising, which effectively reduces image noise while preserving important details and is well suited to the complex textures and edge information of a substation environment. An adaptive gamma method then corrects and optimizes the image contrast, automatically adjusting brightness and contrast under different illumination conditions so that images remain clearly visible in all environments. This two-step preprocessing significantly improves the accuracy and reliability of subsequent analysis.
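The adaptive gamma step of S12 can be sketched as follows. The rule used here, gamma = log(target mean) / log(current mean), is one common heuristic chosen for illustration; the patent does not specify how the gamma value is adapted.

```python
import numpy as np

def adaptive_gamma(img, target_mean=0.5):
    """Brighten or darken a frame so its mean intensity moves toward
    target_mean, using gamma = log(target) / log(mean). The choice of
    rule is an assumption; the patent only names 'adaptive gamma'."""
    norm = img.astype(np.float64) / 255.0
    mean = float(norm.mean())
    mean = min(max(mean, 1e-6), 1.0 - 1e-6)  # guard log(0) and log(1)
    gamma = np.log(target_mean) / np.log(mean)
    return (np.power(norm, gamma) * 255.0).astype(np.uint8), gamma

dark = np.full((2, 2), 40, dtype=np.uint8)  # under-exposed frame
out, gamma = adaptive_gamma(dark)
# gamma < 1 brightens the dark frame; its mean lands on target_mean
```

By construction, a frame whose mean is below the target receives gamma < 1 (brightening) and one above the target receives gamma > 1 (darkening), which is the adaptive behavior described above.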
By calculating the motion between adjacent video frames by the adaptive optical flow method, the system can identify and reject repeated redundant video frames. This step not only reduces the data storage and processing burden, but also preserves key change information in the scene. The self-adaptive optical flow method can adapt to different motion modes, and can accurately capture both fast movement and slow change.
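The redundant-frame elimination of step S13 can be sketched as follows. Mean absolute frame difference stands in for the adaptive optical-flow magnitude (an assumption made for brevity; a real implementation would use a dense optical flow estimate), and the motion threshold is a hypothetical parameter.

```python
import numpy as np

def drop_redundant(frames, motion_thresh=2.0):
    """Keep a frame only if it differs enough from the last kept frame.
    Mean absolute difference is a cheap stand-in for optical-flow
    magnitude; frames below the threshold are treated as redundant."""
    kept = [frames[0]]
    for f in frames[1:]:
        motion = np.abs(f.astype(np.int16) - kept[-1].astype(np.int16)).mean()
        if motion >= motion_thresh:
            kept.append(f)
    return kept

static = np.zeros((4, 4), dtype=np.uint8)
moved = np.full((4, 4), 50, dtype=np.uint8)
frames = [static, static.copy(), moved, moved.copy()]
kept = drop_redundant(frames)  # duplicates of each pose are dropped
```

This preserves the key change (the transition between the two poses) while discarding the repeated frames, matching the goal stated above of reducing storage and processing load without losing scene changes.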
The graph cut method is adopted to segment the image and optimize the segmentation boundaries, so the system can accurately divide the image into different semantic regions (such as personnel, equipment and background). The advantage of the graph cut approach is that global and local information can be considered simultaneously, producing consistent segmentation results. Optimizing the segmentation boundaries further improves accuracy, especially for target edges against complex backgrounds. Quality evaluation of the segmentation results then ensures the reliability of subsequent analysis: poorly segmented regions can be identified and fed back for reprocessing, giving quality control across the whole pipeline. The outcome is high-quality, low-redundancy, information-rich image data for subsequent high-level analysis. In particular, the acquisition and fusion of multispectral data strengthen the system's perception of the substation environment and allow detection of anomalies that are hard to find in any single band.
As shown in fig. 3, according to an aspect of the present application, the step S2 is further:
Step S21, reading a set of segmented images, extracting content features for each segmented image, including: multi-scale local binary pattern features, directional gradient histogram and local structure tensor fusion features, color moment and color contrast distribution features, and Gabor filter bank features;
Step S22, reading a set of segmented images, extracting quality features for each segmented image, including: a non-reference image quality evaluation index based on natural scene statistics, a local quality map of multi-scale structure similarity, and a frequency domain quality feature based on phase consistency;
Step S23, reading a set of segmented images, extracting associated features for each segmented image, including: region relation features, space-time cube features, and fusion features of optical flow and track features;
Step S24, invoking the preconfigured feature fusion and dimension reduction module to fuse and reduce the dimensionality of the content features, quality features and associated features, and outputting the final image feature vector set, i.e. the fused graph features.
In this embodiment, the multiscale local binary pattern feature, the directional gradient histogram and local structure tensor fusion feature, the color moment and color contrast distribution feature, and the Gabor filter bank feature are extracted. The multi-dimensional feature extraction method can comprehensively capture texture, shape, color and direction information of the image. The multi-scale local binary pattern features can effectively describe local texture patterns, and are robust to illumination changes. The fusion characteristic of the direction gradient histogram and the local structure tensor combines the edge direction and the local structure information, so that the shape and the gesture of the object can be accurately described. The color moment and color contrast distribution features capture the color distribution and contrast information of the image, helping to distinguish between different types of objects and scenes. Gabor filter bank features provide multi-scale, multi-directional texture analysis suitable for detecting periodic structure and fine texture variations. The method comprises the steps of extracting a non-reference image quality evaluation index based on natural scene statistics, a local quality map of multi-scale structure similarity and frequency domain quality characteristics based on phase consistency. These quality features not only evaluate the overall quality of the image, but also provide distribution information of the local quality: the non-reference image quality evaluation index can evaluate the image quality under the condition of no reference image, and is suitable for real-time monitoring of scenes. The local quality map of multi-scale structural similarity provides detailed quality distribution information, which helps identify areas of poor quality in the image. 
The frequency domain quality features based on phase consistency enable capturing fine distortions of the image, especially in areas with rich edges and textures.
The features are fused by extracting the region relation features, the space-time cube features and the optical flow and track features. These associated features enable capturing spatiotemporal relationship and motion information in a sequence of images: the region relationship features describe the spatial relationship between different regions in the image, helping to understand the scene structure. The space-time cube features introduce a time dimension into feature extraction, which enables capturing patterns of object motion and scene changes. The fusion characteristics of the optical flow and the track characteristics combine the instantaneous motion information and the long-term motion track, and provide comprehensive motion analysis. Through the feature fusion and dimension reduction module, the system can effectively combine various features and reduce the dimension of the feature space. This not only improves the efficiency of the subsequent processing, but also enhances the discrimination of the features.
The embodiment can simultaneously consider the image content, quality and space-time relationship, and improves the accuracy and robustness of subsequent analysis. Particularly in a complex transformer substation environment, the multidimensional feature extraction method can effectively distinguish normal operation and abnormal behavior and identify equipment state change, so that reliable technical support is provided for timely finding potential safety hazards. Meanwhile, the steps of feature fusion and dimension reduction ensure that the system can process a large amount of video data in real time under limited computing resources, and the requirement of real-time monitoring is met.
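As one concrete instance of the content features of step S21, the color moments of each channel can be computed directly. The sketch below is a minimal NumPy version; the exact moment set (mean, standard deviation, skewness) and its ordering in the feature vector are assumptions, since the patent only names "color moment" features.

```python
import numpy as np

def color_moments(img):
    """First three color moments per channel: mean, standard deviation,
    and skewness (cube root of the third central moment). Together they
    summarize the channel's color distribution in 3 numbers."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float64)
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 100  # constant red channel: mean 100, std 0, skew 0
vec = color_moments(img)  # 3 moments x 3 channels = 9-dim vector
```

In the full pipeline this 9-dimensional vector would be concatenated with the texture (LBP, Gabor) and shape (HOG, structure tensor) features before the fusion and dimension reduction of step S24.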
As shown in fig. 4, according to an aspect of the present application, the step S3 is further:
Step S31, reading the image features, initializing the preconfigured adaptive Monte Carlo forest module, and constructing and training a self-organizing map network; performing self-organizing mapping and clustering on the image features, and outputting the clustering result;
Step S32, based on the image features and the adaptive Monte Carlo forest module, performing adaptive Monte Carlo tree search and parameter optimization to obtain optimized parameters and an OpenCV processing module combination strategy;
Step S33, constructing an image evaluation function, evaluating the image processing quality, calculating the quality gain, and outputting the optimized parameters if the quality gain exceeds a threshold;
Step S34, processing the images with the optimized adaptive Monte Carlo forest module to obtain a multi-scale optimized image data set, i.e. a multi-scale space-time atlas.
In this embodiment, the system builds and trains the self-organizing map network by initializing a preconfigured adaptive Monte Carlo forest module. The intrinsic structure and mode of the image features can be automatically found: the use of self-organizing map networks enables nonlinear dimension reduction and visualization of high-dimensional image features. The clustering result provides a basis for subsequent processing strategy selection, so that the system can adopt different processing methods according to different types of image features. The non-supervision learning method enables the system to adapt to new and unseen image characteristic modes, and improves the universality and adaptability of the system.
By performing adaptive Monte Carlo tree search and parameter optimization, the system is able to dynamically adjust parameters of the OpenCV processing module: the Monte Carlo tree search method can efficiently explore optimal parameter combinations in a large scale parameter space. The adaptive optimization process considers the characteristics of the current image, so that parameter selection is more targeted and effective. The method solves the limitation of the traditional fixed parameter method when facing complex and changeable transformer substation environments, and remarkably improves the flexibility and performance of the system.
The system constructs an image evaluation function and evaluates the image processing quality in real time, and calculates the quality gain: the dynamic evaluation mechanism can timely discover the condition of poor processing effect and trigger parameter re-optimization. The calculation of the quality gain provides a clear optimization objective for the system, ensuring continuous improvement of the processing results. This step forms a closed loop feedback system that continuously optimizes the processing strategy to accommodate changing environmental conditions.
Based on the optimized self-adaptive Monte Carlo forest module processing image, the system can generate a multi-scale optimized image data set: the multi-scale processing can capture the global structure and local detail of the image at the same time, so that the comprehensiveness of subsequent analysis is improved. The optimized image data set obviously improves the image quality and provides high-quality input for subsequent advanced analysis (such as behavior recognition and anomaly detection). The method is suitable for processing targets (such as large-scale equipment and small-scale tools) with different scales in a transformer substation environment.
The adaptability and performance of the substation safety monitoring system are thereby improved. In particular, when facing the complex and changeable scenes in a substation (such as illumination changes, weather changes, and equipment state changes), the system automatically adjusts its processing strategy and maintains high-quality image analysis results. This not only improves the accuracy of anomaly detection but also reduces the probability of false alarms and missed detections. Meanwhile, the multi-scale optimized images provide a comprehensive and detailed information basis for subsequent behavior analysis and equipment state monitoring, attending to both large-scale scene changes and tiny abnormal details.
As shown in fig. 5, according to another aspect of the present application, the step S4 further includes:
Step S41, reading the optimized image dataset, extracting time-series features, and constructing an adaptive multi-scale spatio-temporal graph module; constructing a basic spatial relation graph using an improved Delaunay triangulation algorithm, constructing an adaptive time window, and establishing temporal edges to generate short-term, medium-term and long-term multi-scale graph structures; performing graph compression and important-node retention, and outputting a multi-scale spatio-temporal atlas;
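The basic spatial relation graph of step S41 can be sketched with SciPy's standard Delaunay routine (the patent's improved variant and the temporal edges are not reproduced here; the centroid coordinates in the usage are illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

def build_spatial_graph(points):
    """Basic spatial relation graph from 2-D object centroids: nodes are
    objects, edges connect Delaunay neighbours. Uses SciPy's standard
    triangulation, not the improved variant of the application."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            # each triangle contributes its three undirected edges
            a, b = int(simplex[i]), int(simplex[(i + 1) % 3])
            edges.add((min(a, b), max(a, b)))
    return sorted(edges)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
print(build_spatial_graph(pts))
```

Multi-scale temporal edges would then link each node to its counterparts in earlier frames within short-, medium- and long-term windows.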
Step S42, reading the characteristics of the fusion map, and executing space-time pattern recognition and anomaly detection; extracting a space-time mode by applying a multidimensional time sequence decomposition algorithm and a self-adaptive time sequence segmentation method; constructing a behavior dictionary and performing behavior recognition by using an elastic space-time pattern matching algorithm; constructing a multi-scale anomaly detector, including local, global and time sequence anomaly detection; outputting a behavior identification result, an abnormality detection result and an abnormality interpretation through an integrated abnormality scoring system;
step S43, performing multi-main body interaction analysis based on the multi-scale space-time atlas and the behavior recognition result; constructing an interaction graph and constructing a multi-granularity interaction strength calculation method; carrying out group dynamic analysis by applying a community detection algorithm and an incremental community evolution tracking algorithm; constructing a multidimensional centrality index identification key node; constructing an interaction effect propagation model, quantifying the interaction effect by applying a causal inference method, and outputting an interaction mode, group dynamics, a key node list and an interaction effect matrix;
Step S44, reading time sequence characteristics, fusion graph characteristics and group dynamic data, and executing trend analysis and prediction; applying improved wavelet transformation to perform multi-scale trend decomposition; constructing a space-time sequence pattern mining algorithm and association rule analysis; constructing an integrated framework of multivariable time sequence prediction; and constructing a multi-scenario simulation algorithm based on a Monte Carlo method to perform risk assessment, and outputting a trend report, a prediction result and a risk assessment report.
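As a minimal sketch of the Monte Carlo multi-scenario risk assessment in step S44 (the fault probability and loss distribution below are illustrative assumptions, not parameters from the application):

```python
import numpy as np

def mc_risk(n_scenarios=10000, p_fault=0.02, loss_mean=5.0, loss_std=2.0, seed=0):
    """Monte Carlo multi-scenario risk sketch: sample whether a fault occurs
    in each scenario and, if so, a loss magnitude; report the expected loss
    and a 95th-percentile loss. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    fault = rng.random(n_scenarios) < p_fault
    loss = np.where(fault, rng.normal(loss_mean, loss_std, n_scenarios), 0.0)
    loss = np.maximum(loss, 0.0)  # losses cannot be negative
    return float(loss.mean()), float(np.percentile(loss, 95))
```

A real assessment would condition the scenario parameters on the trend decomposition and prediction outputs of the earlier steps.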
In this embodiment, the system constructs short-term, medium-term and long-term multi-scale graph structures through an improved Delaunay triangulation algorithm and an adaptive time window: the improved Delaunay triangulation algorithm efficiently constructs a basic spatial relation graph that accurately reflects the spatial distribution of, and relationships among, objects in the scene. The adaptive time window lets the system capture behavior patterns at different time scales simultaneously, effectively representing everything from instantaneous actions to long-term trends. Graph compression and important-node retention keep the key information while greatly reducing computational complexity.
Extracting a space-time pattern by a multi-dimensional time sequence decomposition and self-adaptive time sequence segmentation method, and constructing a multi-scale anomaly detector: the multi-dimensional time series decomposition can separate out trend, period and random components, and helps to more accurately identify abnormal modes. The self-adaptive time sequence segmentation method can automatically identify key turning points of behaviors, and accuracy of behavior identification is improved. The multi-scale anomaly detector (including local, global and time sequence anomaly detection) can comprehensively capture anomalies of different types and scales, and remarkably reduces false positives and false negatives.
Constructing a multi-granularity interaction strength calculation method, and applying a community detection algorithm and an incremental community evolution tracking algorithm: the multi-granularity interaction strength calculation method can quantify the interaction degree between different entities (such as personnel and equipment) and is helpful for understanding complex operation scenes. The community detection and evolution tracking algorithm can identify and track the group dynamics of operators, and is helpful for finding potential collaboration problems or abnormal group behaviors. The introduction of the multidimensional centrality index helps identify key nodes (such as key equipment or core operators) and provides an important reference for safety management.
The improved wavelet transformation can be used for multi-scale trend decomposition, and an integrated framework of multivariable time sequence prediction is constructed: the multi-scale trend decomposition can separate trend components of different time scales, and is helpful for understanding long-term, medium-term and short-term change modes. The space-time sequence pattern mining algorithm and the association rule analysis can find complex space-time association patterns, and provide basis for prediction. The integration framework of the multivariable time sequence prediction improves the accuracy and the robustness of the prediction, and can simultaneously consider the influence of a plurality of related factors. The multi-scenario simulation based on the Monte Carlo method provides a probabilistic view angle for risk assessment, and is beneficial to formulating a more comprehensive security policy. The intelligent level of the transformer substation safety monitoring system is improved. The system not only can detect abnormal behaviors and states in real time, but also can deeply analyze interaction modes among multiple subjects and predict potential risks. The multi-scale and multi-angle analysis method is suitable for the complex operation environment of the transformer substation, and can simultaneously pay attention to instant potential safety hazards and long-term safety trends.
According to one aspect of the present application, in the step S12, each video frame is sequentially denoised by adopting a non-local mean method, and the image contrast of the video frame is corrected and optimized by adopting an adaptive gamma method, so as to obtain a primarily processed video frame set, which further includes:
Step S121, calculating a local covariance matrix of each pixel neighborhood, searching a similar block by using principal component analysis and extracting a main noise direction; dynamically adjusting the shape of the search window in consideration of the main direction of noise when searching for similar blocks;
step S122, introducing an adaptive weight factor when calculating a weighted average value, and dynamically adjusting according to the block similarity and the noise intensity; outputting the denoised video frame data set;
Step S123, reading the denoised video frame dataset, calculating a histogram of each frame and estimating the overall brightness; constructing an adaptive gamma function gamma(L) = a*exp(-b*L) + c, where a, b and c are preset parameters, L is the overall brightness, and exp() denotes the exponential function; applying the gamma correction to each pixel; and outputting the contrast-optimized video frame dataset, namely the primarily processed video frame set.
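The adaptive gamma function of step S123 might be sketched as follows; the preset values of a, b and c are illustrative guesses, not values from the application:

```python
import numpy as np

def adaptive_gamma(img, a=1.5, b=2.0, c=0.5):
    """Contrast correction with gamma(L) = a*exp(-b*L) + c, where L is the
    overall brightness normalised to [0, 1]. The presets a, b, c are
    illustrative guesses."""
    L = img.astype(np.float64).mean() / 255.0
    gamma = a * np.exp(-b * L) + c
    # apply the same exponent to every pixel of the frame
    out = 255.0 * (img.astype(np.float64) / 255.0) ** gamma
    return out.astype(np.uint8), gamma
```

With these presets the exponent decreases monotonically as overall brightness L rises, so darker frames receive a larger gamma than brighter ones.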
In this embodiment, the denoising process is performed by an improved non-local mean method, and the system achieves high-quality noise suppression: specifically, by calculating a local covariance matrix of each pixel neighborhood and searching for similar blocks using principal component analysis, noise patterns in an image can be more accurately identified. The shape of the search window is dynamically adjusted by considering the main direction of noise, so that the accuracy and the efficiency of similar block search are improved. And an adaptive weight factor is introduced, and the denoising process is more flexible and effective according to the block similarity and the noise intensity dynamic adjustment. The method is suitable for processing the images under the complex environment of the transformer substation, and can effectively remove noise while keeping important detail information such as equipment edges and small-sized components.
By calculating the histogram of each frame and estimating the overall brightness, the system automatically adapts to different lighting conditions. The constructed adaptive gamma function gamma(L) = a*exp(-b*L) + c, where a, b and c are preset parameters, L is the overall brightness, and exp() denotes the exponential function, allows the contrast correction to be adjusted automatically according to the overall brightness of the image. Applying the gamma correction to each pixel improves the overall visual quality of the image.
Various types of image noise, such as noise caused by electromagnetic interference, low light conditions, or the camera itself, can be effectively handled. While removing noise, important detailed information such as microcrack or anomaly indications of the device surface are retained. The device is automatically adapted to different illumination conditions, and can provide clear and discernable images in both an outdoor area where sunlight is directly irradiated and an indoor area where the sunlight is insufficient. The accuracy and reliability of subsequent image analysis and processing steps (such as target detection, behavior recognition and the like) are improved.
According to one aspect of the present application, the step S13 specifically includes:
Step S131, using the contrast-optimized video frame dataset, calculating a dense optical flow field F between consecutive frames with the Farneback optical flow algorithm;
Step S132, calculating a gradient magnitude map of the optical flow field and binarizing it with an adaptive threshold T = mean(G) + k*std(G), where G is the gradient magnitude of the optical flow field, mean() denotes the average, k is a coefficient controlling threshold sensitivity, and std() denotes the standard deviation;
Step S133, calculating the proportion of non-zero pixels in the binarization map, and eliminating the current frame if the proportion is smaller than a preset threshold value; and outputting the video frame data set with redundant frames removed, namely the filtered video frame set.
In this embodiment, the dense optical flow field between successive frames is calculated by applying the Farneback optical flow algorithm: the Farneback algorithm provides accurate pixel-level motion estimation and is suitable for fine motion detection in complex scenes. Computing a dense optical flow field lets the system capture comprehensive motion information across the whole scene, not just for specific targets. This suits the substation environment, where personnel movement and equipment state changes must be detected at the same time.
The construction of the adaptive threshold T = mean(G) + k*std(G), where G is the gradient magnitude of the optical flow field, mean() denotes the average, k is a coefficient controlling threshold sensitivity, and std() denotes the standard deviation, enables the system to adapt automatically to the motion characteristics of different scenes. It works effectively under different illumination conditions and scene complexities, improving the robustness of the system.
By calculating the proportion of non-zero pixels in the binarized map, the system is able to intelligently identify and reject redundant frames: the method can effectively reduce the data storage and processing burden, and simultaneously keep key scene change information. By setting a proper threshold, the system can balance information retention and storage efficiency and adapt to different monitoring requirements.
The embodiment greatly reduces the storage requirement, and enables long-time and high-quality video monitoring. The calculation burden of the subsequent processing steps is reduced, and the real-time performance of the whole system is improved. Key change information in the scene is reserved, and important safety-related events are ensured not to be missed. The efficiency of abnormal event detection is improved because the system only needs to process frames containing significant changes. The method is suitable for long-term continuous monitoring requirements of the transformer substation. Important operation behaviors and equipment state changes can be effectively captured, and meanwhile, the resource consumption for storing and processing unnecessary static scenes is greatly reduced. Not only does the cost effectiveness of the system increase, but it also makes quick playback and event retrieval more efficient. By intelligently retaining key information, the method provides high-quality and low-redundancy data input for subsequent advanced analysis (such as behavior pattern recognition, anomaly detection and the like), thereby improving the performance and reliability of the whole safety monitoring system.
According to an aspect of the present application, the step S14 specifically includes:
Step S141, reading the screened video frame set, constructing and carrying out preliminary segmentation based on a super-pixel self-adaptive region growing algorithm, wherein the method specifically comprises the following steps:
Initializing a superpixel set using a simple linear iterative clustering algorithm, and calculating the average color and texture features of each superpixel; constructing a superpixel adjacency graph, where edge weights are the feature similarity of adjacent superpixels; constructing an adaptive growth threshold T(si) = μ(si) + α*σ(si), where μ and σ are the local mean and standard deviation and α is a coefficient controlling the sensitivity of the growth threshold; performing region growing based on the superpixel adjacency graph and the adaptive growth threshold, and merging similar regions; outputting a preliminary segmentation result;
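The adaptive growth threshold T(si) = μ(si) + α*σ(si) can be sketched per labeled superpixel as below (the SLIC initialization and the region-merging loop are omitted):

```python
import numpy as np

def growth_thresholds(image, labels, alpha=0.5):
    """Per-superpixel adaptive growth threshold T(s_i) = mu(s_i) +
    alpha*sigma(s_i), computed from each superpixel's intensity
    statistics. alpha is an illustrative default."""
    thresholds = {}
    for lab in np.unique(labels):
        vals = image[labels == lab].astype(np.float64)
        # local mean plus alpha times local standard deviation
        thresholds[int(lab)] = vals.mean() + alpha * vals.std()
    return thresholds
```

Region growing would then merge a neighbouring superpixel only if its feature distance falls under the threshold of the growing region.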
step S142, reading a preliminary segmentation result, and optimizing a segmentation boundary by applying an improved graph cutting method, wherein the method specifically comprises the following steps:
Constructing a cutting graph, wherein nodes are areas in a preliminary segmentation result, and edges are adjacent areas; constructing an inter-region similarity measure, a boundary penalty term and an energy function of graph cutting; iteratively optimizing an energy function using an alpha expansion algorithm; obtaining an optimized segmentation result;
Step S143, reading the optimized segmentation result, repairing the small region and the cavity by using morphological post-processing, wherein the method specifically comprises the following steps:
Removing the area smaller than the threshold value by applying area open operation; filling the internal cavity by using a reconstruction open operation; smoothing the region boundary by applying a conditional expansion algorithm; outputting the repaired segmentation result;
Step S144, reading the repaired segmentation result, and constructing a segmentation quality evaluation index based on fractal dimension, wherein the segmentation quality evaluation index specifically comprises the following steps:
Calculating the fractal dimension of each segmented region using a box-counting method; calculating a global segmentation quality index Q = Σ(wi * di), where wi is the area weight of region i and di is its fractal dimension; if the global segmentation quality index Q is below the threshold T_Q, marking the corresponding regions as regions to be optimized; obtaining a quality evaluation result and the to-be-optimized region marks;
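The box-counting fractal dimension of step S144 might be sketched as follows (square crops and power-of-two box sizes are assumed for brevity):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting fractal dimension of a binary region mask: the slope of
    log N(s) against log(1/s), where N(s) is the number of boxes of side s
    containing at least one foreground pixel."""
    mask = np.asarray(mask, dtype=bool)
    n = min(mask.shape)
    counts = []
    for s in sizes:
        m = mask[:(n // s) * s, :(n // s) * s]
        # collapse each s-by-s box to a single "occupied" flag
        boxes = m.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(int(boxes.sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, dtype=float)),
                          np.log(np.asarray(counts, dtype=float)), 1)
    return float(slope)
```

The global index Q = Σ(wi * di) then follows by area-weighting these per-region dimensions.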
step S145, reading the region to be optimized and the repaired segmentation result, and applying an active contour model to the region to be optimized, specifically:
Initializing a contour as a current segmentation boundary; constructing an external force field based on gradient vector flow; introducing a curvature constraint term to prevent excessive deformation of the profile; iteratively updating the profile until convergence or maximum iteration number is reached; outputting the optimized region outline;
step S146, reading the repaired segmentation result and the optimized region outline, and integrating all the processing results, specifically:
Replacing the corresponding region in the repaired segmentation result with the optimized region contour; a final set of segmented images is generated.
In this embodiment, the use of a Simple Linear Iterative Clustering (SLIC) algorithm generates compact, uniform superpixels, providing a good base unit for subsequent segmentation. Calculating the average color and texture features of each superpixel helps to accurately distinguish different scene elements. The adaptive growth threshold T(si) = μ(si) + α*σ(si) lets the segmentation process adapt to changes in local image features. The method suits the complex substation environment and can effectively segment elements such as equipment, personnel and background. A cut graph is constructed with the regions of the preliminary segmentation result as nodes and the relationships between adjacent regions as edges, so that inter-region relationships are considered comprehensively. Inter-region similarity measures and boundary penalty terms are introduced so that the optimization balances intra-region consistency against boundary smoothness. The energy function is iteratively optimized with the alpha-expansion algorithm, allowing efficient global optimization over large images.
The area open operation removes the area smaller than the threshold value, effectively eliminating noise and tiny mis-segmentation areas. The reconstruction operation fills the internal cavity, and the continuity and the integrity of the segmentation result are improved. The conditional expansion algorithm smoothes the region boundaries so that the segmentation result is more natural and accurate.
Calculating the fractal dimension of each segmented region with the box-counting method effectively measures the complexity and texture characteristics of the region. The global segmentation quality index Q = Σ(wi * di), where wi is the area weight of region i and di is its fractal dimension, takes the importance of different regions into account. By setting the quality threshold T_Q, the system automatically identifies areas that need further optimization.
The construction of the external force field based on the gradient vector flow enables the contour to better fit the edge of the target. The introduction of curvature constraint term prevents excessive deformation of the contour, and maintains smoothness and rationality of the segmentation result. The iterative update process ensures convergence and stability of the optimization results.
The present method can accurately segment the various elements in a substation environment, such as equipment, personnel and operating tools, providing a solid foundation for subsequent target identification and tracking. The optimized segmentation boundaries let the system pinpoint equipment edges, helping to detect equipment state changes and potential faults. The automated quality assessment and optimization process ensures consistency and reliability of the segmentation results and reduces the need for human intervention.
By accurately identifying and tracking operators, behavioral analysis and security management is facilitated. The device area is accurately segmented, and reliable image data is provided for device state monitoring and anomaly detection. By high-quality scene segmentation, the accuracy of subsequent analysis (such as abnormal behavior detection, equipment state evaluation and the like) is improved.
According to an aspect of the present application, in the step S21, the process of extracting the content features specifically includes:
Step S211, reading a segmented image set, and sequentially carrying out Gaussian pyramid decomposition on the segmented image to obtain a multi-scale image set; calculating the local variance of the pixel for each scale, constructing a self-adaptive threshold function and calculating the local binary pattern characteristic of each scale; selecting the scale combination with the most discrimination by using an information gain criterion; outputting a multi-scale local binary pattern feature vector;
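A plain 8-neighbour local binary pattern, the building block of step S211, can be sketched as follows; the adaptive threshold function and multi-scale pyramid selection described in the step are omitted:

```python
import numpy as np

def lbp_codes(gray):
    """Plain 8-neighbour LBP codes for interior pixels: each neighbour that
    is >= the centre pixel contributes one bit. Illustrative sketch of the
    basic operator only."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the neighbour at offset (dy, dx)
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

The multi-scale variant would apply this at each level of the Gaussian pyramid and keep the scales selected by information gain.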
Step S212, reading a segmented image set, and calculating the gradient amplitude and direction of the image; constructing a local structure tensor and calculating a characteristic value and a characteristic vector thereof; constructing local anisotropic measurement and introducing anisotropic weight in the calculation of the directional gradient histogram; calculating the weighted direction gradient histogram feature to obtain a fusion feature;
Step S213, reading the segmented image set and converting the images to the LAB color space; calculating a global color contrast map and obtaining salient regions via adaptive segmentation with the Otsu algorithm; calculating adaptive color moments of the salient regions; constructing a color contrast distribution histogram;
Step S214, reading a segmented image set, and constructing a direction self-adaptive function and a self-adaptive Gabor filter; automatically selecting filter parameters by using a particle swarm optimization algorithm; applying the optimized Gabor filter group to the image to obtain a response atlas; the statistical features of each response plot are calculated.
In this embodiment, a multi-scale image set obtained through Gaussian pyramid decomposition captures texture information at different scales. The adaptive threshold function makes local binary pattern feature extraction more robust, adapting to the texture characteristics of different areas. Using the information gain criterion to select the most discriminative scale combination improves feature effectiveness and computational efficiency. The method suits analysis of complex textures in the substation environment, such as detection of fine changes on equipment surfaces. Computing the gradient magnitude and direction of the image, combined with the eigenvalues and eigenvectors of the local structure tensor, comprehensively captures local structural information. Constructing a local anisotropy measure and introducing anisotropic weights into the histogram-of-oriented-gradients computation improves feature sensitivity to changes in direction and structure, which helps in analyzing the shape and pose of substation equipment and detecting abnormal states or position changes. A global color contrast map is calculated in the LAB color space and segmented adaptively with the Otsu algorithm to obtain salient regions. The adaptive color moments and color contrast distribution histogram of the salient regions comprehensively capture the color features of the image, so color anomalies in the substation environment, such as discoloration of insulating materials or spark discharge, can be effectively identified.
The direction adaptive function and the adaptive Gabor filter are constructed so that the filter can adapt to the local direction characteristics of the image. The filter parameters are automatically selected by using a particle swarm optimization algorithm, so that the adaptability and the efficiency of feature extraction are improved. And applying the optimized Gabor filter set to the image to obtain a multidirectional and multi-scale response chart set. The method is suitable for analyzing texture and direction characteristics of substation equipment, such as crack detection or wire abnormality detection on the surface of an insulator.
The multi-scale local binary pattern feature enables the system to detect small changes in the device surface, such as insulation degradation or corrosion of the metal surface. The fusion characteristic of the direction gradient histogram and the local structure tensor improves the sensitivity of the system to the posture change of the equipment, and is helpful for timely finding the inclination or displacement of the equipment. The color moment and color contrast distribution characteristics enable the system to quickly identify abnormal color changes, such as oil leakage of an oil immersed transformer or aging and discoloration of insulating materials. The Gabor filter group characteristic enhances the analysis capability of the system on complex textures, and can be used for detecting surface damage of a high-voltage circuit or pollution flashover of an insulator.
In a word, the abnormality detection capability of the system in a complex environment is improved. The method not only can capture obvious abnormality, but also can identify potential safety hazards, and provides reliable basis for preventive maintenance and timely intervention. Meanwhile, the self-adaptability and the optimized construction of the feature extraction process ensure that the system can keep high-efficiency and stable performance under different environmental conditions (such as different illumination and weather conditions).
According to one aspect of the present application, the step S22 of extracting quality features specifically includes:
step S221, reading a segmented image set, and constructing a non-reference image quality evaluation index based on natural scene statistics, wherein the non-reference image quality evaluation index specifically comprises the following steps:
Carrying out local mean value reduction and variance normalization on the image; calculating generalized Gaussian distribution parameters of the normalized image; extracting paired product statistical characteristics of the normalized images; calculating local second-order statistical characteristics, namely generalized Gaussian distribution parameters of a variance field; constructing and outputting a feature vector;
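The local mean removal and variance normalization of step S221 (the MSCN step common to natural-scene-statistics quality metrics) might be sketched as follows, using a 3x3 box window rather than the Gaussian window such metrics usually employ:

```python
import numpy as np

def mscn(image, eps=1.0):
    """Mean-subtracted contrast-normalised (MSCN) coefficients: subtract the
    local mean and divide by the local standard deviation plus eps. A 3x3
    box window is used here for simplicity."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    # nine shifted views covering the 3x3 neighbourhood of every pixel
    shifts = [pad[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    mu = sum(shifts) / 9.0
    mu2 = sum(s * s for s in shifts) / 9.0
    sigma = np.sqrt(np.maximum(mu2 - mu * mu, 0.0))
    return (img - mu) / (sigma + eps)
```

Generalized Gaussian distribution parameters fitted to these coefficients (and to their pairwise products) then form the feature vector.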
Step S222, reading a segmented image set, and constructing a local quality map based on multi-scale structural similarity, wherein the method specifically comprises the following steps:
Performing multi-scale decomposition on the image to obtain an image decomposition set; calculating a local structure similarity graph for each scale; constructing a scale weight function and calculating a weighted multi-scale local structure similarity graph; dividing the multi-scale local structure similarity graph by using the self-adaptive threshold to obtain a local quality graph; extracting statistical characteristics of the local quality map, including area ratio, average value and standard deviation;
Step S223, reading a segmented image set, and constructing a frequency domain quality feature based on phase consistency, wherein the frequency domain quality feature specifically comprises the following steps:
Performing discrete cosine transform on the image; calculating the amplitude and phase of the discrete cosine transform coefficient; constructing a phase consistency measurement and a phase consistency diagram; calculating the directional entropy and the scale entropy for constructing the phase consistency diagram; and extracting statistical moment for constructing the phase consistency graph.
In this embodiment, the local mean value reduction and variance normalization are performed on the image, so that the influence of the global brightness and contrast variation can be effectively removed. And calculating generalized Gaussian distribution parameters of the normalized image, and capturing the statistical characteristics of the image. And extracting the paired product statistical characteristics of the normalized image, and reflecting the local structure information of the image. The texture characteristics of the image are further described by calculating local second-order statistical features, namely generalized Gaussian distribution parameters of a variance field. The quality of the substation monitoring image can be accurately estimated under the condition that no reference image exists, and the method is suitable for real-time monitoring scenes.
The image is subjected to multi-scale decomposition, and image structures with different scales can be analyzed simultaneously. And calculating a local structure similarity graph of each scale, and comprehensively capturing the structure information of the image. And constructing a scale weight function, calculating a weighted multi-scale local structure similarity graph, and comprehensively considering the importance of different scales. And dividing a multi-scale local structural similarity (MS-SSIM) graph by using the self-adaptive threshold to obtain a local quality graph, and visually displaying the spatial distribution of the image quality. And extracting the statistical characteristics of the local quality map, including area ratio, average value and standard deviation, and quantifying the overall quality of the image. The method and the device have the advantages that the areas with poor quality in the monitoring images of the transformer substations are accurately positioned, and image degradation which can affect safety monitoring is recognized.
And calculating the amplitude and the phase of the discrete cosine transform coefficient, and comprehensively capturing the frequency domain information of the image. And constructing a phase consistency measure and a phase consistency graph, and reflecting the edge and texture definition of the image. And calculating the directional entropy and the scale entropy of the phase consistency diagram, and quantifying the directional and scale information of the image. The statistical moment of the phase consistency diagram is extracted, and the frequency domain characteristics of the image are further described. The method is suitable for detecting degradation phenomena such as blurring, noise and the like in the monitoring image of the transformer substation.
The non-reference quality assessment method enables the system to monitor the image quality in real time and discover the image degradation problem caused by weather, illumination or equipment failure in time. The multi-scale structural similarity analysis can accurately position quality problem areas in the images, and is helpful for a system to judge whether the camera needs to be adjusted or the images are enhanced. The phase consistency feature enables the system to effectively identify image blurring and detail loss caused by vibration, defocus or compression, ensuring clear visibility of key devices and areas. The method can automatically identify and mark the low-quality image, and avoid misjudgment or missed judgment caused by poor image quality. Provides accurate guidance for image enhancement and restoration, and enables the system to improve image quality in a targeted manner. By continuously monitoring the image quality, the system can timely discover the performance degradation of the image pickup device and prompt maintenance or replacement.
In this embodiment, the adaptive Monte Carlo forest (AMCF) algorithm can effectively process the complex and dynamically changing scenes of the substation through adaptive sampling and dynamic tree-structure optimization. The input is an image feature vector, and the output is an optimized OpenCV processing module combination strategy. The advantages include:
Adaptivity: the sampling strategy is dynamically adjusted according to scene changes, adapting to different weather and illumination conditions. Robustness: multi-tree integration improves resistance to noise and abnormal data. Real-time updating: an incremental learning mode is adopted, meeting the real-time requirements of substation monitoring.
According to one aspect of the application, a multi-scale space-time diagram (MSTG) effectively captures the space-time interaction mode of personnel and equipment in a transformer substation by constructing a multi-level space-time relation diagram. The input is an optimized image data set, and the output is space-time data, including behavior patterns, interaction information and trend analysis data. The advantages include:
Multi-scale representation: short-term (seconds), medium-term (minutes) and long-term (hours) spatio-temporal patterns are captured simultaneously.
Relationship modeling: the complex relationships among entities in the substation are effectively represented by the graph structure.
Interpretability: the generated graph structure is convenient to visualize and explain, helping security management personnel understand system decisions.
In another embodiment of the present application, the process of multispectral data preprocessing, data fusion and transformation is as follows:
Image registration: the visible, infrared and ultraviolet images are accurately registered using the scale-invariant feature transform (SIFT) algorithm and an affine transformation.
Noise removal: adaptive wiener filtering is applied to each band image to remove sensor noise.
Fusion based on principal component analysis (PCA): the three registered band images are flattened into vectors, the covariance matrix is calculated and eigenvalue decomposition is performed, and the first two principal components are selected to reconstruct the fused image.
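The PCA fusion step above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: it assumes three co-registered, equal-sized band images and, as a further assumption, combines the two leading components with variance-proportional weights to form a single fused plane.

```python
import numpy as np

def pca_fuse(bands):
    """Fuse co-registered band images by projecting the per-pixel band
    vectors onto the two leading principal components (illustrative)."""
    h, w = bands[0].shape
    # One row per band, one column per pixel.
    X = np.stack([b.ravel() for b in bands])          # (n_bands, h*w)
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(Xc)                                  # (n_bands, n_bands)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns ascending order; take the two leading components.
    lead = eigvecs[:, ::-1][:, :2]
    scores = lead.T @ Xc                              # (2, h*w)
    # Assumed reconstruction: weight components by explained variance.
    lam = eigvals[::-1][:2]
    weights = lam / lam.sum()
    fused = (weights[:, None] * scores).sum(axis=0)
    return fused.reshape(h, w)
```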
Fusion based on the wavelet transform: a discrete wavelet transform (DWT) is performed on the three band images; coefficient fusion is carried out in the wavelet domain, retaining the high-frequency details from the visible-light image and the low-frequency information from the infrared and ultraviolet images; the inverse wavelet transform is then performed to obtain the fused image.
Fusion based on deep learning: a multi-branch convolutional neural network is constructed, with each branch processing an image of a band. An attention mechanism is used to adaptively fuse at the feature map level. And outputting the fused characteristic diagram, and reconstructing to obtain a fused image.
And evaluating the fusion effect by using indexes such as information entropy, structural Similarity (SSIM) and the like, and selecting an optimal fusion result as input of subsequent processing.
In another embodiment of the present application, the process of optimizing the image preprocessing parameters is specifically as follows:
An adaptive non-local means denoising, comprising: search window size adaptation, in which the search window size is dynamically adjusted according to the local variance of the image. Window size W = base_size × (1 + α × local_variance), where base_size is the base window size, α is the adjustment coefficient, and local_variance is the local variance of the image.
And (3) self-adapting the filtering strength: the filtering strength is adjusted according to the overall noise level of the image. Filter strength h=k×σ, where σ is the estimated noise standard deviation and k is the adjustment coefficient.
Similarity weight adaptation: image gradient information is introduced, and gradient differences are taken into account when computing block similarity. Similarity S = exp(-||patch_i - patch_j||² / (h / (1 + β × ||grad_i - grad_j||))), where patch_i and patch_j represent two image blocks (local regions extracted from the image), β is the adjustment coefficient controlling the influence of the gradient difference on the similarity, and grad_i and grad_j represent the gradient information of patch_i and patch_j, respectively.
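The adaptive window size and the gradient-aware similarity above can be sketched as follows. This is a minimal NumPy illustration; the function names and the default values of h, α and β are assumptions for demonstration only.

```python
import numpy as np

def adaptive_window_size(local_variance, base_size=7, alpha=0.1):
    """W = base_size * (1 + alpha * local_variance), rounded to the
    nearest odd integer so the window stays centred."""
    w = int(round(base_size * (1.0 + alpha * local_variance)))
    return w if w % 2 == 1 else w + 1

def gradient_aware_similarity(patch_i, patch_j, grad_i, grad_j,
                              h=10.0, beta=0.5):
    """S = exp(-||patch_i - patch_j||^2 / (h / (1 + beta*||grad_i - grad_j||))).
    A larger gradient difference shrinks the effective kernel width,
    penalising structurally dissimilar blocks."""
    diff = np.sum((patch_i - patch_j) ** 2)
    grad_diff = np.linalg.norm(grad_i - grad_j)
    h_eff = h / (1.0 + beta * grad_diff)
    return float(np.exp(-diff / h_eff))
```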
An adaptive gamma correction, comprising: brightness adaptation, in which the gamma value is dynamically adjusted according to the overall brightness level of the image:
γ = a × exp(-b × brightness_level) + c, where a, b and c are preset parameters.
Contrast adaptation introduces a local contrast term so that low-contrast regions are enhanced more strongly: γ_local = γ × (1 - w × local_contrast), where w is a weight coefficient and local_contrast is the local contrast. For color retention, gamma correction is performed in the LAB color space and applied only to the L channel, preserving the original colors.
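A minimal sketch of the brightness-adaptive branch, assuming illustrative presets a, b, c and a grayscale uint8 image (the LAB variant would apply the same mapping to the L channel only):

```python
import numpy as np

def adaptive_gamma(img, a=1.5, b=2.0, c=0.5):
    """Set gamma from the mean luminance L in [0, 1] via
    gamma = a*exp(-b*L) + c, then apply p_out = 255*(p/255)^(1/gamma).
    Dark scenes yield a larger gamma and are brightened; a, b, c are
    illustrative presets, not values from the patent."""
    img = img.astype(np.float64)
    L = img.mean() / 255.0                 # overall brightness level
    gamma = a * np.exp(-b * L) + c
    out = 255.0 * (img / 255.0) ** (1.0 / gamma)
    return np.clip(out, 0, 255).astype(np.uint8)
```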
In another embodiment of the application, the process of extracting the specific scene characteristics of the transformer substation is specifically as follows:
Extracting power equipment contour features, comprising: the equipment contour is extracted using the Canny edge detector; straight lines and circles are detected with the Hough transform to identify geometric features of equipment such as transformers and switches; and shape descriptors of the contours, including Hu moments and Zernike moments, are computed.
Extracting personnel operation posture features, comprising: key-point coordinates are extracted using the OpenPose human pose estimation algorithm; the relative positions and angles among the key points are calculated to construct the posture feature vector; and a temporal model, such as a long short-term memory (LSTM) network, is used to capture motion sequence features.
Extracting device state features, comprising: temperature mapping is performed on the infrared image to extract the temperature distribution characteristics of the equipment; an image segmentation algorithm is used to identify the device meter region and extract reading features; and an optical flow algorithm is used to detect the motion state of equipment, such as switch opening/closing and mechanical-arm movement.
In another embodiment of the present application, the process of constructing and quantifying abnormal behavior based on the safety specification of the power industry specifically includes:
Illegal operating distance detection, comprising: a safe distance threshold D_safe is defined, set according to the voltage level. The minimum distance D_min of the detected person from the charged equipment is calculated. If D_min < D_safe, an anomaly alarm is triggered. Quantization index: safety margin = (D_min - D_safe) / D_safe.
Detecting that safety equipment is not worn, comprising: key safety equipment such as safety helmets and insulating gloves is identified using a target detection algorithm. A list of required safety equipment L_required is defined, and the intersection of the detected equipment set L_detected with L_required is calculated. If the intersection is incomplete, an anomaly alarm is triggered. Quantization index: safety equipment integrity = |L_detected ∩ L_required| / |L_required|.
Detecting a violation of the operation order, comprising: a standard operation flow sequence S_standard is defined. The actual operation sequence S_actual is identified using a temporal model, such as an HMM. The edit distance D_edit between S_actual and S_standard is calculated. If D_edit > threshold, an anomaly alarm is triggered. Quantization index: operation normativity = 1 - D_edit / len(S_standard), where len() returns the length of the sequence.
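The three quantization indices above can be sketched in plain Python. This is a hedged illustration; a standard Levenshtein routine stands in for the edit distance, and the function names are assumptions.

```python
def safety_margin(d_min, d_safe):
    """(D_min - D_safe) / D_safe; a negative value means a violation."""
    return (d_min - d_safe) / d_safe

def equipment_integrity(detected, required):
    """|L_detected ∩ L_required| / |L_required|; 1.0 means fully equipped."""
    required = set(required)
    return len(set(detected) & required) / len(required)

def edit_distance(a, b):
    """Levenshtein distance between two operation sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def operation_normativity(actual, standard):
    """1 - D_edit / len(S_standard), floored at zero."""
    return max(0.0, 1.0 - edit_distance(actual, standard) / len(standard))
```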
In another embodiment of the present application, the process of trend analysis of key indicators of a transformer substation specifically includes:
Analyzing the equipment temperature change trend, comprising: for each key device, time-series temperature data T(t) is extracted. A wavelet transform is applied for multi-scale decomposition, separating trend, periodic and noise components. Linear regression on the trend component yields slope k and intercept b. Future temperatures are predicted using an ARIMA model. A predicted 24-hour temperature curve is output together with the temperature change rate k.
Analyzing the person behavior pattern changes, including: the detected person behavior is encoded as a sequence of discrete states. The normal behavior pattern is learned using Hidden Markov Models (HMMs). Likelihood probabilities are calculated for the new observation sequence, where λ is the learned HMM parameter. The trend of the likelihood probability is tracked, and if the likelihood probability continuously decreases, the behavior pattern deviates. And outputting a behavior pattern deviation curve and an abnormal behavior early warning time point.
Analyzing device status trends, comprising: time series data of equipment state indexes (such as switching times, load rates and the like) are extracted. Trend components are extracted using Empirical Mode Decomposition (EMD). Non-linear regression is performed on the trend components, such as using Support Vector Regression (SVR). And calculating a prediction confidence interval for anomaly early detection. And outputting a device state prediction curve, wherein the abnormal probability changes along with time.
In a certain embodiment, in step S4, the adaptive multi-scale space-time diagram construction process specifically includes:
The optimized image dataset I_opt and the temporal feature F_ST-AMN are read, and a basic spatial relationship graph is constructed, specifically using a modified Delaunay triangulation algorithm.
An adaptive edge weight is introduced: w_ij = exp(-||p_i - p_j||² / (2σ²)) × sim(F_i, F_j), where p_i and p_j represent points i and j in space, F_i and F_j represent the two feature vectors, σ is the standard deviation of the Gaussian kernel function, and sim(F_i, F_j) computes feature similarity using cosine similarity.
Constructing an adaptive time window: W_t = max(minW, α × local_motion_speed), where minW is the minimum time window size, α is the time window adjustment coefficient, and local_motion_speed is the local motion speed. Temporal edges are established within the range W_t, with edge weights w_ij = exp(-|t_i - t_j| / W_t), where t_i and t_j are time points.
Generating a multi-scale graph structure: a short-term graph Gs with time range [t, t+Ws]; a mid-term graph Gm with time range [t, t+Wm], Wm = 5 × Ws; and a long-term graph Gl with time range [t, t+Wl], Wl = 5 × Wm, where Ws, Wm and Wl represent the short-, medium- and long-term time windows, respectively.
Calculating a node importance score: S_i = degree(i) × PageRank(i) × betweenness(i), where degree(i) is the node degree, PageRank(i) its PageRank score, and betweenness(i) its betweenness centrality. A Top-K strategy is used to retain the important nodes in each scale graph, outputting the multi-scale spatio-temporal graph set G_MS = {Gs, Gm, Gl}.
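The adaptive spatial and temporal edge weights above can be sketched as follows; this is a minimal NumPy illustration with assumed default parameters.

```python
import numpy as np

def spatial_edge_weight(p_i, p_j, f_i, f_j, sigma=1.0):
    """w_ij = exp(-||p_i - p_j||^2 / (2*sigma^2)) * cos_sim(F_i, F_j):
    a Gaussian kernel on spatial distance scaled by feature similarity."""
    dist2 = np.sum((np.asarray(p_i, float) - np.asarray(p_j, float)) ** 2)
    cos = float(np.dot(f_i, f_j) /
                (np.linalg.norm(f_i) * np.linalg.norm(f_j)))
    return float(np.exp(-dist2 / (2 * sigma ** 2)) * cos)

def temporal_edge_weight(t_i, t_j, w_t):
    """w_ij = exp(-|t_i - t_j| / W_t) for edges within the adaptive
    window W_t = max(minW, alpha * local_motion_speed)."""
    return float(np.exp(-abs(t_i - t_j) / w_t))
```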
In another embodiment of the present application, in step S4, the process of space-time pattern recognition and anomaly detection specifically includes:
Reading a fusion map feature F_graph, and extracting a space-time mode;
Constructing a multidimensional time-series decomposition: TS_i = Trend_i + Seasonal_i + Residual_i, where Trend_i, Seasonal_i and Residual_i are the trend, seasonal and residual components and TS_i is the i-th time series.
Constructing an adaptive time-series segmentation method: the sequence complexity is calculated as C_i(t) = LZ_complexity(TS_i[t-w : t]), where LZ_complexity is the Lempel-Ziv complexity, a measure of sequence complexity, and w is the time window for the complexity calculation. Adaptive segmentation points: SP_i = {t | C_i(t) > μ_C + k × σ_C}, where μ_C and σ_C are the mean and standard deviation of the complexity and k is the adjustment coefficient of the complexity threshold.
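A sketch of the complexity-driven segmentation. As an assumption, the LZ76 complexity named in the text is replaced here by a simpler LZ78-style phrase count, which behaves analogously (regular sequences score low, irregular ones high).

```python
import statistics

def lz_complexity(seq):
    """Count distinct phrases in a greedy left-to-right parsing of the
    sequence (an LZ78-style stand-in for Lempel-Ziv complexity)."""
    phrases, current = set(), ""
    for sym in seq:
        current += str(sym)
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def adaptive_segment_points(ts, w=8, k=2.0):
    """SP = { t | C(t) > mu_C + k*sigma_C }, where C(t) is the
    complexity of the trailing window ts[t-w : t]."""
    c = [lz_complexity(ts[t - w:t]) for t in range(w, len(ts) + 1)]
    mu, sd = statistics.mean(c), statistics.pstdev(c)
    return [t for t, ct in zip(range(w, len(ts) + 1), c) if ct > mu + k * sd]
```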
Statistical features and shape descriptors at the segment level are then extracted, specifically as follows:
Identifying behavior patterns: a behavior dictionary D = {B_1, B_2, …, B_K} is constructed, where B_1, …, B_K are the behavior patterns in the dictionary. An elastic spatio-temporal pattern matching algorithm is constructed with elastic distance ED(P, Q) = DTW(P, Q) + λ × EMD(P, Q), where DTW is the dynamic time warping distance, EMD is the earth mover's distance, and λ is the weight coefficient in the elastic distance.
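The DTW term of the elastic distance ED = DTW + λ·EMD can be sketched as follows (absolute-difference cost, full warping window; the EMD term is omitted for brevity):

```python
def dtw(p, q):
    """Dynamic time warping distance between two numeric sequences,
    computed with the standard O(n*m) dynamic program."""
    inf = float("inf")
    n, m = len(p), len(q)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - q[j - 1])
            # Best of insertion, deletion, or match/substitution.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Identical sequences score zero, and warping absorbs locally repeated samples, which is the property the elastic matcher relies on.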
Behavior is identified using a K-nearest-neighbor (KNN) classifier. A multi-scale anomaly detector is constructed:
local anomalies are scored with the local outlier factor (LOF) algorithm, global anomalies are detected with a one-class support vector machine (One-Class SVM), and time-series anomalies are predicted by an adaptive autoregressive integrated moving average (ARIMA) model.
Constructing an integrated anomaly scoring system comprising:
Anomaly score: S_anom = w_1 × S_local + w_2 × S_global + w_3 × S_temporal, where S_local, S_global and S_temporal are the local, global and time-series anomaly scores, respectively, and the weights w_1, w_2, w_3 are adaptively adjusted by reinforcement learning.
Constructing an explanatory decision tree, comprising: an anomaly interpretation generator based on the counterfactual is constructed,
Generating a counterfactual sample: X' = X + δ, s.t. f(X') ≠ f(X), where X is the original sample, X' the counterfactual sample and δ the perturbation. A minimal perturbation set is extracted: δ_min = argmin ||δ|| s.t. f(X + δ) ≠ f(X), where argmin selects the perturbation minimizing the norm while still flipping the prediction.
Output abnormality cause and severity assessment, including: behavior recognition result b_rec, abnormality detection result a_det, abnormality interpretation a_exp.
Analyzing multi-subject interactions, comprising: and reading a multi-scale space-time atlas G_MS and a behavior recognition result B_rec.
Extracting an interaction mode and constructing an interaction graph: I_G(t) = (V, E, W), where V is the set of nodes, E the set of edges and W the set of edge weights; w_ij = interaction_strength(i, j, t) denotes the interaction strength between node i and node j at time point t;
Constructing a multi-granularity interaction strength calculation method:
Spatial interaction: SI_ij = gaussian(||p_i - p_j||, σ_s), where gaussian is the Gaussian function, p_i and p_j are points in space, and σ_s is the Gaussian kernel parameter of the spatial interaction;
Behavioral interaction: BI_ij = sim(B_i, B_j), where B_i and B_j are the two behavior feature vectors;
Temporal interaction: TI_ij = corr(TS_i, TS_j), where TS_i and TS_j are two time series and corr() is a correlation function;
Combined weight: W_ij = α × SI_ij + β × BI_ij + γ × TI_ij, where α, β and γ are the weights of the interaction strength calculation;
Setting community similarity: CS(C_t, C_{t+1}) = |C_t ∩ C_{t+1}| / |C_t ∪ C_{t+1}|, i.e. the Jaccard similarity of the communities at times t and t+1;
tracking community lifecycle: birth, growth, merger, division, and death.
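The multi-granularity interaction strength can be sketched as follows; the default weights α, β, γ and kernel parameter σ_s are illustrative assumptions.

```python
import numpy as np

def interaction_strength(p_i, p_j, b_i, b_j, ts_i, ts_j,
                         sigma_s=1.0, alpha=0.4, beta=0.3, gamma=0.3):
    """W_ij = alpha*SI + beta*BI + gamma*TI: Gaussian spatial term,
    cosine behavioural term, and Pearson temporal term."""
    si = float(np.exp(-np.sum((np.asarray(p_i, float)
                               - np.asarray(p_j, float)) ** 2)
                      / (2 * sigma_s ** 2)))
    bi = float(np.dot(b_i, b_j) /
               (np.linalg.norm(b_i) * np.linalg.norm(b_j)))
    ti = float(np.corrcoef(ts_i, ts_j)[0, 1])
    return alpha * si + beta * bi + gamma * ti
```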
Identifying key nodes, including:
Multi-dimensional centrality indices are calculated:
Structural centrality: SC_i = PageRank(i) × betweenness(i);
Behavioral centrality: BC_i = Σ_j sim(B_i, B_j) / |V|, where V is the vertex set of the graph;
Temporal centrality: TC_i = persistence(i) × activity(i), where persistence denotes persistence and activity denotes activity level.
Constructing self-adaptive key node ordering algorithm
Establishing the order relation: i ≻ j if (SC_i, BC_i, TC_i) dominates (SC_j, BC_j, TC_j), where dominance means every centrality index of node i is at least as good as the corresponding index of node j. The Hasse diagram of the nodes is computed using partially-ordered-set theory, and the maximal chain of the Hasse diagram is extracted as the key-node sequence.
Constructing an interaction-effect quantification method based on causal inference: the treatment variable is T = interaction_level(i, j), the interaction level between nodes; the outcome variable is Y = performance_metric(i, j), the performance indicator of the node pair; the causal effect is estimated as E[Y | do(T = t)] - E[Y | do(T = 0)], where E[·] denotes expectation.
The interaction mode I_P, group dynamics G_D, key node list KN and interaction effect matrix IE are output.
Constructing an adaptive trend extraction algorithm: local trend LT_i(t) = median(TS_i[t-w : t+w]); global trend GT_i = LOESS(LT_i), where LOESS denotes locally weighted regression; periodicity P_i = FFT(TS_i - GT_i), where FFT denotes the fast Fourier transform. Trend features are extracted: slope, inflection points and period length.
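The local-trend and periodicity steps can be sketched as follows. As a simplifying assumption, a rolling median stands in for both the median local trend and the LOESS global trend before the FFT.

```python
import numpy as np

def local_trend(ts, w=2):
    """LT(t) = median(ts[t-w : t+w+1]); edges use the available samples."""
    ts = np.asarray(ts, dtype=float)
    return np.array([np.median(ts[max(0, t - w):t + w + 1])
                     for t in range(len(ts))])

def dominant_period(ts):
    """Estimate the dominant period of the detrended series from the
    peak of its FFT magnitude spectrum."""
    ts = np.asarray(ts, dtype=float)
    resid = ts - local_trend(ts, w=len(ts) // 4)
    spec = np.abs(np.fft.rfft(resid))
    spec[0] = 0.0                      # ignore the DC component
    k = int(np.argmax(spec))
    return len(ts) / k if k else float("inf")
```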
Setting a spatio-temporal event: E = (location, time, attribute), where location, time and attribute represent the location, time and attributes of the event; constructing the sequence database SDB = {S_1, S_2, …, S_N}; and constructing an efficient prefix-tree-based pattern growth algorithm.
In another embodiment of the present application, step S3 may further be:
S31. Initializing the Adaptive Chaotic Monte Carlo Forest (ACMCF)
Constructing a heterogeneous decision tree set T = {T_1, T_2, …, T_N}; each tree T_i represents a different OpenCV processing module combination policy;
The image feature vector F = [f_1, f_2, …, f_m] is read.
A splitting strategy driven by a chaotic dynamical system is built for each tree, comprising: setting the chaotic (logistic) map x_{n+1} = r × x_n × (1 - x_n), where r is the control parameter; initializing the chaotic state x_0 ∈ (0, 1) of each node; and setting the splitting condition: split if x_n > threshold;
The prior distribution of the tree is initialized using a Dirichlet-multinomial distribution: P(θ) = Dir(α), where Dir denotes the Dirichlet distribution, θ the parameter vector and α the Dirichlet parameter vector;
Constructing a performance index Q = accuracy × efficiency, where accuracy denotes accuracy and efficiency denotes processing efficiency, with update rule r_new = r_old + η × (Q - Q_target), where r_new and r_old are the new and old chaotic map parameters and η is the chaotic map parameter adjustment coefficient. The initialized ACMCF model M_init is output.
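The chaotic splitting rule can be sketched with the logistic map; the values of x_0, r and the threshold below are illustrative.

```python
def logistic_map(x0, r, n):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n),
    returning the trajectory including the initial state."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def chaotic_split_decisions(x0=0.3, r=3.9, n=10, threshold=0.5):
    """Split a node whenever the chaotic state exceeds the threshold,
    as in the ACMCF splitting condition (parameters illustrative)."""
    return [x > threshold for x in logistic_map(x0, r, n)]
```

For r near 4 the trajectory is chaotic but remains in (0, 1), so the split pattern is deterministic yet highly sensitive to r, which is what the update rule r_new = r_old + η·(Q - Q_target) exploits.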
S32, feature space self-organizing mapping and clustering
Reading the image feature set F_set = {F_1, F_2, …, F_N};
Constructing a self-organizing map (SOM) network, comprising: initializing a two-dimensional grid G of size p × q and assigning each grid node n_ij a random weight vector w_ij.
Training a SOM network, comprising:
For each input F_k: find the best matching unit (BMU): BMU = argmin_ij ||F_k - w_ij||; update the weights of the BMU and its neighborhood: w_ij_new = w_ij_old + α(t) × h(BMU, ij, t) × (F_k - w_ij_old), where α(t) is the learning rate and h() is the neighborhood function.
Clustering using a modified density peak clustering (DPC) algorithm, comprising: calculating the local density ρ_i and distance δ_i; calculating an adaptive cut-off distance d_c = median({d_ij}) × (1 + β × std({d_ij})), where median denotes the median; identifying cluster centers via γ_i = ρ_i × δ_i; assigning the remaining points to their nearest higher-density neighbors; and assigning an OpenCV module combination to each cluster.
Constructing a multi-objective optimization to select the optimal module combination: objective 1, maximize the processing effect; objective 2, minimize the computational complexity. The Pareto-optimal solution set is solved using the fast non-dominated sorting genetic algorithm II (NSGA-II).
The clustering result C = {C_1, C_2, …, C_K} and its corresponding module combinations M = {M_1, M_2, …, M_K} are output.
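The SOM training step of S32 can be sketched as follows: one BMU search followed by a Gaussian-neighborhood weight update. The learning rate and neighborhood radius are illustrative.

```python
import numpy as np

def som_update(weights, f, lr=0.5, radius=1.0):
    """One SOM step on a (p, q, d) weight grid: find the best matching
    unit (BMU) for input f, then pull the BMU and its grid neighbours
    towards f with a Gaussian neighbourhood function."""
    p, q, _ = weights.shape
    dists = np.linalg.norm(weights - f, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), (p, q))
    for i in range(p):
        for j in range(q):
            g = (i - bi) ** 2 + (j - bj) ** 2        # grid distance^2
            h = np.exp(-g / (2 * radius ** 2))       # neighbourhood
            weights[i, j] += lr * h * (f - weights[i, j])
    return (bi, bj), weights
```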
S33. Adaptive Monte Carlo tree search (AMCTS).
Reading a current image feature F_cur, ACMCF model M; initializing a search tree ST, wherein a root node is F_cur; performing AMCTS iterations:
Starting from the root node, child nodes are selected using the UCB1 policy: UCB1 = x̄_j + C × sqrt(2 × ln(N) / n_j), where x̄_j is the average reward of node j, N the total number of visits, and n_j the number of visits to node j; the selection path P is updated.
If the leaf node is reached, creating a new child node; a new feature segmentation threshold is generated using the chaotic map.
The reward R of the new node is estimated using a fast evaluation function: R = w1 × predicted_image_quality + w2 × predicted_processing_speed; the visit counts and average rewards of all nodes on path P are updated.
The exploration parameter C in UCB1 is adjusted dynamically: C_new = C_old × (1 + λ × (R_avg - R_target)), where R_avg is the average reward, R_target the target reward, λ the adjustment coefficient and C_old the previous exploration parameter. The OpenCV module combination corresponding to the optimal path is selected, and the selected module combination M_best is output.
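The UCB1 selection rule can be sketched as follows; the `stats` layout (child id → (total reward, visit count)) is an assumption for illustration.

```python
import math

def ucb1_select(stats, c=1.414):
    """Pick the child maximising UCB1 = x_bar_j + C*sqrt(2*ln(N)/n_j);
    unvisited children are expanded first."""
    total = sum(n for _, n in stats.values())
    best, best_score = None, -math.inf
    for child, (reward, n) in stats.items():
        if n == 0:
            return child                  # infinite exploration bonus
        score = reward / n + c * math.sqrt(2 * math.log(total) / n)
        if score > best_score:
            best, best_score = child, score
    return best
```

With the stats below, node "a" has the higher average reward but "b" wins on the exploration bonus, which is exactly the exploration/exploitation trade-off the text relies on.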
S34, module parameter self-adaptive optimization
Reading the selected module combination M_best and the current image I_cur; defining the parameter search space S = {s_1, s_2, …, s_m}; constructing a parameter optimizer based on the firefly algorithm, comprising: initializing a firefly population P = {p_1, p_2, …, p_n}, each p_i representing a set of parameters; setting the attractiveness function β(r) = β_0 × exp(-γ × r²); setting the objective function f(p) = image_quality_score(I_processed) - λ × processing_time;
for each pair of fireflies (i, j):
If f(p_j) > f(p_i), the position of p_i is updated: p_i_new = p_i + β(r_ij) × (p_j - p_i) + α × (rand() - 0.5), where β(r_ij) is the attractiveness function and rand() generates a uniform random number.
An adaptive step size is applied: α_new = α_0 × (1 - iteration / max_iterations)^μ, where α_0 is the initial step size, iteration the current iteration number, max_iterations the maximum number of iterations and μ an adjustment coefficient; the global optimal solution p_best is updated.
Performing local search on the global optimal solution p_best, and performing refinement search near the global optimal solution p_best by using a pattern search algorithm; and outputting the optimized parameter setting P_opt.
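One firefly move can be sketched as follows, using β(r) = β_0·exp(−γ·r²); all coefficients are illustrative, and the brightness values f_i, f_j stand for the objective function evaluated at the two positions.

```python
import math
import random

def firefly_step(p_i, p_j, f_i, f_j,
                 beta0=1.0, gamma=0.1, alpha=0.2, rng=None):
    """Move firefly i towards a brighter firefly j:
    p_i_new = p_i + beta(r)*(p_j - p_i) + alpha*(rand - 0.5)."""
    if f_j <= f_i:
        return list(p_i)                  # only brighter fireflies attract
    rng = rng or random.Random(0)
    r2 = sum((a - b) ** 2 for a, b in zip(p_i, p_j))
    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(p_i, p_j)]
```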
Reading the processed image I_processed and the ACMCF model M;
Constructing a multidimensional quality assessment function Q = w1 × sharpness + w2 × contrast + w3 × information_retention + w4 × processing_speed, with adaptive weights w_i = softmax(performance_history)_i, where performance_history_i is the performance history and softmax the normalization function. The quality gain ΔQ = Q(I_processed) - Q(I_cur) is calculated, where I_processed is the processed image and I_cur the current image.
Updating ACMCF the model, for each node on the selected path:
Updating the visit count: n_j = n_j + 1; updating the average reward: x̄_j = (x̄_j × (n_j - 1) + ΔQ) / n_j; adjusting the chaotic map parameter: r_new = r_old × (1 + η × sign(ΔQ)), where r_new and r_old are the chaotic map parameters, η the adjustment coefficient and sign() the sign function.
Calculating a node importance score S_j = x̄_j × sqrt(n_j) / N_total; if S_j < threshold, the node is considered for pruning, where N_total is the total number of visits and threshold the pruning threshold.
The worst-performing K% of trees is identified; using the current best tree as a template, a genetic algorithm is applied to generate new trees, and the updated ACMCF model M_updated is output.
Reading an original image I_cur, selecting a module combination M_best, and optimizing a parameter P_opt; processing the image by using the selected OpenCV module; the modules are called in the order specified by M_best, and the parameters of each module are set by using P_opt.
Potential processing artifacts are repaired using morphological operations; the modules used, parameter settings and processing times are recorded; and image quality metrics (peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), etc.) are calculated. The optimized image I_opt and the processing report R are output.
In some embodiments of the present application, the calculation process is specifically:
An adaptive gamma function γ(L) = a × exp(-b × L) + c, where a, b and c are preset parameters;
gamma correction is applied to each pixel p: p_out = 255 × (p / 255)^(1/γ(L));
Calculating the gradient magnitude map of the optical flow field: G = sqrt((∂f_x/∂x)² + (∂f_y/∂y)²), where ∂ denotes the partial derivative and f_x and f_y are the components of the optical flow field in the x and y directions, respectively; the gradient magnitude map is binarized with the adaptive threshold T = mean(G) + k × std(G).
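The flow-gradient thresholding can be sketched as follows (a minimal NumPy illustration; `np.gradient` approximates the partial derivatives):

```python
import numpy as np

def flow_gradient_mask(fx, fy, k=1.0):
    """Binarise the optical-flow field by gradient magnitude:
    G = sqrt((dfx/dx)^2 + (dfy/dy)^2), T = mean(G) + k*std(G).
    fx, fy are the x and y flow components as 2-D arrays."""
    gx = np.gradient(fx, axis=1)           # d(f_x)/dx
    gy = np.gradient(fy, axis=0)           # d(f_y)/dy
    G = np.sqrt(gx ** 2 + gy ** 2)
    T = G.mean() + k * G.std()
    return G > T
```

A uniform flow field produces an empty mask, while motion discontinuities (e.g. a moving person against a static background) exceed the adaptive threshold.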
In some embodiments, the multi-modal image fusion process is specifically:
and (3) performing multi-scale decomposition on each frame of image by applying wavelet transformation, and performing 3-layer decomposition on each mode of image by using a wavelet base to obtain a low-frequency coefficient LL and a high-frequency coefficient { LH, HL, HH } of each mode.
Constructing a feature level fusion algorithm based on a Pulse Coupled Neural Network (PCNN), comprising:
A PCNN network is constructed for the high-frequency coefficients of each scale, introducing a local energy function E(i, j) = Σ(W × |coef(i, j)|) as the PCNN input, where coef denotes the wavelet transform coefficient; an adaptive link strength β(i, j) = exp(-E(i, j) / max(E)) is constructed, where exp is the exponential function and max the maximum; the PCNN is iterated until convergence to obtain the fused high-frequency coefficients; the low-frequency coefficients are fused by weighted averaging; and the fused image is reconstructed by the inverse wavelet transform.
A superpixel adjacency graph G is constructed, with edge weights given by the similarity of adjacent superpixel features. An adaptive growth threshold T(s_i) = μ(s_i) + α × σ(s_i) is constructed, where μ and σ are the local mean and standard deviation and α is the coefficient controlling the growth-threshold sensitivity.
Inter-region similarity measure: S(r_i, r_j) = exp(-||f_i - f_j||² / σ²); boundary penalty term: B(r_i, r_j) = exp(-||∇I(r_i, r_j)||² / σ²); graph-cut energy function: E = Σ(1 - S(r_i, r_j)) + λ × Σ B(r_i, r_j); global segmentation quality index: Q = Σ(w_i × d_i), where w_i is the region-area weight, d_i the region fractal dimension, and f_i and f_j are feature vectors.
Local structure tensor: T = [Σ(I_x²), Σ(I_x I_y); Σ(I_x I_y), Σ(I_y²)]; the eigenvalues λ_1, λ_2 and eigenvectors v_1, v_2 of the tensor are computed; local anisotropy metric: A = (λ_1 - λ_2) / (λ_1 + λ_2); anisotropy weight: w(x, y) = 1 + α × A(x, y); weighted HOG features: HOG_w = Σ(w(x, y) × M(x, y) × bin(θ(x, y))); fusion feature: F_HOG-LST = [HOG_w, λ_1, λ_2, A]. Adaptive color moments: ACM = [μ_S, σ_S, skew_S, kurt_S]. Scale weight function: w_i = log(1 + i) / Σ_j log(1 + j). Phase consistency metric: PC(u, v) = |Σ A(u, v) × exp(j × φ(u, v))| / Σ A(u, v). Inter-layer similarity measure: S(l_i, l_j) = exp(-||f_i - f_j||² / σ²). Here I_x and I_y are the image gradients in the x and y directions, M(x, y) is the gradient magnitude, θ(x, y) the gradient direction, μ_S, σ_S, skew_S and kurt_S the adaptive color moment statistics, A(u, v) the magnitude of the DCT coefficients, and φ(u, v) their phase.
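The structure-tensor anisotropy used to weight the HOG features can be sketched as follows; as a simplification, the tensor entries are accumulated over the whole patch rather than per pixel.

```python
import numpy as np

def local_anisotropy(img):
    """Accumulate the structure tensor
    T = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]] over a patch and
    return its eigenvalues and the anisotropy A = (l1-l2)/(l1+l2)."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)              # gradients in y and x
    T = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    l2, l1 = np.linalg.eigvalsh(T)         # ascending: l2 <= l1
    denom = l1 + l2
    A = (l1 - l2) / denom if denom > 0 else 0.0
    return l1, l2, float(A)
```

A patch with a single dominant gradient direction (an edge) yields A near 1, a flat or isotropic patch yields A near 0, so w(x, y) = 1 + α·A emphasises strongly oriented structures.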
According to another aspect of the present application, there is also provided an OpenCV-based substation safety operation monitoring system, including:
at least one processor; and
A memory communicatively coupled to at least one of the processors; wherein,
The memory stores instructions executable by the at least one processor, the instructions being executed by the processor to implement the OpenCV-based substation safety operation monitoring method according to any of the above aspects.
The preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to the specific details of the above embodiments, and various equivalent changes can be made to the technical solution of the present invention within the scope of the technical concept of the present invention, and all the equivalent changes belong to the protection scope of the present invention.
Claims (11)
1. The substation safety operation monitoring method based on OpenCV is characterized by being realized through at least two OpenCV modules, and comprises the following steps:
Step S1, acquiring video images of a substation operation area through a pre-configured video camera, extracting video frames, and carrying out semantic segmentation on the images to obtain a segmented image set;
Step S2, extracting image features for each segmented image in the segmented image set, wherein the image features comprise content features, quality features and associated features;
step S3, based on image characteristics, calling a preconfigured self-adaptive Monte Carlo forest module to select a corresponding OpenCV processing module for each segmented image, and processing to obtain an optimized image data set;
S4, based on the optimized image data set, calling a preconfigured self-adaptive multi-scale space-time diagram module to process, and acquiring space-time data, wherein the space-time data comprises a space-time behavior mode, interaction information and trend analysis data; based on the spatiotemporal data, abnormal behavior and abnormal state are identified and output.
2. The OpenCV-based substation security operation monitoring method of claim 1, wherein the step S1 is specifically:
S11, acquiring image data of visible light, infrared and ultraviolet bands through a preconfigured multispectral camera, and adding a time stamp to each frame of image to obtain a video frame data set after time synchronization;
Step S12, acquiring a time-synchronized video frame data set, sequentially denoising each video frame by adopting a non-local mean value method, correcting and optimizing the image contrast of the video frame by adopting a self-adaptive gamma method, and acquiring a primarily processed video frame set;
Step S13, reading a preliminarily processed video frame set, calling a self-adaptive optical flow method to calculate the motion between adjacent video frames, and eliminating repeated redundant video frames according to a calculation result to obtain a screened video frame set;
Step S14, reading the filtered video frame set, dividing each video frame by adopting a graph cutting method, optimizing the dividing boundary, obtaining a divided video frame data set, carrying out dividing quality evaluation, and outputting a divided image set.
3. The OpenCV-based substation safety operation monitoring method of claim 1, wherein step S2 further comprises:
Step S21, reading the segmented image set and extracting content features for each segmented image, including: multi-scale local binary pattern features, fused histogram-of-oriented-gradients and local-structure-tensor features, color-moment and color-contrast distribution features, and Gabor filter bank features;
Step S22, reading the segmented image set and extracting quality features for each segmented image, including: a no-reference image quality index based on natural scene statistics, a local quality map of multi-scale structural similarity, and phase-consistency-based frequency-domain quality features;
Step S23, reading the segmented image set and extracting association features for each segmented image, including: region-relation features, spatio-temporal cube features, and fused optical-flow and trajectory features;
Step S24, invoking a preconfigured feature fusion and dimensionality-reduction module to fuse and reduce the content, quality and association features, and outputting the final image feature vector set, i.e. the fused graph features.
4. The OpenCV-based substation safety operation monitoring method of claim 1, wherein step S3 further comprises:
Step S31, reading the image features, initializing the preconfigured adaptive Monte Carlo forest module, and constructing and training a self-organizing map network; performing self-organizing mapping and clustering on the image features and outputting the clustering result;
Step S32, based on the image features and the adaptive Monte Carlo forest module, performing adaptive Monte Carlo tree search and parameter optimization to obtain optimized parameters and an OpenCV processing-module combination strategy;
Step S33, constructing an image evaluation function, evaluating image-processing quality and computing the quality gain; outputting the optimized parameters if the quality gain exceeds a threshold;
Step S34, processing the images with the optimized adaptive Monte Carlo forest module to obtain the multi-scale optimized image data set.
5. The OpenCV-based substation safety operation monitoring method of claim 1, wherein step S4 further comprises:
Step S41, reading the optimized image data set, extracting time-series features and constructing the adaptive multi-scale spatio-temporal graph module; constructing a basic spatial relation graph with an improved Delaunay triangulation algorithm, constructing adaptive time windows, and establishing temporal edges to generate short-term, medium-term and long-term multi-scale graph structures; performing graph compression with retention of important nodes, and outputting a multi-scale spatio-temporal atlas;
Step S42, reading the fused graph features, and performing spatio-temporal pattern recognition and anomaly detection; extracting spatio-temporal patterns with a multidimensional time-series decomposition algorithm and an adaptive time-series segmentation method; constructing a behavior dictionary and performing behavior recognition with an elastic spatio-temporal pattern-matching algorithm; constructing multi-scale anomaly detectors covering local, global and time-series anomaly detection; and outputting behavior recognition results, anomaly detection results and anomaly explanations through an integrated anomaly scoring system;
Step S43, performing multi-agent interaction analysis based on the multi-scale spatio-temporal atlas and the behavior recognition results; constructing an interaction graph and a multi-granularity interaction-strength calculation method; performing group-dynamics analysis with a community detection algorithm and an incremental community-evolution tracking algorithm; constructing multidimensional centrality indices to identify key nodes; constructing an interaction-effect propagation model, quantifying interaction effects with causal inference, and outputting interaction patterns, group dynamics, a key-node list and an interaction-effect matrix;
Step S44, reading the time-series features, fused graph features and group-dynamics data, and performing trend analysis and prediction; applying an improved wavelet transform for multi-scale trend decomposition; constructing a spatio-temporal sequential pattern-mining algorithm and association-rule analysis; constructing an integrated framework for multivariate time-series prediction; constructing a Monte Carlo multi-scenario simulation algorithm for risk assessment; and outputting a trend report, prediction results and a risk assessment report.
6. The OpenCV-based substation safety operation monitoring method according to claim 2, wherein in step S12, denoising each video frame with the non-local means method and correcting and optimizing image contrast with the adaptive gamma method to obtain the preliminarily processed video frame set further comprises:
Step S121, calculating the local covariance matrix of each pixel neighborhood, searching for similar blocks with principal component analysis and extracting the principal noise direction; when searching for similar blocks, dynamically adjusting the shape of the search window according to the principal noise direction;
Step S122, introducing an adaptive weighting factor into the weighted average, dynamically adjusted according to block similarity and noise intensity; outputting the denoised video frame data set;
Step S123, reading the denoised video frame data set, calculating the histogram of each frame and estimating its overall brightness L; constructing the adaptive gamma function γ(L) = a·exp(−b·L) + c, where a, b and c are preset parameters and exp(·) denotes the exponential function; applying gamma correction to each pixel; and outputting the contrast-optimized video frame data set, i.e. the preliminarily processed video frame set.
7. The OpenCV-based substation safety operation monitoring method of claim 6, wherein step S13 specifically comprises:
Step S131, computing the dense optical-flow field F between consecutive frames of the contrast-optimized video frame data set with the Farnebäck optical-flow algorithm;
Step S132, computing the gradient magnitude map G of the optical-flow field and binarizing it with the adaptive threshold T = mean(G) + k·std(G), where mean(·) is the average, std(·) the standard deviation, and k a coefficient controlling threshold sensitivity;
Step S133, computing the proportion of non-zero pixels in the binarized map, and discarding the current frame if the proportion is below a preset threshold; outputting the video frame data set with redundant frames removed, i.e. the filtered video frame set.
8. The OpenCV-based substation safety operation monitoring method of claim 6, wherein step S14 specifically comprises:
Step S141, reading the filtered video frame set and performing preliminary segmentation with a superpixel-based adaptive region-growing algorithm, specifically:
initializing the superpixel set with the simple linear iterative clustering (SLIC) algorithm and computing the mean color and texture features of each superpixel; constructing a superpixel adjacency graph whose edge weights are the feature similarities of adjacent superpixels; constructing the adaptive growth threshold T(sᵢ) = μ(sᵢ) + α·σ(sᵢ), where μ and σ are the local mean and standard deviation and α is a coefficient controlling growth sensitivity; performing region growing on the adjacency graph under the adaptive threshold and merging similar regions; outputting the preliminary segmentation result;
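The adaptive-threshold growing of step S141 can be illustrated on a toy adjacency graph; the SLIC initialization is assumed done, so `feats` (one mean feature value per superpixel) and the greedy union-find merging are illustrative simplifications:

```python
import numpy as np

def adaptive_threshold(neighbor_dists, alpha=0.5):
    """T(s_i) = mu(s_i) + alpha*sigma(s_i) over the feature distances
    between superpixel s_i and its neighbours."""
    d = np.asarray(neighbor_dists, dtype=float)
    return d.mean() + alpha * d.std()

def grow_regions(feats, adjacency, alpha=0.5):
    """Merge adjacent superpixels whose feature distance falls below the
    local adaptive threshold; returns a region label per superpixel."""
    n = len(feats)
    labels = np.arange(n)                  # union-find parents
    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]  # path halving
            i = labels[i]
        return i
    for i in range(n):
        dists = [abs(feats[i] - feats[j]) for j in adjacency[i]]
        if not dists:
            continue
        T = adaptive_threshold(dists, alpha)
        for j in adjacency[i]:
            if abs(feats[i] - feats[j]) < T:
                labels[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])
```

Two similar superpixels (features 0.0 and 0.1) merge, while a distant one (5.0) stays separate.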
Step S142, reading the preliminary segmentation result and optimizing the segmentation boundary with an improved graph-cut method, specifically:
constructing the cut graph, whose nodes are the regions of the preliminary segmentation and whose edges connect adjacent regions; constructing an inter-region similarity measure, a boundary penalty term and the graph-cut energy function; iteratively minimizing the energy function with the α-expansion algorithm; obtaining the optimized segmentation result;
Step S143, reading the optimized segmentation result and repairing small regions and holes with morphological post-processing, specifically:
removing regions smaller than a threshold with area opening; filling internal holes with opening by reconstruction; smoothing region boundaries with conditional dilation; outputting the repaired segmentation result;
Step S144, reading the repaired segmentation result and constructing a fractal-dimension-based segmentation quality index, specifically:
computing the fractal dimension of each segmented region with the box-counting method; computing the global segmentation quality index Q = Σᵢ wᵢ·dᵢ, where wᵢ is the area weight of region i and dᵢ its fractal dimension; marking regions to be optimized wherever Q falls below a threshold T_Q; obtaining the quality evaluation result and the marks of regions to be optimized;
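Box counting and the quality index Q of step S144 can be sketched directly; the box sizes are illustrative, and the area weights are normalized to sum to 1 (an assumption, since the claim does not fix the normalization):

```python
import numpy as np

def box_counting_dimension(region_mask, sizes=(2, 4, 8, 16)):
    """Estimate the fractal dimension D of a binary region by box
    counting: count boxes of side e containing foreground, then fit
    log N(e) ~ -D log e."""
    m = np.asarray(region_mask) > 0
    counts = []
    for e in sizes:
        h, w = m.shape
        H, W = -(-h // e) * e, -(-w // e) * e   # pad up to multiples of e
        p = np.zeros((H, W), bool)
        p[:h, :w] = m
        boxes = p.reshape(H // e, e, W // e, e).any(axis=(1, 3))
        counts.append(boxes.sum())
    D = -np.polyfit(np.log(np.asarray(sizes, float)), np.log(counts), 1)[0]
    return D

def global_quality(dims, areas):
    """Q = sum(w_i * d_i) with area weights w_i normalized to sum to 1."""
    w = np.asarray(areas, float)
    w = w / w.sum()
    return float((w * np.asarray(dims, float)).sum())
```

A filled square yields D ≈ 2 and a one-pixel-wide line D ≈ 1, as expected for these shapes.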
Step S145, reading the regions to be optimized and the repaired segmentation result, and applying an active contour model to the regions to be optimized, specifically:
initializing the contour as the current segmentation boundary; constructing an external force field based on gradient vector flow; introducing a curvature constraint term to prevent excessive deformation of the contour; iteratively updating the contour until convergence or a maximum number of iterations; outputting the optimized region contours;
Step S146, reading the repaired segmentation result and the optimized region contours, and integrating all processing results, specifically:
replacing the corresponding regions of the repaired segmentation result with the optimized region contours; generating the final segmented image set.
9. The OpenCV-based substation safety operation monitoring method according to claim 3, wherein in step S21 the content features are extracted as follows:
Step S211, reading the segmented image set and performing Gaussian pyramid decomposition on each segmented image to obtain a multi-scale image set; at each scale, computing the local variance of each pixel, constructing an adaptive threshold function and computing the local binary pattern features; selecting the most discriminative scale combination with an information-gain criterion; outputting the multi-scale local binary pattern feature vector;
Step S212, reading the segmented image set and computing the gradient magnitude and direction of each image; constructing the local structure tensor and computing its eigenvalues and eigenvectors; constructing a local anisotropy measure and introducing anisotropy weights into the histogram-of-oriented-gradients computation; computing the weighted oriented-gradient histogram features to obtain the fused features;
Step S213, reading the segmented image set and converting each image to the LAB color space; computing a global color-contrast map and obtaining salient regions by adaptive segmentation with Otsu's algorithm; computing the adaptive color moments of the salient regions; constructing the color-contrast distribution histogram;
Step S214, reading the segmented image set and constructing a direction-adaptive function and an adaptive Gabor filter; automatically selecting the filter parameters with a particle swarm optimization algorithm; applying the optimized Gabor filter bank to each image to obtain a response atlas; computing the statistical features of each response map.
10. The OpenCV-based substation safety operation monitoring method of claim 9, wherein extracting the quality features in step S22 comprises:
Step S221, reading the segmented image set and constructing a no-reference image quality index based on natural scene statistics, specifically:
performing local mean subtraction and variance normalization on each image; computing the generalized Gaussian distribution parameters of the normalized image; extracting pairwise-product statistical features of the normalized image; computing local second-order statistics, i.e. the generalized Gaussian distribution parameters of the variance field; constructing and outputting the feature vector;
Step S222, reading the segmented image set and constructing a local quality map based on multi-scale structural similarity, specifically:
performing multi-scale decomposition of each image to obtain an image decomposition set; computing a local structural-similarity map at each scale; constructing a scale-weighting function and computing the weighted multi-scale local structural-similarity map; segmenting this map with an adaptive threshold to obtain the local quality map; extracting its statistical features, including area ratio, mean and standard deviation;
Step S223, reading the segmented image set and constructing phase-consistency-based frequency-domain quality features, specifically:
applying the discrete cosine transform to each image; computing the magnitude and phase of the DCT coefficients; constructing the phase-consistency measure and phase-consistency map; computing the directional entropy and scale entropy of the phase-consistency map; extracting its statistical moments.
11. An OpenCV-based substation safety operation monitoring system, characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to implement the OpenCV-based substation safety operation monitoring method of any one of claims 1 to 10.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410948694.5A CN118485973B (en) | 2024-07-16 | 2024-07-16 | Substation safety operation monitoring method and system based on OpenCV |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118485973A true CN118485973A (en) | 2024-08-13 |
| CN118485973B CN118485973B (en) | 2024-09-06 |
Family
ID=92191762
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410948694.5A Active CN118485973B (en) | 2024-07-16 | 2024-07-16 | Substation safety operation monitoring method and system based on OpenCV |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118485973B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018172234A2 (en) * | 2017-03-20 | 2018-09-27 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Advanced video data stream extraction and multi-resolution video transmission |
| CN113408087A (en) * | 2021-05-25 | 2021-09-17 | 国网湖北省电力有限公司检修公司 | Substation inspection method based on cloud side system and video intelligent analysis |
| CN115049806A (en) * | 2022-06-21 | 2022-09-13 | 北京理工大学 | Face augmented reality calibration method and device based on Monte Carlo tree search |
| CN117788946A (en) * | 2024-01-03 | 2024-03-29 | 国网信息通信产业集团有限公司 | Image processing methods, devices, electronic equipment and storage media |
Non-Patent Citations (1)
| Title |
|---|
| K. Ullah et al., "Comparison of Person Tracking Algorithms Using Overhead View Implemented in OpenCV", 2019 9th Annual Information Technology, Electromechanical Engineering and Microelectronics Conference (IEMECON), 21 October 2019, pp. 284–289 * |
Cited By (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119130127A (en) * | 2024-08-21 | 2024-12-13 | 中国长江电力股份有限公司 | A real-time prediction method for hydropower station risks based on causal attention mechanism and graph learning |
| CN119342168A (en) * | 2024-09-14 | 2025-01-21 | 国网山西省电力公司电力科学研究院 | An automatic and sensorless transmission control system for substation image data |
| CN118822318A (en) * | 2024-09-14 | 2024-10-22 | 湖南联合智为信息技术有限公司 | A method, system and device for generating emergency plans |
| CN118822318B (en) * | 2024-09-14 | 2025-02-07 | 湖南联合智为信息技术有限公司 | A method, system and device for generating emergency plans |
| CN119295667A (en) * | 2024-10-10 | 2025-01-10 | 中国电信股份有限公司技术创新中心 | Model building method, building device, equipment, storage medium and program product |
| CN119067225B (en) * | 2024-11-06 | 2025-05-16 | 齐鲁工业大学(山东省科学院) | Industrial control anomaly interpretation method and system based on difference of generated counterfactual samples |
| CN119067225A (en) * | 2024-11-06 | 2024-12-03 | 齐鲁工业大学(山东省科学院) | Industrial control anomaly explanation method and system based on generative counterfactual sample differences |
| CN119600343A (en) * | 2024-11-18 | 2025-03-11 | 北京医院 | Method and device for predicting the recovery status of neck scar after thyroidectomy based on CV |
| CN119376464A (en) * | 2024-12-25 | 2025-01-28 | 深圳市南霸科技有限公司 | A thermal management control method and system for energy storage power supply |
| CN120014284A (en) * | 2024-12-31 | 2025-05-16 | 中南林业科技大学 | Method and system for identifying damage at steel-wood composite beam-column joints using machine vision |
| CN120014284B (en) * | 2024-12-31 | 2025-09-12 | 中南林业科技大学 | Method and system for identifying steel-wood composite beam column node damage by applying machine vision |
| CN119762559A (en) * | 2025-03-10 | 2025-04-04 | 浙江大学 | A multimodal image registration method based on automodal correlation and cross-modal estimation |
| CN119762559B (en) * | 2025-03-10 | 2025-05-16 | 浙江大学 | A multimodal image registration method based on automodal correlation and cross-modal estimation |
| CN119888864A (en) * | 2025-03-20 | 2025-04-25 | 山东省水利勘测设计院有限公司 | Dam personnel behavior monitoring and early warning method, system, device and medium |
| CN120105312A (en) * | 2025-05-06 | 2025-06-06 | 交通运输部天津水运工程科学研究所 | Remote online monitoring method and system based on machine vision and artificial intelligence |
| CN120216947A (en) * | 2025-05-22 | 2025-06-27 | 西安伯肯氢电科技有限公司 | Sensor-based overvoltage fault identification system for multiplexed circuits |
| CN120216947B (en) * | 2025-05-22 | 2025-07-18 | 西安伯肯氢电科技有限公司 | Sensor-based overvoltage fault identification system for multiplexed circuits |
| CN120385382A (en) * | 2025-06-18 | 2025-07-29 | 河北天翼红外科技有限公司 | A drift self-calibration method for multimodal sensors in complex industrial environments |
| CN120635599A (en) * | 2025-08-12 | 2025-09-12 | 北京博数智源人工智能科技有限公司 | Wind turbine respirator anomaly detection method and system based on image segmentation |
| CN120635599B (en) * | 2025-08-12 | 2025-10-14 | 北京博数智源人工智能科技有限公司 | Wind turbine respirator anomaly detection method and system based on image segmentation |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118485973B (en) | 2024-09-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN118485973B (en) | Substation safety operation monitoring method and system based on OpenCV | |
| CN118628983B (en) | A cable identification method and system based on image processing | |
| Kim et al. | Illumination-invariant background subtraction: Comparative review, models, and prospects | |
| CN119091234B (en) | Intelligent decision-making and response method and system based on data analysis | |
| KR101433472B1 (en) | Apparatus, method and computer readable recording medium for detecting, recognizing and tracking an object based on a situation recognition | |
| CN118135460B (en) | Intelligent building site safety monitoring method and device based on machine learning | |
| CN118982847B (en) | Low-power monitoring wake-up method, device and equipment based on human figure recognition | |
| CN117974658B (en) | Cable anomaly identification method and system based on image processing | |
| CN119049133B (en) | Garbage classification throwing behavior identification method and system based on AI algorithm | |
| CN118379664B (en) | Video identification method and system based on artificial intelligence | |
| CN118799924B (en) | A bird-repelling control method based on environmental factor optimization | |
| CN116993738B (en) | Video quality evaluation method and system based on deep learning | |
| CN116629465A (en) | Smart power grids video monitoring and risk prediction response system | |
| Zhang et al. | Robust correlation filter learning with continuously weighted dynamic response for UAV visual tracking | |
| Guo et al. | Partially-sparse restricted boltzmann machine for background modeling and subtraction | |
| CN116611022B (en) | Intelligent campus education big data fusion method and platform | |
| CN119942767A (en) | Intelligent construction site management method and system based on photoelectric and video fence | |
| CN119810401B (en) | Transmission line abnormal area recognition method and system based on infrared thermal image feature fusion | |
| CN116503367A (en) | Transmission line insulator defect detection method, system, equipment and medium | |
| CN118629337B (en) | Control method of display module, electronic equipment and chip | |
| KR102819231B1 (en) | Device and Method for Anomaly Detection Using Unsupervised Learning | |
| Molleda et al. | Towards autonomic computing in machine vision applications: techniques and strategies for in-line 3D reconstruction in harsh industrial environments | |
| CN119516163B (en) | Target detection optimization and acceleration method based on diffraction neural network | |
| Zhao | Application analysis of improved LeNet5 model in library management | |
| CN120141807A (en) | A performance detection method and system for liquid crystal display module |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||