CN118506287A - Regional security monitoring method, system, readable storage medium and computer - Google Patents
Regional security monitoring method, system, readable storage medium and computer
- Publication number
- CN118506287A (application CN202410954531.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- monitoring
- face
- model
- base station
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Alarm Systems (AREA)
Abstract
The present invention provides a regional security monitoring method, system, readable storage medium, and computer. The method comprises: controlling a data acquisition device, under determined device parameters, to collect data from a monitoring area to obtain gridded monitoring data; inputting labeled and preprocessed face image data into a face recognition model for training to obtain an optimized face recognition model; using the optimized model to analyze the gridded monitoring data in real time and extract face feature data, and performing target detection on the monitoring area according to the parameters of each base station to calculate target location information; and fusing the gridded monitoring data, face feature data, and target location information, then performing security monitoring of the area according to the fusion result. By applying fusion technology, the invention automates data processing and decision-making, greatly improving monitoring efficiency, reducing manual intervention and response time, and lowering labor costs and safety risks, thereby maximizing cost-effectiveness.
Description
Technical Field
The present invention relates to the field of regional monitoring technology, and in particular to a regional security monitoring method, system, readable storage medium, and computer.
Background Art
As security requirements continue to rise, single-technology solutions such as video surveillance, face recognition, or positioning alone can no longer meet the demand for high-precision, high-efficiency security monitoring in complex scenarios. Related technologies have the following deficiencies:
Complexity of data preprocessing: the diversity of data sources makes preprocessing complex, with large volumes of information to process, poor real-time performance, and high cost. Low-level fusion requires strong error-correction capability over raw sensor information, which increases the difficulty of implementation.
Shortcomings of complex entity-association methods: in cross-language and cross-domain data fusion, unstructured data generally does not explicitly contain attribute names, and its entity attributes do not necessarily appear in the structured data, so association methods fall short in applicability and accuracy. Conflict-resolution techniques depend on the availability of actual reference data; the lack of domain-specific, targeted reference data narrows their practicality.
Limitations of fusion strategies: statistics-based, model-based, and rule-based fusion strategies all have limitations and may not suit every data type and application scenario. Weight allocation and credibility assessment are often based on experience or heuristics, lacking unified evaluation standards and a scientific basis.
Limitations of machine learning algorithms: machine learning algorithms can be affected by outliers and noise during training, degrading model performance. For high-dimensional, nonlinear data, they may struggle to find effective mappings, producing inaccurate fusion results.
How to combine multiple technologies effectively and process data in depth so that their strengths complement one another is the technical challenge at hand.
Summary of the Invention
In view of this, the purpose of the present invention is to provide a regional security monitoring method, system, readable storage medium, and computer, so as to at least remedy the deficiencies described above.
The present invention provides a regional security monitoring method, comprising:
determining device parameters of a data acquisition device based on area information of a monitoring area, and controlling the data acquisition device, under the device parameters, to collect data from the monitoring area to obtain corresponding gridded monitoring data;
collecting a number of face image samples, labeling and preprocessing the face image data, building a face recognition model using a deep learning algorithm, and inputting the labeled and preprocessed face image data into the face recognition model for training to obtain an optimized face recognition model;
using the optimized face recognition model to analyze the gridded monitoring data in real time to extract corresponding face feature data, determining base station parameters of each base station according to the area information, and performing target detection on the monitoring area according to each base station's parameters to calculate corresponding target location information;
fusing the gridded monitoring data, the face feature data, and the target location information, and performing security monitoring of the monitoring area according to the data fusion result.
Further, the step of determining device parameters of a data acquisition device based on the area information of the monitoring area, and controlling the data acquisition device to acquire images of the monitoring area under the device parameters to obtain corresponding gridded monitoring data, includes:
determining the number, type, and installation locations of the data acquisition devices based on the area information of the monitoring area, so that the data acquisition devices cover the monitoring area;
collecting data from the monitoring area with the data acquisition devices, and converting the format of the collected data to obtain the corresponding gridded monitoring data.
Further, the step of inputting the labeled and preprocessed face image data into the face recognition model for training to obtain an optimized face recognition model includes:
constructing a face occlusion model and a profile-face model, and feeding the labeled and preprocessed face image data through the face occlusion model and the profile-face model in sequence, so as to assess face quality and filter out profile-face images, obtaining preliminary image data;
vectorizing the face material in the preliminary image data, comparing the vectorization result against a face database, identifying the corresponding face recognition result based on a face confidence threshold, and training the face recognition model according to the face recognition result to obtain the optimized face recognition model.
Further, the steps of determining base station parameters of each base station according to the area information, and performing target detection on the monitoring area according to each base station's parameters to calculate the corresponding target location information, include:
assigning tags to the targets in the monitoring area, and calculating, for the measurement signal each base station transmits toward a target, the time of flight and angle of arrival between that base station and the target's tag in the monitoring area;
calculating the distance between each base station and the tag based on the propagation speed of the measurement signal, the time of flight, and the angle of arrival, and determining the target location information of the target based on the distances between each base station and the tag.
Further, the steps of fusing the gridded monitoring data, the face feature data, and the target location information, and performing security monitoring of the monitoring area according to the data fusion result, include:
preprocessing the gridded monitoring data, the face feature data, and the target location information, and performing feature extraction on the preprocessing results to obtain corresponding feature data;
fusing the feature data, performing decision classification on the fused features to obtain corresponding fused data, and performing security monitoring of the monitoring area according to the fused data.
The present invention also provides a regional security monitoring system, comprising:
a data acquisition module, configured to determine device parameters of a data acquisition device based on area information of a monitoring area, and control the data acquisition device, under the device parameters, to collect data from the monitoring area to obtain corresponding gridded monitoring data;
a model optimization module, configured to collect a number of face image samples, label and preprocess the face image data, build a face recognition model using a deep learning algorithm, and input the labeled and preprocessed face image data into the face recognition model for training to obtain an optimized face recognition model;
a location information calculation module, configured to use the optimized face recognition model to analyze the gridded monitoring data in real time to extract corresponding face feature data, determine base station parameters of each base station according to the area information, and perform target detection on the monitoring area according to each base station's parameters to calculate corresponding target location information;
a security monitoring module, configured to fuse the gridded monitoring data, the face feature data, and the target location information, and perform security monitoring of the monitoring area according to the data fusion result.
Further, the data acquisition module includes:
a parameter determination unit, configured to determine the number, type, and installation locations of the data acquisition devices based on the area information of the monitoring area, so that the data acquisition devices cover the monitoring area;
a data acquisition unit, configured to collect data from the monitoring area with the data acquisition devices, and convert the format of the collected data to obtain the corresponding gridded monitoring data.
Further, the model optimization module includes:
an image processing unit, configured to construct a face occlusion model and a profile-face model, and feed the labeled and preprocessed face image data through the two models in sequence, so as to assess face quality and filter out profile-face images, obtaining preliminary image data;
a model optimization unit, configured to vectorize the face material in the preliminary image data, compare the vectorization result against a face database, identify the corresponding face recognition result based on a face confidence threshold, and train the face recognition model according to the face recognition result to obtain the optimized face recognition model.
Further, the location information calculation module includes:
a tag assignment unit, configured to assign tags to the targets in the monitoring area, and calculate, for the measurement signal each base station transmits toward a target, the time of flight and angle of arrival between that base station and the target's tag;
a location information calculation unit, configured to calculate the distance between each base station and the tag based on the propagation speed of the measurement signal, the time of flight, and the angle of arrival, and determine the target's location information based on the distances between each base station and the tag.
Further, the security monitoring module includes:
a data preprocessing unit, configured to preprocess the gridded monitoring data, the face feature data, and the target location information, and perform feature extraction on the preprocessing results to obtain corresponding feature data;
a security monitoring unit, configured to fuse the feature data, perform decision classification on the fused features to obtain corresponding fused data, and perform security monitoring of the monitoring area according to the fused data.
The present invention also provides a readable storage medium on which a computer program is stored; when executed by a processor, the program implements the regional security monitoring method described above.
The present invention also provides a computer comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the regional security monitoring method described above when executing the computer program.
In the regional security monitoring method, system, readable storage medium, and computer of the present invention, device parameters of the data acquisition equipment are determined from the area information of the monitoring area, and that equipment collects data from the area to obtain gridded monitoring data, providing comprehensive visual information for subsequent fusion. The face recognition model analyzes the gridded monitoring data in real time to extract face feature data, so that target identities can be accurately recognized. Base station parameters are determined from the area information, and target detection is performed on the monitoring area according to those parameters to calculate target location information and precisely track target positions. The gridded monitoring data, face feature data, and target location information are then fused, and security monitoring of the area is performed according to the fusion result. By applying fusion technology, data processing and decision-making are automated and made intelligent, greatly improving monitoring efficiency, reducing manual intervention and response time, and lowering labor costs and safety risks, thereby maximizing cost-effectiveness.
Brief Description of the Drawings
FIG. 1 is a flow chart of the regional security monitoring method in the first embodiment of the present invention;
FIG. 2 is a detailed flow chart of step S101 in FIG. 1;
FIG. 3 is a detailed flow chart of step S102 in FIG. 1;
FIG. 4 is a detailed flow chart of step S103 in FIG. 1;
FIG. 5 is a detailed flow chart of step S104 in FIG. 1;
FIG. 6 is a block diagram of the regional security monitoring system in the second embodiment of the present invention;
FIG. 7 is a block diagram of the computer in the third embodiment of the present invention.
The following detailed description will further illustrate the present invention in conjunction with the above drawings.
Detailed Description
To facilitate understanding of the present invention, it is described more fully below with reference to the accompanying drawings, in which several embodiments are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this invention belongs. The terms used in this specification are for the purpose of describing specific embodiments only and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Embodiment 1
Referring to FIG. 1, a regional security monitoring method in the first embodiment of the present invention is shown. The method specifically includes steps S101 to S104:
S101: determine device parameters of a data acquisition device based on area information of a monitoring area, and control the data acquisition device, under the device parameters, to collect data from the monitoring area to obtain corresponding gridded monitoring data.
Further, referring to FIG. 2, step S101 specifically includes steps S1011 to S1012:
S1011: determine the number, type, and installation locations of the data acquisition devices based on the area information of the monitoring area, so that the data acquisition devices cover the monitoring area;
S1012: collect data from the monitoring area with the data acquisition devices, and convert the format of the collected data to obtain the corresponding gridded monitoring data.
In a specific implementation, the number, type, and installation locations of the required data acquisition devices (cameras, in this embodiment) are determined according to the size, shape, and key areas of the monitoring region, ensuring that the cameras' fields of view cover the entire monitoring area while avoiding excessive overlap. Each camera is then associated with the grid cells obtained after segmentation.
Specifically, the cameras need to identify face material effectively; the optimal recognition distance is within 10 meters. The site is segmented into a uniform grid of 10 × 10 m cells, and the resulting cells are labeled, e.g. 1-1, 1-2, 1-3. The purpose of each monitoring area and the requirements for the monitoring picture are made explicit: camera resolution of 1920 × 1080 or above, field of view kept within the most effective 10 m recognition range, supplementary infrared equipment in dimly lit locations, video equipment mounted at a height of 3.5 meters, and so on.
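The cell labeling described above (10 × 10 m cells numbered 1-1, 1-2, …) can be sketched as a small coordinate-to-label mapping. This is an illustrative helper, not part of the patent; the function name and the corner-origin coordinate convention are assumptions:

```python
import math

def grid_label(x_m: float, y_m: float, cell_size_m: float = 10.0) -> str:
    """Map a site coordinate (in meters, origin at one corner of the site)
    to a grid-cell label such as "1-1" or "1-2", using 10 x 10 m cells."""
    row = math.floor(y_m / cell_size_m) + 1  # 1-based row index
    col = math.floor(x_m / cell_size_m) + 1  # 1-based column index
    return f"{row}-{col}"
```

For example, a point 15 m along and 3 m up falls in cell "1-2", while a point at (0, 10) starts cell "2-1".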
A suitable camera type is chosen according to the monitoring area and requirements. For example, dome or integrated infrared cameras suit short-distance monitoring, while bullet or PTZ dome cameras may be more suitable for medium and long distances. The number of cameras required is calculated from the area and shape of the monitoring region and each camera's field of view, and the best installation position for each camera is determined from the region's shape and the camera's coverage. This usually involves precise measurement and planning of installation positions, so that the cameras' fields of view cover the entire monitoring area while avoiding excessive overlap.
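One hedged way to sketch the camera-count arithmetic above: model each camera's coverage as a circular sector with the 10 m effective recognition range, and divide the site area by that coverage with a small overlap margin. The sector model, the 90° field of view, and the overlap factor are all illustrative assumptions, not values from the patent:

```python
import math

def cameras_needed(area_m2: float, effective_range_m: float = 10.0,
                   fov_deg: float = 90.0, overlap: float = 1.2) -> int:
    """Rough camera-count estimate: each camera covers a circular sector of
    radius `effective_range_m` and angle `fov_deg`; `overlap` > 1 leaves
    margin so that adjacent fields of view can overlap slightly."""
    sector_area = math.radians(fov_deg) / 2.0 * effective_range_m ** 2
    return math.ceil(area_m2 * overlap / sector_area)
```

A 300 m² area with these defaults would need about five cameras; in practice the region's shape and blind spots dominate the final placement.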
Further, according to the monitoring area and each camera's optimal coverage, the set of grid-cell labels each camera is responsible for is bound to that camera. The cameras are then installed at the planned positions, which includes fixing the brackets and connecting power and data cables; during installation, the stability and safety of each camera must be ensured to avoid accidents. After installation, the monitoring system is configured and tested: setting camera parameters (resolution, frame rate, etc.), adjusting viewing angles and focal lengths, and verifying system stability and picture quality.
Specifically, after camera deployment, a video-stream transmission network is set up so that the cameras can transmit video streams to the central server stably and efficiently. A streaming media platform service is configured on the central server so that devices register per the national standard GB/T 28181, and the received streams are decoded, format-converted, and stored, then output as streams in various protocols (RTSP/RTMP/HLS/FLV/WebRTC, etc.) for playback on front-end terminals. The video acquisition pipeline is:
① Monitoring terminal: mainly a video device, or a device providing companion video storage, whose main role is to supply the video stream (GB 28181 cameras must register with the streaming server).
② Streaming media platform: mainly handles SIP signaling authentication and provides camera services such as GB 28181 registration, deregistration, heartbeat and keep-alive, and stream open/close, plus push/pull streaming, live broadcast, and on-demand services over the various protocols.
③ Display terminal: mainly the client applications (mobile phones, computers, video walls, etc.) that receive the transcoded video stream for display and playback.
S102: collect a number of face image samples, label and preprocess the face image data, build a face recognition model using a deep learning algorithm, and input the labeled and preprocessed face image data into the face recognition model for training to obtain an optimized face recognition model.
Further, referring to FIG. 3, step S102 specifically includes steps S1021 to S1022:
S1021: construct a face occlusion model and a profile-face model, and feed the labeled and preprocessed face image data through the face occlusion model and the profile-face model in sequence, so as to assess face quality and filter out profile-face images, obtaining preliminary image data;
S1022: vectorize the face material in the preliminary image data, compare the vectorization result against a face database, identify the corresponding face recognition result based on a face confidence threshold, and train the face recognition model according to the face recognition result to obtain the optimized face recognition model.
In a specific implementation, a large amount of face image data is collected, labeled, and preprocessed, and a deep learning algorithm (such as a convolutional neural network) is used to build the face recognition model, which is trained on the labeled data. To improve recognition accuracy, the face occlusion model and profile-face model are also trained, and non-frontal or occluded faces are filtered out.
The trained face recognition model is deployed on the central server to analyze the video streams (the gridded monitoring data) obtained above in real time and extract face features, which are compared against the pre-stored face database to authenticate identities.
S103: using the optimized face recognition model to analyze the gridded monitoring data in real time so as to extract corresponding facial feature data, determining base station parameters of each base station according to the area information, and performing target detection on the monitoring area according to the base station parameters so as to calculate corresponding target position information;

Further, referring to FIG. 4, step S103 specifically includes steps S1031 to S1032:

S1031: assigning a tag to each target in the monitoring area, and calculating, for the measurement signal transmitted by each base station toward the target, the time of flight and the angle of arrival between that base station and the tag of the target in the monitoring area;

S1032: calculating the distance between each base station and the tag based on the propagation speed of the measurement signal, the time of flight and the angle of arrival, and determining the target position information of the target based on the distances between the base stations and the tag.
In a specific implementation, the face recognition process is as follows:

① Video stream access: the monitoring terminal or the video streaming media platform provides video streams over the RTSP protocol.

② Video processing unit: the data processing unit accesses the standard video stream and extracts video frame images according to the configured recognition-speed parameters and extraction rules.

③ Face occlusion judgment: the occlusion model obtained from the earlier occlusion training assesses face quality and computes the proportion of the face occluded by obstacles; if this proportion exceeds the threshold, the subsequent face recognition step is not performed.

④ Profile-face judgment: the profile-face model built in the earlier stage assesses face quality and decides whether the image is a profile-face picture (the profile-face rule is whether the eye distance meets the threshold requirement). Profile-face pictures do not proceed further; only frontal-face pictures continue to the face recognition logic.

Eye-distance judgment: the facial key points are obtained and the distance between the two eyes is computed; if it is smaller than the set proportion of the face size, the image is filtered out.
The calculation formula is as follows:

|AB| = √((x2 − x1)² + (y2 − y1)²)

where x1, y1 denote the pixel coordinates of the left eye relative to the picture; x2, y2 denote the pixel coordinates of the right eye relative to the picture; and |AB| denotes the distance between the left and right eyes.
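As an illustrative sketch of the eye-distance filter described above (the 0.25 face-width ratio threshold is an assumed value, not specified in the text):

```python
import math

def eye_distance(left_eye, right_eye):
    """Euclidean distance |AB| between the two eye landmarks (pixel coordinates)."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    return math.hypot(x2 - x1, y2 - y1)

def passes_eye_distance_filter(left_eye, right_eye, face_width, ratio=0.25):
    """Keep the image only if the eye distance is at least `ratio` times the
    face width; otherwise treat it as a profile face and filter it out.
    The ratio value is an assumption for illustration."""
    return eye_distance(left_eye, right_eye) >= ratio * face_width

# Hypothetical landmark coordinates
print(passes_eye_distance_filter((40, 50), (80, 52), 100))  # roughly frontal -> True
print(passes_eye_distance_filter((40, 50), (50, 51), 100))  # near-profile   -> False
```

An image rejected here simply exits the pipeline before the vectorization and comparison step.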
⑤ Face recognition: the face quality check is applied first (det_score >= threshold); passing the threshold means the picture contains a face material that meets the requirements, while pictures failing the quality requirement exit the subsequent judgment logic. For qualifying face pictures, the face material is vectorized and compared against the data in the face database; the comparison result is the face confidence, which is tested against the face comparison threshold, and the result is output once the comparison is complete.
The structure of a successful face comparison is as follows:

{
    "content": "Face recognition",          # algorithm type name
    "data":
    {
        "object": "Face recognition",       # algorithm type name
        "type_id": 109,                     # algorithm type code
        "det_score": 0.8930664,             # probability that a face is present
        "face_res":                         # recognized face information
        {
            "user_id": "6f4466c4753a48049348c0ccf2f85c8d",  # face ID
            "confidence": 0.7826316709641026,               # face confidence
            "name": "朱宇"                                   # face name
        }
    }
}
If the comparison fails, the person is a stranger; the structure is as follows:

{
    "content": "Face recognition",          # algorithm type name
    "data":
    {
        "object": "Face recognition",       # algorithm type name
        "type_id": 109,                     # algorithm type code
        "det_score": 0.8230664,             # probability that a face is present
        "face_res":                         # recognized face information
        {
            "user_id": "unknown",           # face ID
            "confidence": 0.4826316709641026,  # face confidence
            "name": "Stranger"              # face name
        }
    }
}
⑥ Result output: the user_id and type_id in the face recognition result are passed to the business service for the next operation.
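As an illustrative sketch, a downstream business service might consume the result structures above as follows (the 0.6 decision threshold is an assumed value, not specified in the text):

```python
def handle_face_result(result, confidence_threshold=0.6):
    """Extract user_id and type_id from a recognition result and decide whether
    the person is a registered user or a stranger. Field names follow the
    structures shown above; the threshold is an assumption for illustration."""
    data = result["data"]
    face = data["face_res"]
    is_known = face["user_id"] != "unknown" and face["confidence"] >= confidence_threshold
    return {"user_id": face["user_id"], "type_id": data["type_id"], "known": is_known}

# Example input mirroring the successful-comparison structure above
match = {
    "content": "Face recognition",
    "data": {
        "object": "Face recognition",
        "type_id": 109,
        "det_score": 0.8930664,
        "face_res": {
            "user_id": "6f4466c4753a48049348c0ccf2f85c8d",
            "confidence": 0.7826316709641026,
            "name": "朱宇",
        },
    },
}
print(handle_face_result(match))
```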
Specifically, according to the size and shape of the monitoring area, the base stations are deployed as follows:

Long corridor-type passages: one positioning base station is deployed every 10 meters along the passage, forming a two-dimensional base station group. Rectangular room-type areas: base station coverage is planned on a 5-meter radius, forming a three-dimensional base station group.

Further, each target that needs to be located (such as a bid evaluation expert) is equipped with a UWB tag, and the position of the target is calculated by measuring the time of flight (ToF) and angle of arrival (AoA) of the UWB signal between the base stations and the tag.
ToF (Time of Flight) method:

1. Signal transmission and reception:

The UWB base station (also called an anchor) transmits a UWB signal.

Upon receiving the signal, the target tag (the mobile device) records the reception time and immediately sends a response signal back to the base station.

The base station records the time at which it receives the response signal.

2. Time measurement:

The round-trip time (RTT) of the signal between the base station and the tag is calculated: RTT equals the time the response signal returns to the base station minus the time the signal was sent from the base station.

Since the RTT is the total time for the signal to travel there and back, it is divided by 2 to obtain the one-way time of flight (TOF).

3. Distance calculation:

Knowing the propagation speed of the signal in air (close to the speed of light c), the distance from the base station to the tag is calculated as distance = TOF × c.

4. Position determination:

If three or more base stations with known positions have communicated with the tag, trilateration (or multilateration) can be used to determine the tag's position.

A circle is drawn around each base station with the measured base-station-to-tag distance as its radius; the intersection of these circles is the position of the tag.
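The ToF ranging and trilateration steps above can be sketched as follows (2-D case; anchor coordinates and distances are hypothetical, and the circle intersection is solved by subtracting the first circle equation from the other two, giving a 2×2 linear system):

```python
import math

C = 299_792_458.0  # signal propagation speed, approximately the speed of light (m/s)

def tof_distance(rtt_seconds):
    """Distance = one-way time of flight * c, where TOF = RTT / 2."""
    return (rtt_seconds / 2.0) * C

def trilaterate(a0, a1, a2, d0, d1, d2):
    """2-D trilateration from three anchors with known positions and measured
    distances, solved by Cramer's rule on the linearized system
    2*(a_i - a_0) . p = d0^2 - d_i^2 + |a_i|^2 - |a_0|^2."""
    (ax0, ay0), (ax1, ay1), (ax2, ay2) = a0, a1, a2
    A11, A12 = 2.0 * (ax1 - ax0), 2.0 * (ay1 - ay0)
    A21, A22 = 2.0 * (ax2 - ax0), 2.0 * (ay2 - ay0)
    b1 = d0**2 - d1**2 + ax1**2 + ay1**2 - ax0**2 - ay0**2
    b2 = d0**2 - d2**2 + ax2**2 + ay2**2 - ax0**2 - ay0**2
    det = A11 * A22 - A12 * A21
    if abs(det) < 1e-12:
        raise ValueError("anchors are collinear; position is not determined")
    x = (b1 * A22 - A12 * b2) / det
    y = (A11 * b2 - b1 * A21) / det
    return x, y

# Hypothetical deployment: three anchors, tag actually at (4, 3)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.hypot(4.0 - ax, 3.0 - ay) for ax, ay in anchors]
print(trilaterate(*anchors, *dists))  # recovers approximately (4.0, 3.0)
```

With more than three anchors, the same linearized system becomes overdetermined and is typically solved by (weighted) least squares, as mentioned later in the optimization section.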
AoA (Angle of Arrival) method:

1. Signal reception and angle measurement:

Multiple antennas or an antenna array are configured on the base station to receive the signal sent by the tag.

By measuring the time difference or phase difference with which the signal arrives at the different antennas, the angle at which the signal arrives at the base station can be calculated.

2. Angle determination:

A specific algorithm (such as phase interferometry or beamforming) is used to determine the precise angle at which the signal arrives at the base station.

3. Position determination:

If the precise positions of two or more base stations and their respective measured angles of arrival are known, the position of the tag can be determined by solving the geometry of the intersecting bearings.

It should be noted that the AoA method generally requires at least two non-collinear base stations to determine the tag's position accurately.
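The two-station bearing intersection can be sketched as follows (2-D case; station positions and angles are hypothetical, with angles measured from the positive x-axis):

```python
import math

def aoa_locate(p1, theta1, p2, theta2):
    """Intersect two bearing rays p1 + t1*(cos t1a, sin t1a) and
    p2 + t2*(cos t2a, sin t2a) by solving the 2x2 linear system
    t1*d1 - t2*d2 = p2 - p1; requires non-parallel bearings."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; tag position is not determined")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det  # Cramer's rule for t1
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical tag at (4, 3) seen from stations at (0, 0) and (10, 0)
theta1 = math.atan2(3, 4)        # bearing measured at station 1
theta2 = math.atan2(3, 4 - 10)   # bearing measured at station 2
print(aoa_locate((0, 0), theta1, (10, 0), theta2))  # recovers approximately (4.0, 3.0)
```

If the two stations and the tag are collinear, the bearings are parallel and the system is singular, which is why at least two non-collinear stations are required.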
Specifically, UWB positioning correction and optimization algorithms generally comprise a set of techniques for reducing positioning error and improving positioning accuracy. These may involve processing the raw measurement data, improving the communication between the base stations and the target tags, and optimizing the position calculation method.

1. Data preprocessing algorithms

Filtering algorithms, such as the Kalman filter and the extended Kalman filter (EKF), are used to smooth the measurement data and reduce the impact of noise and random error.

Error correction algorithms, based on statistical models or machine learning, are used to identify and correct systematic errors.

2. Communication optimization algorithms

Signal enhancement algorithms, such as beamforming and multiple-input multiple-output (MIMO) techniques, are used to improve signal transmission quality and coverage.

Time synchronization algorithms ensure time synchronization among base stations and between base stations and tags, reducing positioning errors caused by clock differences.

3. Position calculation optimization algorithms

Multi-base-station positioning algorithms, such as least squares and weighted least squares, improve positioning accuracy by combining the measurements of multiple base stations.

Model-based methods use, for example, geometric relationships and signal propagation models to perform the position calculation.
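A minimal sketch of the filtering step listed under data preprocessing, here a one-dimensional Kalman filter with a constant-position model applied to noisy range readings (all noise parameters are assumed values):

```python
def kalman_1d(measurements, process_var=1e-4, meas_var=0.04):
    """Scalar Kalman filter with a constant-position state model.
    process_var and meas_var are assumed noise variances."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    smoothed = [x]
    for z in measurements[1:]:
        p = p + process_var            # predict: state assumed constant
        k = p / (p + meas_var)         # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1.0 - k) * p
        smoothed.append(x)
    return smoothed

# Hypothetical noisy UWB range readings around a true distance of 5.0 m
readings = [5.2, 4.9, 5.1, 4.8, 5.05, 5.0, 4.95]
print(kalman_1d(readings))  # estimates settle near 5.0
```

An extended Kalman filter follows the same predict/update structure but linearizes a nonlinear measurement model at each step.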
Optimization process:

The optimization process of the UWB positioning system generally includes the following steps:

1. Data collection: collect a large amount of positioning data from real scenarios, including base station and tag coordinates, measured distances, timestamps, etc.

2. Data analysis: analyze the collected data to identify the main sources and characteristics of positioning error.

3. Algorithm selection and design: based on the analysis results, select suitable correction and optimization algorithms and make the necessary design adjustments.

4. Experimental verification: verify the algorithms in real scenarios and evaluate their effect and performance.

5. Iterative optimization: iterate on the algorithms based on the experimental feedback until the expected performance targets are met.

Corresponding compensation:

In a UWB positioning system, compensation is used to reduce or eliminate positioning errors caused by various factors. The compensation methods vary with the system design and application scenario, but generally include the following:

1. Hardware compensation: reduce hardware error by improving the hardware design and using higher-precision measurement equipment.

2. Software compensation: process, correct and compensate the measurement data algorithmically to reduce errors introduced by the software algorithms and data processing.

3. Environmental compensation: compensate for errors under specific environmental conditions, such as multipath effects and non-line-of-sight (NLOS) errors; this may require building environment models and using machine learning for prediction and compensation.

4. Clock compensation: for time synchronization errors, a clock compensation algorithm can correct the time differences among base stations and between base stations and tags.
S104: fusing the gridded monitoring data, the facial feature data and the target position information, and performing security monitoring on the monitoring area according to the data fusion result.

Further, referring to FIG. 5, step S104 specifically includes steps S1041 to S1042:

S1041: preprocessing the gridded monitoring data, the facial feature data and the target position information, and performing feature extraction on the preprocessing result to obtain corresponding feature data;

S1042: performing feature fusion on the feature data, performing decision classification on the fused features to obtain corresponding fused data, and performing security monitoring on the monitoring area according to the fused data.

In a specific implementation, the gridded monitoring data, facial feature data and target position information described above are received, and the received data are time-synchronized to ensure temporal consistency of data from different sources.

A suitable data fusion algorithm is designed to fuse the video stream data, the identity authentication results and the positioning data; the weights and credibilities of the data sources, as well as the correlations among them, are configured to improve the accuracy of the fusion result.
I. The data fusion steps are as follows:

1. Data preprocessing:

Data cleaning: remove duplicate, erroneous or invalid data.

Data conversion: convert data from different sources into a unified format and structure for subsequent processing.

2. Data fusion:

Feature extraction: extract useful features, which may include shape, color, texture, position, etc., from the preprocessed data.

Feature fusion: fuse the features from the different data sources into a unified feature representation, for example by simple weighted averaging, principal component analysis or neural networks.

Decision fusion: on the basis of feature fusion, make decisions or classifications from the fused features, typically using one or more classifiers or regression models.

3. Weight and credibility assignment:

Weight assignment: the data quality and importance of the different sources may differ, so different weights are assigned to them. The weights can be determined by experience, expert knowledge or data quality assessment, and during fusion they adjust each source's contribution to the final result.

Credibility assessment: credibility reflects the accuracy and reliability of the data. It can be assessed in various ways, such as statistical methods based on historical data or model-based methods. In data fusion, credibility can be used to filter out low-quality data or to adjust the weight assignment during fusion.

4. Result output and evaluation:

The fused data or decision results are output in an appropriate form for use by users or subsequent processing.

Fusion evaluation: the performance of the fusion algorithm is evaluated by comparing the fusion results with actual or reference results; evaluation metrics may include accuracy, completeness and consistency.

II. Specific handling of weights and credibility

1. Weight handling:

In the feature fusion stage, each data source's features can be assigned a weight reflecting that source's importance or contribution.

In the decision fusion stage, the output of each classifier or regression model can be assigned a weight reflecting that model's performance or reliability on the specific task.

Weight values can be adjusted automatically by a learning algorithm (such as a weight-update algorithm in machine learning) or set manually from expert knowledge or experience.

2. Credibility handling:

In the data preprocessing stage, data credibility can be estimated preliminarily through data quality assessment.

During fusion, credibility can be adjusted dynamically according to the data's actual performance (such as classification accuracy or prediction error).

Low-credibility data can have its weight reduced, be filtered out, or receive special handling.
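The weight and credibility handling above can be sketched as a credibility-gated weighted average (the per-source weights, credibilities and the 0.3 filter floor are all assumed values for illustration):

```python
def fuse_scores(sources, credibility_floor=0.3):
    """Fuse per-source decision scores into one score.
    Each source is (score, weight, credibility). Sources whose credibility
    falls below `credibility_floor` are filtered out; the rest contribute
    with effective weight = weight * credibility. The floor is an assumption."""
    kept = [(s, w * c) for (s, w, c) in sources if c >= credibility_floor]
    if not kept:
        raise ValueError("no source meets the credibility floor")
    total = sum(wc for _, wc in kept)
    return sum(s * wc for s, wc in kept) / total

# Hypothetical sources: video anomaly score, face confidence, UWB proximity score
sources = [
    (0.90, 0.5, 0.8),  # video stream analytics
    (0.78, 0.3, 0.9),  # face recognition confidence
    (0.60, 0.2, 0.2),  # UWB score with low credibility, filtered out
]
print(fuse_scores(sources))
```

Scaling each weight by credibility realizes both mechanisms described above: low-credibility sources are down-weighted continuously and dropped entirely below the floor.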
Specifically, the fusion results are pushed through an API service to the corresponding security monitoring and early warning module. Warnings are issued for the alarm information generated from the fusion results; the warning methods include sending in-system messages and highlighting with flashing on the digital twin platform.

The security monitoring and early warning module is implemented as follows:

1. Abnormal behavior detection and identification

Machine learning algorithms analyze and learn from the fused data to identify abnormal behavior patterns, monitor target behavior in real time, and trigger the alarm mechanism when abnormal behavior is detected.

2. Alarm mechanism design and implementation

A suitable alarm mechanism is designed, including audible alarms, SMS notifications and email notifications; the alarm method and level are chosen according to the severity and urgency of the abnormal behavior.

3. Alarm information processing and feedback

Triggered alarm information is recorded and processed, including saving the alarm video and generating an alarm report, and is promptly fed back to the relevant personnel for handling and response, ensuring that security incidents are dealt with in a timely manner.
In summary, the regional security monitoring method in the above embodiment of the present invention determines the device parameters of the data acquisition devices from the area information of the monitoring area and uses those devices to collect data on the monitoring area, obtaining gridded monitoring data that provide comprehensive visual information for subsequent data fusion. A face recognition model analyzes the gridded monitoring data in real time to extract facial feature data so that target identities can be identified accurately. Base station parameters are determined from the area information, and target detection is performed on the monitoring area according to these parameters to calculate target position information and track target positions precisely. The gridded monitoring data, facial feature data and target position information are then fused, and security monitoring of the monitoring area is performed according to the fusion result. Fusion technology enables automated, intelligent data processing and decision-making, greatly improving monitoring efficiency, reducing manual intervention and response time, and lowering labor costs and security risks, thereby maximizing cost-effectiveness.
Embodiment 2

In another aspect, the present invention further provides a regional security monitoring system. Referring to FIG. 6, which shows the regional security monitoring system in the second embodiment of the present invention, the system includes:

a data acquisition module 11, configured to determine device parameters of the data acquisition devices based on the area information of the monitoring area, and to control the data acquisition devices under those parameters to collect data on the monitoring area, so as to obtain corresponding gridded monitoring data;

Further, the data acquisition module 11 includes:

a parameter determination unit, configured to determine the number, type and installation positions of the data acquisition devices based on the area information of the monitoring area, so that the devices cover the monitoring area;

a data acquisition unit, configured to collect data on the monitoring area using the data acquisition devices and to format-convert the collected data, so as to obtain the corresponding gridded monitoring data.
a model optimization module 12, configured to collect a number of face images, annotate and preprocess the face image data, construct a face recognition model using a deep learning algorithm, and input the annotated and preprocessed face image data into the face recognition model for training, so as to obtain an optimized face recognition model;

Further, the model optimization module 12 includes:

an image processing unit, configured to construct a face occlusion model and a profile-face model, and to input the annotated and preprocessed face image data into the face occlusion model and the profile-face model in sequence, so as to assess face quality and filter out profile-face images to obtain preliminary image data;

a model optimization unit, configured to vectorize the face material of the preliminary image data, compare the vectorization result against the face database, identify the corresponding face recognition result based on a face confidence threshold, and train the face recognition model according to the face recognition result, so as to obtain the optimized face recognition model.
a position information calculation module 13, configured to analyze the gridded monitoring data in real time using the optimized face recognition model so as to extract corresponding facial feature data, determine base station parameters of each base station according to the area information, and perform target detection on the monitoring area according to the base station parameters so as to calculate corresponding target position information;

Further, the position information calculation module 13 includes:

a tag assignment unit, configured to assign tags to targets in the monitoring area, and to calculate, for the measurement signal transmitted by each base station toward the target, the time of flight and the angle of arrival between that base station and the tag of the target in the monitoring area;

a position information calculation unit, configured to calculate the distance between each base station and the tag based on the propagation speed of the measurement signal, the time of flight and the angle of arrival, and to determine the target position information of the target based on the distances between the base stations and the tag.
a security monitoring module 14, configured to fuse the gridded monitoring data, the facial feature data and the target position information, and to perform security monitoring on the monitoring area according to the data fusion result.

Further, the security monitoring module 14 includes:

a data preprocessing unit, configured to preprocess the gridded monitoring data, the facial feature data and the target position information, and to perform feature extraction on the preprocessing result so as to obtain corresponding feature data;

a security monitoring unit, configured to perform feature fusion on the feature data, perform decision classification on the fused features to obtain corresponding fused data, and perform security monitoring on the monitoring area according to the fused data.

The functions and operation steps implemented by the above modules and units when executed are substantially the same as those of the above method embodiment and are not repeated here.
The regional security monitoring system provided in this embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiment; for brevity, where the system embodiment is silent, reference may be made to the corresponding content of the foregoing method embodiment.

Embodiment 3

The present invention further provides a computer. Referring to FIG. 7, which shows the computer in the third embodiment of the present invention, it includes a memory 10, a processor 20, and a computer program 30 stored in the memory 10 and executable on the processor 20; when the processor 20 executes the computer program 30, the regional security monitoring method described above is implemented.
The memory 10 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disk, optical disc, etc. In some embodiments, the memory 10 may be an internal storage unit of the computer, such as its hard disk. In other embodiments, the memory 10 may be an external storage device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card. Further, the memory 10 may include both the internal storage unit of the computer and an external storage device. The memory 10 may be used not only to store application software installed on the computer and various types of data, but also to temporarily store data that has been output or is about to be output.

In some embodiments, the processor 20 may be an Electronic Control Unit (ECU, also known as a vehicle computer), a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor or another data processing chip, used to run the program code stored in the memory 10 or to process data, for example to execute an access restriction program.

It should be noted that the structure shown in FIG. 7 does not constitute a limitation on the computer; in other embodiments, the computer may include fewer or more components than shown, combine certain components, or arrange the components differently.
本发明实施例还提出一种可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如上述的区域安全监控方法。The embodiment of the present invention further provides a readable storage medium on which a computer program is stored. When the program is executed by a processor, the regional security monitoring method as described above is implemented.
本领域技术人员可以理解,在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,“计算机可读介质”可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。Those skilled in the art will appreciate that the logic and/or steps represented in the flowchart or otherwise described herein, for example, may be considered as an ordered list of executable instructions for implementing logical functions, and may be specifically implemented in any computer-readable medium for use by an instruction execution system, device or apparatus (such as a computer-based system, a system including a processor, or other system that can fetch instructions from an instruction execution system, device or apparatus and execute instructions), or in conjunction with such instruction execution systems, devices or apparatuses. For purposes of this specification, "computer-readable medium" may be any device that can contain, store, communicate, propagate or transmit a program for use by an instruction execution system, device or apparatus, or in conjunction with such instruction execution systems, devices or apparatuses.
More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical-fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for example by optically scanning the paper or other medium and then compiling, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the embodiments above, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented with any of the following technologies known in the art, or a combination thereof: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these features have been described; however, any combination of these technical features that involves no contradiction should be considered within the scope of this specification.
The embodiments above express only several implementations of the present application; although their descriptions are relatively specific and detailed, they are not to be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within its protection scope. Accordingly, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410954531.8A CN118506287A (en) | 2024-07-17 | 2024-07-17 | Regional security monitoring method, system, readable storage medium and computer |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118506287A true CN118506287A (en) | 2024-08-16 |
Family
ID=92233439
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410954531.8A Pending CN118506287A (en) | 2024-07-17 | 2024-07-17 | Regional security monitoring method, system, readable storage medium and computer |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118506287A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119846557A (en) * | 2025-03-17 | 2025-04-18 | 广州晟能电子科技有限公司 | Production scene personnel supervision protection system based on AI |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106897716A (en) * | 2017-04-27 | 2017-06-27 | 广东工业大学 | A kind of dormitory safety monitoring system and method |
| CN110428522A (en) * | 2019-07-24 | 2019-11-08 | 青岛联合创智科技有限公司 | A kind of intelligent safety and defence system of wisdom new city |
| WO2020168960A1 (en) * | 2019-02-19 | 2020-08-27 | 杭州海康威视数字技术股份有限公司 | Video analysis method and apparatus |
| CN113160509A (en) * | 2021-04-21 | 2021-07-23 | 广州珠江住房租赁发展投资有限公司 | Risk sensing method and system suitable for communities and construction sites |
| CN114155571A (en) * | 2021-10-29 | 2022-03-08 | 南京烽火星空通信发展有限公司 | Method for mixed extraction of pedestrians and human faces in video |
| CN114758384A (en) * | 2022-03-29 | 2022-07-15 | 奇酷软件(深圳)有限公司 | Face detection method, device, equipment and storage medium |
| CN114936799A (en) * | 2022-06-16 | 2022-08-23 | 黄冈强源电力设计有限公司 | Risk identification method and system in cement fiberboard construction process |
| CN115272800A (en) * | 2022-08-03 | 2022-11-01 | 深圳市杉川机器人有限公司 | Training method of face recognition model, storage medium and intelligent door lock |
| CN115909435A (en) * | 2022-09-09 | 2023-04-04 | 中国平安人寿保险股份有限公司 | Face detection method, face detection device, electronic equipment and storage medium |
| WO2024011926A1 (en) * | 2022-07-11 | 2024-01-18 | 卡奥斯工业智能研究院(青岛)有限公司 | 5g-based security monitoring system and method, electronic device, and storage medium |
| CN118015672A (en) * | 2023-12-26 | 2024-05-10 | 深圳市优必选科技股份有限公司 | Human face single target tracking method, device, electronic device and storage medium |
Worldwide applications: 2024-07-17 — CN CN202410954531.8A, patent CN118506287A, status pending
Non-Patent Citations (1)
| Title |
|---|
| 晏鹏程; 张一鸣; 童光红; 黄锋; 欧先锋: "Face recognition method for video surveillance based on convolutional neural networks", Journal of Chengdu Technological University, No. 01, 15 March 2020 (2020-03-15) * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10282617B2 (en) | Methods and systems for performing sleeping object detection and tracking in video analytics | |
| US9704393B2 (en) | Integrated intelligent server based system and method/systems adapted to facilitate fail-safe integration and/or optimized utilization of various sensory inputs | |
| WO2022105243A1 (en) | Event detection method, apparatus, electronic device, and storage medium | |
| CN110969644B (en) | Personnel track tracking method, device and system | |
| WO2012095867A2 (en) | An integrated intelligent server based system and method/systems adapted to facilitate fail-safe integration and /or optimized utilization of various sensory inputs | |
| CN105120217A (en) | Intelligent camera motion detection alarm system and method based on big data analysis and user feedback | |
| US20200005025A1 (en) | Method, apparatus, device and system for processing commodity identification and storage medium | |
| KR102163208B1 (en) | Hybrid unmanned traffic surveillance system, and method thereof | |
| CN102752574A (en) | Video monitoring system and method | |
| CN109544870A (en) | Alarm decision method and intelligent monitor system for intelligent monitor system | |
| CN111553947A (en) | Target object positioning method and device | |
| CN118506287A (en) | Regional security monitoring method, system, readable storage medium and computer | |
| CN112767569A (en) | Intelligent building personnel monitoring system and method based on face recognition | |
| CN104751639A (en) | Big-data-based video structured license plate recognition system and method | |
| CN112149551A (en) | Safety helmet identification method based on embedded equipment and deep learning | |
| CN115393681A (en) | Target fusion method and device, electronic equipment and storage medium | |
| CN113516102A (en) | Deep learning parabolic behavior detection method based on video | |
| CN104038775B (en) | A kind of channel information recognition methods and device | |
| CN114332925A (en) | Method, system, device and computer-readable storage medium for detecting pets in elevators | |
| CN118918537A (en) | Passenger flow statistics method and device based on umbrella-opening pedestrian and computer equipment | |
| CN118015559A (en) | Object identification method and device, electronic equipment and storage medium | |
| CN109558839A (en) | Adaptive face identification method and the equipment and system for realizing this method | |
| CN116612358A (en) | Data processing method, related device, equipment and storage medium | |
| CN116485508A (en) | Mobile intelligent evaluation system and intelligent evaluation supervision method based on cloud technology | |
| Yan et al. | [Retracted] Defect Point Location Method of Civil Bridge Based on Internet of Things Wireless Communication |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| RJ01 | Rejection of invention patent application after publication | | Application publication date: 20240816 |