
CN111144180B - Risk detection method and system for monitoring video - Google Patents


Info

Publication number
CN111144180B
Authority
CN
China
Prior art keywords
preset
face
concave
video data
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811311956.8A
Other languages
Chinese (zh)
Other versions
CN111144180A (en)
Inventor
李东声
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tendyron Corp
Original Assignee
Tendyron Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tendyron Corp
Priority to CN201811311956.8A
Publication of CN111144180A
Application granted
Publication of CN111144180B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F19/00 Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
    • G07F19/20 Automatic teller machines [ATMs]
    • G07F19/209 Monitoring, auditing or diagnose of functioning of ATMs

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Finance (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a risk detection method and system for surveillance video. In the method, a monitoring device determines a user to be analyzed; calculates the matching degree between the background features to be analyzed and a preset background collaborative model; calculates the time interval between a first moment and a second moment; and calculates a first and a second concave-convex degree matching value for the face in each grid region. A first comparison result is generated, and a preset risk is determined to exist, if any of the following holds: the matching degree is lower than a preset background threshold; the time interval is greater than a preset time threshold; among the N1 results obtained by judging in turn whether each first concave-convex degree matching value falls within a preset threshold range, the number M1 of out-of-range results satisfies M1/N1 greater than a preset threshold; or, among the N2 results obtained likewise for the second concave-convex degree matching values, M2/N2 is greater than the preset threshold.

Description

Risk detection method and system for monitoring video
Technical Field
The invention relates to the field of video monitoring, in particular to a risk detection method and system for a monitoring video.
Background
An existing automatic teller machine (ATM) is generally installed in a self-service bank; after a bank card is inserted, counter services such as withdrawal, deposit and transfer can be performed at the ATM. Because self-service banks and ATMs are public, convenient and environmentally particular, criminal activity targeting them has increased in recent years.
However, the conventional ATM video monitoring system mainly records video and performs after-the-fact evidence collection once an incident has occurred, in order to settle disputes and solve cases. Such a mechanism only provides post-event forensics and cannot provide real-time detection or early warning.
Disclosure of Invention
The present invention aims to solve one of the above problems.
The invention mainly aims to provide a risk detection method and system for a surveillance video.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the invention provides a risk detection method for a surveillance video, which comprises the following steps: a first camera performs video acquisition on an environment to be detected to obtain first video data and sends the first video data to a monitoring device, wherein the first video data at least comprise first time information, and the first time information cannot be changed; a second camera performs video acquisition on the environment to be detected to obtain second video data and sends the second video data to the monitoring device, wherein the second video data at least comprise second time information which cannot be changed, the first camera and the second camera are arranged at different positions in the environment to be detected, and the first time information and the second time information are obtained based on the same reference timing; the monitoring device receives the first video data and the second video data, identifies the face corresponding to a user to be analyzed in the first video data and the second video data, and determines the user to be analyzed; the monitoring device extracts from the first video data a first background feature present when the user to be analyzed is located at a first preset position, extracts from the second video data a second background feature present when the user to be analyzed is located at a second preset position, and calculates the matching degree between the background features to be analyzed and a preset background collaborative model, wherein the background features to be analyzed comprise the first background feature and the second background feature; the monitoring device extracts from the first time information a first moment at which the user to be analyzed appears at the first preset position, extracts from the second time information a second moment at which the user to be analyzed appears at the second preset position, and calculates the time interval between the first moment and the second moment; the monitoring device divides the face region of the user to be analyzed in the first video data into N1 grid regions, extracts the first concave-convex degree of the face of each grid region, compares it with a preset face concave-convex degree matching model, and calculates a first concave-convex degree matching value for the face of each grid region; the monitoring device likewise divides the face region of the user to be analyzed in the second video data into N2 grid regions, extracts the second concave-convex degree of the face of each grid region, compares it with the preset face concave-convex degree matching model, and calculates a second concave-convex degree matching value for the face of each grid region, wherein N1 ≥ 1 and N2 ≥ 1 are natural numbers; the monitoring device then generates a first comparison result and determines that a preset risk exists if any of the following conditions holds: the matching degree is lower than a preset background threshold; the time interval is greater than a preset time threshold; among the N1 matching results obtained by judging in turn whether the first concave-convex degree matching value of the face of each grid region falls within a preset threshold range, the number M1 of results indicating values outside that range satisfies M1/N1 greater than a preset threshold; or, among the N2 matching results obtained likewise for the second concave-convex degree matching values, the number M2 of out-of-range results satisfies M2/N2 greater than the preset threshold; wherein M1 ≤ N1 and M2 ≤ N2, and M1 and M2 are natural numbers.
After the monitoring device extracts the first concave-convex degree of the face of each grid region, before the extracted first concave-convex degree of the face of each grid region is compared with a preset face concave-convex degree matching model, the method further comprises the following steps: the monitoring device carries out distortion correction on the first concave-convex degree of the face of each grid area; comparing the extracted first concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model comprises the following steps: comparing the first concave-convex degree of the face of each grid region obtained after distortion correction with a preset face concave-convex degree matching model; and after the monitoring device extracts the second concave-convex degree of the face of each grid region, before comparing the extracted second concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model, the method further comprises the following steps: the monitoring device carries out distortion correction on the second concave-convex degree of the face of each grid area; the step of comparing the extracted second concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model comprises the following steps: and comparing the second concave-convex degree of the face of each grid region obtained after the distortion correction with a preset face concave-convex degree matching model.
Wherein, the method further comprises: the monitoring device generates a second comparison result and determines that no preset risk exists when the matching degree is not lower than the preset background threshold, the time interval is not greater than the preset time threshold, the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold.
Wherein, the method further comprises: the monitoring device receives training video data acquired by the first camera and the second camera in advance; the monitoring device respectively extracts training elements from the training video data, and obtains a preset background collaborative model, a preset time threshold and a preset human face concave-convex degree matching model according to training of the training elements.
Wherein, the method further comprises: and the monitoring device executes alarm operation after determining that the user to be analyzed has the preset risk.
In another aspect, the invention provides a risk detection system for surveillance video, comprising a first camera, a second camera and a monitoring device. The first camera is configured to perform video acquisition on an environment to be detected, obtain first video data and send the first video data to the monitoring device, wherein the first video data at least comprise first time information which cannot be changed. The second camera is configured to perform video acquisition on the environment to be detected, obtain second video data and send the second video data to the monitoring device, wherein the second video data at least comprise second time information which cannot be changed, the first camera and the second camera are arranged at different positions in the environment to be detected, and the first time information and the second time information are obtained based on the same reference timing. The monitoring device is configured to: receive the first video data and the second video data, identify the face corresponding to a user to be analyzed in both, and determine the user to be analyzed; extract from the first video data a first background feature present when the user to be analyzed is located at a first preset position, extract from the second video data a second background feature present when the user to be analyzed is located at a second preset position, and calculate the matching degree between the background features to be analyzed (comprising the first and second background features) and a preset background collaborative model; extract from the first time information a first moment at which the user to be analyzed appears at the first preset position, extract from the second time information a second moment at which the user to be analyzed appears at the second preset position, and calculate the time interval between the two moments; divide the face region of the user to be analyzed in the first video data into N1 grid regions, extract the first concave-convex degree of the face of each grid region, compare it with a preset face concave-convex degree matching model, and calculate a first concave-convex degree matching value for the face of each grid region; likewise divide the face region in the second video data into N2 grid regions and calculate a second concave-convex degree matching value for the face of each grid region, wherein N1 ≥ 1 and N2 ≥ 1 are natural numbers; and generate a first comparison result and determine that a preset risk exists if any of the following holds: the matching degree is lower than a preset background threshold; the time interval is greater than a preset time threshold; among the N1 matching results, the number M1 of first concave-convex degree matching values outside the preset threshold range satisfies M1/N1 greater than a preset threshold; or, among the N2 matching results, the number M2 of second concave-convex degree matching values outside the range satisfies M2/N2 greater than the preset threshold; wherein M1 ≤ N1 and M2 ≤ N2, and M1 and M2 are natural numbers.
The monitoring device is further used for carrying out distortion correction on the first concave-convex degree of the face of each grid region after the first concave-convex degree of the face of each grid region is extracted and before the extracted first concave-convex degree of the face of each grid region is compared with a preset face concave-convex degree matching model; the monitoring device is specifically used for comparing the obtained first concave-convex degree of the face of each grid region after distortion correction with a preset face concave-convex degree matching model; the monitoring device is also used for carrying out distortion correction on the second concave-convex degree of the face of each grid region after the second concave-convex degree of the face of each grid region is extracted and before the extracted second concave-convex degree of the face of each grid region is compared with a preset face concave-convex degree matching model; and the monitoring device is specifically used for comparing the second concave-convex degree of the face of each grid region obtained after distortion correction with a preset face concave-convex degree matching model.
The monitoring device is further configured to generate a second comparison result and determine that no preset risk exists when the matching degree is not lower than a preset background threshold and the time interval is not greater than a preset time threshold, and the ratio of M1 to N1 is not greater than a preset threshold and the ratio of M2 to N2 is not greater than a preset threshold.
The monitoring device is also used for receiving training video data acquired by the first camera and the second camera in advance; and respectively extracting training elements from the training video data, and training according to the training elements to obtain a preset background collaborative model, a preset time threshold and a preset human face concave-convex degree matching model.
The monitoring device is also used for executing alarm operation after determining that the user to be analyzed has the preset risk.
Therefore, in the risk detection method and system for surveillance video provided by the embodiments of the invention, at least two cameras are arranged at different positions: the time the user to be analyzed needs to pass from the first preset position captured by the first camera to the second preset position captured by the second camera is judged, and the background features at the two positions are analyzed. In addition, face recognition is performed on the video data transmitted by the cameras: the face is divided into a plurality of grid regions, the concave-convex degree of each grid region is matched against a preset face concave-convex degree matching model, and each region's matching value is judged against a preset threshold range. A grid region whose matching value falls outside the range is treated as an abnormal face region; when enough grid regions are abnormal, the existence of a preset risk is determined. In this way, suspicious persons can be recognized and preset risks (such as criminal intent) discovered in real time.
Drawings
In order to illustrate the technical solutions of the embodiments of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a risk detection method for surveillance video according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a risk detection system for a surveillance video according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or quantity or location.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected" and "coupled" are to be construed broadly: for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection, an indirect connection through intervening media, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a risk detection method for a surveillance video according to an embodiment of the present invention, and referring to fig. 1, the risk detection method for a surveillance video according to an embodiment of the present invention includes:
s101, a first camera carries out video acquisition on a first preset position in an environment to be detected to obtain first video data, and the first video data are sent to a monitoring device, wherein the first video data at least comprise first time information, and the first time information cannot be changed; the second camera carries out video acquisition on a second preset position in the environment to be detected, second video data are obtained, and the second video data are sent to the monitoring device, wherein the second video data at least comprise second time information, the second time information cannot be changed, the first camera and the second camera are arranged at different positions in the environment to be detected, and the first time information and the second time information are obtained based on the same reference timing.
Specifically, the first camera and the second camera are cameras arranged at different positions in the environment to be detected. For example, when the environment to be detected is a self-service bank, the first camera may be a camera mounted on the ATM, and the second camera may be an environment camera arranged in the self-service bank outside the ATM. Of course, in practical applications of the invention more than two cameras may be provided, which the invention does not limit.
The first camera performs video acquisition on the first preset position and the second camera on the second preset position. Because the acquisition is carried out from different positions, the two cameras capture different background features. The first preset position and the second preset position are positions a user necessarily passes when entering the environment to be detected to handle business, and both can be preset in the embodiment of the invention.
For example, the first camera performs video acquisition on the entrance and exit of the self-service bank, and the second camera on the area in front of the ATM. By performing video acquisition on at least two preset positions, whether the user's behavior is abnormal can be analyzed from the time difference with which the user to be analyzed arrives at the two positions.
As an optional implementation of the embodiment of the invention, when the first camera and the second camera perform video acquisition, current time information is added to the video data in a form that cannot be changed. For example, the current time information may be encrypted with an encryption algorithm to obtain protected time information, or signed with a signature algorithm, so as to ensure that it cannot be altered.
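To make the unchangeable time information concrete: a minimal sketch of one possible realization, assuming an HMAC keyed by a secret held in the camera's security chip (the helper names and key handling are illustrative assumptions, not taken from the patent):

```python
import hashlib
import hmac
import time

def tag_frame(frame_bytes: bytes, key: bytes) -> dict:
    """Bind the capture time to a frame with an HMAC, so any later change
    to the timestamp or the frame content is detectable."""
    ts = str(time.time()).encode()
    tag = hmac.new(key, ts + frame_bytes, hashlib.sha256).hexdigest()
    return {"timestamp": ts.decode(), "frame": frame_bytes, "tag": tag}

def verify_frame(record: dict, key: bytes) -> bool:
    """Monitoring-device side: reject records whose tag does not match."""
    expected = hmac.new(key, record["timestamp"].encode() + record["frame"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])
```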
It should be noted that the first time information and the second time information each denote all of the time information contained in the corresponding video data; that is, each is a time stream formed from a plurality of consecutive moments, for example recorded continuously in units of seconds.
The first time information and the second time information are obtained from the same time reference, for example both use Internet time; that is, at any given instant the first time information and the second time information are identical.
The first video data acquired by the first camera and the second video data acquired by the second camera are sent to the monitoring device in real time, or the acquired video data are sent to the monitoring device at regular time according to a preset period.
And S102, the monitoring device receives the first video data and the second video data, identifies faces corresponding to the users to be analyzed in the first video data and the second video data, and determines the users to be analyzed.
Specifically, the monitoring device may be disposed near the cameras or at the back end. For example, in a self-service banking environment it may be disposed in the ATM or in the bank's monitoring back end, which the invention does not limit. After receiving the first video data and the second video data, the monitoring device identifies a user from the first video data and a user from the second video data using face recognition technology; when the two are determined to be the same user, that user is determined to be the user to be analyzed.
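As a rough illustration of the same-user check, the following sketch compares face embeddings from the two streams by cosine similarity; the embedding function and the 0.6 threshold are assumptions, since the patent does not specify a recognition algorithm:

```python
import numpy as np

def same_user(emb_first: np.ndarray, emb_second: np.ndarray,
              threshold: float = 0.6) -> bool:
    """Treat the faces in the two video streams as the same person when the
    cosine similarity of their embeddings exceeds a threshold."""
    sim = float(np.dot(emb_first, emb_second)
                / (np.linalg.norm(emb_first) * np.linalg.norm(emb_second)))
    return sim >= threshold

# emb_first / emb_second would come from a face-recognition model applied to
# frames of the first and second video data, e.g. a hypothetical embed_face().
```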
S103, the monitoring device extracts a first background feature when the user to be analyzed is located at a first preset position from the first video data, extracts a second background feature when the user to be analyzed is located at a second preset position from the second video data, and calculates to obtain a matching degree between the background feature to be analyzed and a preset background collaborative model, wherein the background feature to be analyzed comprises the first background feature and the second background feature; the monitoring device extracts a first moment corresponding to the user to be analyzed when the first preset position appears from the first time information, extracts a second moment corresponding to the user to be analyzed when the second preset position appears from the second time information, and calculates a time interval between the first moment and the second moment; the monitoring device divides a face area of a user to be analyzed in the first video data into N1 grid areas, extracts a face first concave-convex degree of each grid area, compares the extracted face first concave-convex degree of each grid area with a preset face concave-convex degree matching model, calculates to obtain a face first concave-convex degree matching value of each grid area, divides the face area of the user to be analyzed in the second video data into N2 grid areas, extracts a face second concave-convex degree of each grid area, compares the extracted face second concave-convex degree of each grid area with the preset face concave-convex degree matching model, and calculates to obtain a face second concave-convex degree matching value of each grid area, wherein N1 is more than or equal to 1 and is a natural number, and N2 is more than or equal to 1 and is a natural number.
In particular, the background feature may comprise any feature of a background identifier in the environment and any combination thereof to serve as an identification of the background. For example, the position information of the static object, the shape information of the static object, the quantity information of the static object, the motion rule of the dynamic object, and the like can be included.
Specifically, a background collaborative model is preset in the monitoring device in order to analyze the background features. As an optional implementation of the embodiment of the invention, the monitoring device receives in advance training video data acquired by the first camera and the second camera, extracts training elements from the training video data, and trains the preset background collaborative model from these training elements. The background markers within each camera's shooting range are analyzed to generate the background collaborative model, and a reasonable background threshold range is set according to the first preset position and the second preset position along a normal user's movement track, which improves the intelligence and accuracy of the judgment.
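The patent leaves the model's internal form open; as one minimal reading, the following sketch fits per-dimension statistics over concatenated first-position and second-position background feature vectors from the training video (the dict layout is an assumption):

```python
import numpy as np

def train_background_model(training_features: list) -> dict:
    """Fit a trivial 'background collaborative model': per-dimension mean and
    standard deviation of the concatenated (first + second position)
    background feature vectors extracted from training video data."""
    stacked = np.stack(training_features)       # shape: (samples, dims)
    return {"mean": stacked.mean(axis=0),
            "std": stacked.std(axis=0) + 1e-8}  # avoid division by zero
```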
The extracted background features are input into the preset background collaborative model, and the matching degree between the two is calculated; the matching degree is a numerical value, for example a percentage.
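Continuing the sketch above, one illustrative way to map the comparison onto a percentage-style matching degree (the formula is an assumption; the patent only requires a numeric score):

```python
import numpy as np

def matching_degree(feature: np.ndarray, model: dict) -> float:
    """Score an observed background feature vector against the model from
    train_background_model(): 100 at the training mean, decreasing with the
    average z-scored distance."""
    z = np.abs((feature - model["mean"]) / model["std"]).mean()
    return 100.0 / (1.0 + z)
```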
Specifically, the monitoring device acquires first time information from the first video data, determines a first moment when the first time information is located when the user to be analyzed is at a first preset position, acquires second time information from the second video data, and determines a second moment when the second time information is located when the user to be analyzed is at a second preset position.
As an optional implementation of the embodiment of the invention, the risk detection method for the surveillance video further includes: the monitoring device receives in advance training video data acquired by the first camera and the second camera, extracts training elements from the training video data, and trains the preset time threshold from these training elements. Judging with a trained preset time threshold improves the intelligence and accuracy of the judgment.
Specifically, because a normal face has relief, when light emitted from a point is projected onto the face, the concave-convex degree at each position of the face can be measured. If the face is divided into N grid regions, the concave-convex degree of the light projected into each grid region differs from region to region but follows a certain rule. Specifically, the face region of the user to be analyzed in the first video data can be divided into N1 grid regions and the first concave-convex degree of the face of each grid region extracted; the face region of the user to be analyzed in the second video data is divided into N2 grid regions and the second concave-convex degree of the face of each grid region extracted.
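A sketch of the grid split, assuming the face region is available as a per-pixel relief (depth) map; how the relief is actually measured from the projected light is not detailed in the patent, so the per-cell statistic here (depth range) is illustrative:

```python
import numpy as np

def grid_relief(face_depth: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Split a face depth map into rows*cols grid regions (N = rows*cols)
    and return one concave-convex value per region: the depth range."""
    h, w = face_depth.shape
    cells = []
    for r in range(rows):
        for c in range(cols):
            cell = face_depth[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
            cells.append(float(cell.max() - cell.min()))
    return np.array(cells)  # length-N vector of per-region relief values
```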
After a camera captures the face information, the concave-convex degree within each grid is extracted. As an optional implementation of the embodiment of the invention, after the monitoring device extracts the face concave-convex degree of each grid region and before it compares the extracted values with the preset face concave-convex degree matching model, the risk detection method may further include: the monitoring device performs distortion correction on the face concave-convex degree of each grid region. Performing the analysis on distortion-corrected values improves the analysis accuracy of the monitoring device. Specifically, after the monitoring device extracts the first concave-convex degree of the face of each grid region and before comparing it with the preset face concave-convex degree matching model, the method further includes: the monitoring device performs distortion correction on the first concave-convex degree of the face of each grid region, and the comparison is then performed between the distortion-corrected first concave-convex degree of each grid region and the preset face concave-convex degree matching model. Likewise, after the monitoring device extracts the second concave-convex degree of the face of each grid region and before comparing it with the preset face concave-convex degree matching model, the method further includes: the monitoring device performs distortion correction on the second concave-convex degree of the face of each grid region, and the comparison is then performed between the distortion-corrected second concave-convex degree of each grid region and the preset face concave-convex degree matching model.
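The patent does not specify how the correction is computed; as a placeholder, the sketch below applies a per-cell linear correction whose gain and bias would come from calibrating each camera (both arrays are assumptions):

```python
import numpy as np

def correct_relief(relief: np.ndarray, gain: np.ndarray,
                   bias: np.ndarray) -> np.ndarray:
    """Distortion-correct the per-grid-region concave-convex values before
    they are compared with the matching model; gain and bias are length-N
    calibration arrays, one entry per grid region."""
    return relief * gain + bias
```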
The face concave-convex degree of each grid region is compared with the preset face concave-convex degree matching model to determine whether it conforms to the relief of a normal face, for use in the subsequent analysis.
As an optional implementation of the embodiment of the invention, the risk detection method further includes: the monitoring device receives in advance training video data acquired by the cameras, extracts training elements from the training video data, and trains the preset face concave-convex degree matching model from these training elements. Face recognition is performed on the faces captured by the cameras and their concave-convex degrees analyzed to generate the face concave-convex degree matching model; a reasonable preset threshold range is then set from the matching model of normal users, which improves the intelligence and accuracy of the judgment.
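One plausible shape for such a trained model, sketched under the assumption that it stores per-grid-cell statistics of normal faces and scores new faces cell by cell (the score formula is illustrative):

```python
import numpy as np

def train_relief_model(normal_faces: list) -> dict:
    """Per-grid-cell mean/std of relief values over normal training faces;
    each element of normal_faces is a length-N relief vector."""
    stacked = np.stack(normal_faces)            # shape: (num_faces, N)
    return {"mean": stacked.mean(axis=0), "std": stacked.std(axis=0) + 1e-8}

def relief_match_values(relief: np.ndarray, model: dict) -> np.ndarray:
    """One matching value per grid cell: 100 at the training mean, falling
    off with the cell's z-score distance."""
    z = np.abs((relief - model["mean"]) / model["std"])
    return 100.0 / (1.0 + z)
```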
It is worth mentioning that the preset background collaborative model, the preset time threshold and the preset face concave-convex degree matching model, each trained from training elements that the monitoring device extracts from the training video data received in advance from the first camera and the second camera, may be obtained in a single training session or in separate training sessions; the invention does not limit this.
S104, the monitoring device generates a first comparison result and determines that a preset risk exists if any of the following conditions holds: the matching degree is lower than the preset background threshold; the time interval is greater than the preset time threshold; among the N1 matching results obtained by judging in turn whether the first concave-convex degree matching value of the face of each grid region falls within the preset threshold range, the number M1 of results indicating values outside that range satisfies M1/N1 greater than the preset threshold; or, among the N2 matching results obtained likewise for the second concave-convex degree matching values of the face of each grid region, the number M2 of out-of-range results satisfies M2/N2 greater than the preset threshold; wherein M1 ≤ N1 and M2 ≤ N2, and M1 and M2 are natural numbers.
Specifically, when the matching degree is lower than the preset background threshold, the background features are considered not to match the background collaborative model, and a preset risk is considered to exist; for example, the video from which the background features were extracted may be at risk or the user to be analyzed may be at risk, such as the video having been tampered with, the camera having been hijacked, or the user having obstructed normal capture by the camera.
When the time length for the user to be analyzed to walk from the first preset position to the second preset position is longer than the preset time threshold, it can be determined that the behavior of the user to be analyzed is abnormal and does not conform to the normal behavior, and therefore the existence of the preset risk can be determined.
As an optional implementation of the embodiment of the invention, the preset time threshold may be a single value or a value interval. When the preset time threshold is a value interval, a detection result is generated and the existence of the preset risk determined if the time interval is greater than the maximum of the interval or smaller than its minimum. In this way, a transit time from the first preset position to the second preset position that is either too long or too short can be judged abnormal, and the existence of the preset risk determined.
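A small sketch covering both forms of the preset time threshold described above (single value or interval):

```python
def interval_abnormal(interval_s: float, threshold) -> bool:
    """threshold is either a single upper bound or a (minimum, maximum) pair;
    a transit time outside either form counts as abnormal."""
    if isinstance(threshold, tuple):
        minimum, maximum = threshold
        return interval_s < minimum or interval_s > maximum
    return interval_s > threshold
```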
The monitoring device judges in turn whether the face concave-convex degree matching value of each grid region falls within the preset threshold range, that is, whether the matching value is greater than or equal to a first preset value and less than or equal to a second preset value, the second preset value being larger than the first. A matching value below the first preset value or above the second preset value does not conform to the preset threshold range; the corresponding face region may then be at risk rather than a normal face, for example a mask or a photograph. When some grid regions do not conform to the preset threshold range, the monitoring device can further determine the preset risk with a voting mechanism: it counts the grid regions whose matching values do not conform to the preset threshold range and judges whether their ratio to the total number of grid regions exceeds a threshold, for example 50%; if it does, a preset risk is determined to exist. In this manner, a preset risk can be identified.
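The voting mechanism reduces to a ratio test; a sketch using the 50% figure from the text as the illustrative vote threshold, with [low, high] standing for the first and second preset values:

```python
import numpy as np

def face_abnormal(match_values: np.ndarray, low: float, high: float,
                  vote_ratio: float = 0.5) -> bool:
    """A grid region 'fails' when its matching value lies outside [low, high].
    The face is flagged when more than vote_ratio of the N regions fail,
    i.e. M/N > vote_ratio in the patent's notation."""
    failures = int(((match_values < low) | (match_values > high)).sum())
    return failures / len(match_values) > vote_ratio
```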
As an optional implementation manner of the present invention, when the matching degree is not lower than the preset background threshold, and the time interval is not greater than the preset time threshold, and the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold, the monitoring device generates a second comparison result, and determines that there is no preset risk. Because the matching degree between the background features and the background collaborative model is high enough, the time interval is not more than the preset time threshold value and accords with the normal behavior, and the concave-convex degree matching value of the face in the grid region accords with the preset threshold value range, the risk can be considered to be absent, for example: the video is not at risk or the user to be analyzed is not at risk.
As an optional embodiment of the present invention, the monitoring device performs an alarm operation after determining that the user to be analyzed presents the preset risk. The alarm operation may be performed by an alarm device in the environment to be detected, for example by sound and light; by an alarm device in the monitoring room of background monitoring personnel, for example a prompt on the monitoring display screen or an audible alert; or by sending a short message to monitoring personnel or the police. Alarming when a risk occurs further improves the efficiency of risk handling for self-service banks and ATMs.
Therefore, in the risk detection method for surveillance video provided by the embodiment of the invention, at least two cameras are arranged at different positions: the time the user to be analyzed needs to pass from the first preset position captured by the first camera to the second preset position captured by the second camera is judged, and the background features at the two positions are analyzed. In addition, face recognition is performed on the video data transmitted by the cameras: the face is divided into a plurality of grid regions, the concave-convex degree of each grid region is matched against the preset face concave-convex degree matching model, and each region's matching value is judged against the preset threshold range. A grid region whose matching value falls outside the range is treated as an abnormal face region; when enough grid regions are abnormal, the existence of the preset risk is determined. In this way, suspicious persons can be recognized and preset risks (such as criminal intent) discovered in real time.
As an optional embodiment of the present invention, first video data collected by a first camera is encrypted by a security chip disposed in the first camera, second video data collected by a second camera is encrypted by a security chip disposed in the second camera, the first camera sends the encrypted first video data to a monitoring device, and the second camera sends the encrypted second video data to the monitoring device; and after receiving the encrypted first video data and the encrypted second video data, the monitoring device decrypts the encrypted first video data and the encrypted second video data to obtain the first video data and the second video data. By carrying out encryption transmission on the video data, the security of video data transmission is improved, and the video data is prevented from being tampered after being cracked.
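The patent does not name a cipher; a minimal sketch of authenticated encryption for a video chunk using AES-GCM from the `cryptography` package, with the key assumed to be shared between the camera's security chip and the monitoring device:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_video_chunk(plain: bytes, key: bytes) -> bytes:
    """Camera side: encrypt a chunk; the 12-byte nonce is prepended so the
    monitoring device can decrypt. Tampering causes decryption to fail."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plain, None)

def decrypt_video_chunk(blob: bytes, key: bytes) -> bytes:
    """Monitoring-device side: split off the nonce and decrypt."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)
```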
The method comprises the steps that first video data collected by a first camera are signed through a security chip arranged in the first camera to obtain first signature data, second video data collected by a second camera are signed through a security chip arranged in the second camera to obtain second signature data, the first camera sends the first video data and the first signature data to a monitoring device, and the second camera sends the second video data and the second signature data to the monitoring device; and after receiving the first video data, the first signature data, the second video data and the second signature data, the monitoring device checks the first signature data and the second signature data, and uses the first video data and the second video data to perform subsequent analysis after the first signature data and the second signature data pass the checking. By signing the video data, the authenticity of the video data source can be ensured, and the video data can be prevented from being tampered.
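Likewise, the signature scheme is unspecified; a sketch with Ed25519 detached signatures from the `cryptography` package (key distribution is assumed to be handled out of band):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def sign_video(video: bytes, sk: Ed25519PrivateKey) -> bytes:
    """Camera side: produce detached signature data for a video chunk."""
    return sk.sign(video)

def verify_video(video: bytes, sig: bytes, pk: Ed25519PublicKey) -> bool:
    """Monitoring-device side: analyze the video only if the check passes."""
    try:
        pk.verify(sig, video)
        return True
    except InvalidSignature:
        return False
```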
Fig. 2 is a schematic structural diagram of a risk detection system for surveillance video according to an embodiment of the present invention. The method described above is applied in this system. Only the structure of the system is briefly described below; for anything not detailed here, refer to the related description of the risk detection method for surveillance video. Referring to fig. 2, the risk detection system for surveillance video according to the embodiment of the invention includes:
the first camera 201 is configured to perform video acquisition on an environment to be detected, obtain first video data, and send the first video data to the monitoring device, where the first video data at least includes first time information, and the first time information is unchangeable;
the second camera 202 is configured to perform video acquisition on an environment to be detected, obtain second video data, and send the second video data to the monitoring device, where the second video data at least includes second time information that cannot be changed, the first camera and the second camera are disposed at different positions in the environment to be detected, and the first time information and the second time information are obtained based on the same reference timing;
the monitoring device 203 is configured to: receive the first video data and the second video data, identify the face corresponding to a user to be analyzed in both, and determine the user to be analyzed; extract from the first video data a first background feature present when the user to be analyzed is located at a first preset position, extract from the second video data a second background feature present when the user to be analyzed is located at a second preset position, and calculate the matching degree between the background features to be analyzed (comprising the first and second background features) and a preset background collaborative model; extract from the first time information a first moment at which the user to be analyzed appears at the first preset position, extract from the second time information a second moment at which the user to be analyzed appears at the second preset position, and calculate the time interval between the two moments; divide the face region of the user to be analyzed in the first video data into N1 grid regions, extract the first concave-convex degree of the face of each grid region, compare it with a preset face concave-convex degree matching model, and calculate a first concave-convex degree matching value for the face of each grid region; likewise divide the face region of the user to be analyzed in the second video data into N2 grid regions and calculate a second concave-convex degree matching value for the face of each grid region, wherein N1 ≥ 1 and N2 ≥ 1 are natural numbers; and generate a first comparison result and determine that a preset risk exists if any of the following holds: the matching degree is lower than a preset background threshold; the time interval is greater than a preset time threshold; among the N1 matching results, the number M1 of first concave-convex degree matching values outside the preset threshold range satisfies M1/N1 greater than a preset threshold; or, among the N2 matching results, the number M2 of second concave-convex degree matching values outside the range satisfies M2/N2 greater than the preset threshold; wherein M1 ≤ N1 and M2 ≤ N2, and M1 and M2 are natural numbers.
Therefore, in the risk detection system for surveillance video provided by the embodiment of the invention, at least two cameras are arranged at different positions: the time the user to be analyzed needs to pass from the first preset position captured by the first camera to the second preset position captured by the second camera is judged, and the background features at the two positions are analyzed. In addition, face recognition is performed on the video data transmitted by the cameras: the face is divided into a plurality of grid regions, the concave-convex degree of each grid region is matched against the preset face concave-convex degree matching model, and each region's matching value is judged against the preset threshold range. A grid region whose matching value falls outside the range is treated as an abnormal face region; when enough grid regions are abnormal, the existence of the preset risk is determined. In this way, suspicious persons can be recognized and preset risks (such as criminal intent) discovered in real time.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to, after extracting the first concave-convex degree of the face in each mesh region, perform distortion correction on the first concave-convex degree of the face in each mesh region before comparing the extracted first concave-convex degree of the face in each mesh region with a preset face concave-convex degree matching model; the monitoring device 203 is specifically configured to compare the first concave-convex degree of the face of each mesh region obtained after the distortion correction with a preset face concave-convex degree matching model; the monitoring device 203 is further configured to perform distortion correction on the second human face concavity and convexity of each mesh region after the second human face concavity and convexity of each mesh region is extracted and before the extracted second human face concavity and convexity of each mesh region is compared with a preset human face concavity and convexity matching model; the monitoring device 203 is specifically configured to compare the second human face concavity and convexity of each mesh region obtained after the distortion correction with a preset human face concavity and convexity matching model. And after distortion correction, the distortion correction is sent to the monitoring device for analysis, so that the analysis accuracy of the monitoring device is improved.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to generate a second comparison result when the matching degree is not lower than the preset background threshold, and the time interval is not greater than the preset time threshold, and the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold, so as to determine that there is no preset risk. Because the matching degree between the background features and the background collaborative model is high enough, the time interval is not more than the preset time threshold value and accords with the normal behavior, and the concave-convex degree matching value of the face in the grid region accords with the preset threshold value range, the risk can be considered to be absent, for example: the video is not at risk or the user to be analyzed is not at risk.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to receive in advance training video data acquired by the first camera and the second camera, to extract training elements from the training video data, and to train the preset background collaborative model, the preset time threshold and the preset face concave-convex degree matching model from these elements. The background markers within each camera's shooting range are analyzed to generate the background collaborative model, and a reasonable background threshold range is set according to the first and second preset positions along a normal user's movement track; the trained preset time threshold is used for judgment; and face recognition is performed on the captured faces and their concave-convex degrees analyzed to generate the face concave-convex degree matching model, from which a reasonable preset threshold range is set for normal users. All of this improves the intelligence and accuracy of the judgment.
As an optional embodiment of the present invention, the monitoring device 203 is further configured to perform an alarm operation after determining that the user to be analyzed carries a preset risk. Raising an alarm as soon as a risk occurs further improves the efficiency of risk handling for self-service banking and ATMs.
As an optional embodiment of the present invention, the first video data collected by the first camera 201 is encrypted by a security chip disposed in the first camera, and the second video data collected by the second camera 202 is encrypted by a security chip disposed in the second camera; the first camera 201 sends the encrypted first video data to the monitoring device 203, and the second camera 202 sends the encrypted second video data to the monitoring device 203. After receiving the encrypted first video data and the encrypted second video data, the monitoring device 203 decrypts them to recover the first video data and the second video data. Transmitting the video data in encrypted form improves the security of the transmission and prevents the video data from being cracked and tampered with.
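A minimal software sketch of this encrypted transmission follows, assuming AES-GCM as the cipher; in practice the security chip's own algorithm and key storage would be used, which this embodiment does not specify:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_video(key: bytes, video_data: bytes) -> bytes:
    """Encrypt raw video bytes; prepend the nonce so the receiver can decrypt."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per transmission
    return nonce + AESGCM(key).encrypt(nonce, video_data, None)

def decrypt_video(key: bytes, blob: bytes) -> bytes:
    """Split off the nonce and authenticate-then-decrypt the payload."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

key = AESGCM.generate_key(bit_length=256)  # shared with the monitoring device
blob = encrypt_video(key, b"raw frames ...")
assert decrypt_video(key, blob) == b"raw frames ..."
```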
As an optional embodiment of the present invention, the first video data acquired by the first camera 201 is signed by the security chip disposed in the first camera to obtain first signature data, and the second video data acquired by the second camera 202 is signed by the security chip disposed in the second camera to obtain second signature data; the first camera 201 sends the first video data and the first signature data to the monitoring device 203, and the second camera 202 sends the second video data and the second signature data to the monitoring device 203. After receiving the first video data and the first signature data, and the second video data and the second signature data, the monitoring device 203 verifies the two signatures and performs the subsequent analysis with the first video data and the second video data only after the verification passes. Signing the video data guarantees the authenticity of its source and prevents the video data from being tampered with.
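Likewise, the sign-then-verify flow can be sketched as below, with Ed25519 as an assumed software stand-in for the security chip's signature algorithm; the key provisioning shown is illustrative:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()      # would live inside the camera's chip
monitor_trusted_key = camera_key.public_key()  # provisioned to the monitoring device

def sign_video(video_data: bytes) -> bytes:
    return camera_key.sign(video_data)

def verify_video(video_data: bytes, signature: bytes) -> bool:
    """True only if the data was signed by the enrolled camera and is unmodified."""
    try:
        monitor_trusted_key.verify(signature, video_data)
        return True
    except InvalidSignature:
        return False

sig = sign_video(b"raw frames ...")
assert verify_video(b"raw frames ...", sig)
```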
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by program instructions directing the related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. A risk detection method for surveillance video is characterized by comprising the following steps:
the method comprises the steps that a first camera carries out video acquisition on an environment to be detected to obtain first video data, and the first video data are sent to a monitoring device, wherein the first video data at least comprise first time information, and the first time information cannot be changed;
the method comprises the steps that a second camera carries out video acquisition on the environment to be detected to obtain second video data, and the second video data are sent to the monitoring device, wherein the second video data at least comprise second time information, and the second time information cannot be changed; the first camera and the second camera are disposed at different positions in the environment to be detected, and the first time information and the second time information are obtained based on the same reference timing;
the monitoring device receives the first video data and the second video data, identifies a face corresponding to a user to be analyzed in the first video data and the second video data, and determines the user to be analyzed;
the monitoring device extracts, from the first video data, a first background feature obtained when the user to be analyzed is located at a first preset position, extracts, from the second video data, a second background feature obtained when the user to be analyzed is located at a second preset position, and calculates a matching degree between a background feature to be analyzed and a preset background collaborative model, wherein the background feature to be analyzed comprises the first background feature and the second background feature;
and
the monitoring device extracts, from the first time information, a first moment at which the user to be analyzed appears at the first preset position, extracts, from the second time information, a second moment at which the user to be analyzed appears at the second preset position, and calculates a time interval between the first moment and the second moment;
and
the monitoring device divides the face area of the user to be analyzed in the first video data into N1 grid regions, extracts the first concave-convex degree of the face of each grid region, compares the extracted first concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model, and calculates the first concave-convex degree matching value of the face of each grid region; the monitoring device divides the face area of the user to be analyzed in the second video data into N2 grid regions, extracts the second concave-convex degree of the face of each grid region, compares the extracted second concave-convex degree of the face of each grid region with the preset face concave-convex degree matching model, and calculates the second concave-convex degree matching value of the face of each grid region, wherein N1 is greater than or equal to 1 and is a natural number, and N2 is greater than or equal to 1 and is a natural number;
the monitoring device compares the matching degree with a preset background threshold, judges whether the time interval is larger than a preset time threshold, sequentially judges whether the first concave-convex degree matching value of the face of each grid region falls within a preset threshold range to obtain N1 matching results and obtains, from the N1 matching results, M1 matching results indicating that the first concave-convex degree matching value does not fall within the preset threshold range, and sequentially judges whether the second concave-convex degree matching value of the face of each grid region falls within the preset threshold range to obtain N2 matching results and obtains, from the N2 matching results, M2 matching results indicating that the second concave-convex degree matching value does not fall within the preset threshold range; if the matching degree is lower than the preset background threshold, or the time interval is larger than the preset time threshold, or the ratio of M1 to N1 is larger than a preset threshold, or the ratio of M2 to N2 is larger than the preset threshold, the monitoring device generates a first comparison result, so as to determine that a preset risk exists, wherein M1 is not larger than N1 and is a natural number, and M2 is not larger than N2 and is a natural number.
2. The method of claim 1,
after the monitoring device extracts the first concave-convex degree of the face of each grid region and before the extracted first concave-convex degree of the face of each grid region is compared with a preset face concave-convex degree matching model, the method further comprises:
the monitoring device performs distortion correction on the first concave-convex degree of the face of each grid region;
the step of comparing the extracted first concave-convex degree of the face of each grid region with the preset face concave-convex degree matching model comprises:
comparing the first concave-convex degree of the face of each grid region obtained after distortion correction with the preset face concave-convex degree matching model;
and
after the monitoring device extracts the second concave-convex degree of the face of each grid region and before the extracted second concave-convex degree of the face of each grid region is compared with the preset face concave-convex degree matching model, the method further comprises:
the monitoring device performs distortion correction on the second concave-convex degree of the face of each grid region;
the step of comparing the extracted second concave-convex degree of the face of each grid region with the preset face concave-convex degree matching model comprises:
comparing the second concave-convex degree of the face of each grid region obtained after distortion correction with the preset face concave-convex degree matching model.
3. The method of claim 1 or 2, further comprising:
when the matching degree is not lower than the preset background threshold, the time interval is not greater than the preset time threshold, the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold, the monitoring device generates a second comparison result and determines that no preset risk exists.
4. The method of claim 1 or 2, further comprising:
the monitoring device receives training video data acquired by the first camera and the second camera in advance;
the monitoring device respectively extracts training elements from the training video data, and trains on the training elements to obtain the preset background collaborative model, the preset time threshold and the preset face concave-convex degree matching model.
5. The method of claim 1 or 2, further comprising:
the monitoring device performs an alarm operation after determining that the user to be analyzed has the preset risk.
6. A risk detection system for surveillance video, comprising:
a first camera, configured to carry out video acquisition on an environment to be detected, obtain first video data and send the first video data to a monitoring device, wherein the first video data at least comprise first time information, and the first time information cannot be changed;
a second camera, configured to carry out video acquisition on the environment to be detected, obtain second video data and send the second video data to the monitoring device, wherein the second video data at least comprise second time information, and the second time information cannot be changed; the first camera and the second camera are disposed at different positions in the environment to be detected, and the first time information and the second time information are obtained based on the same reference timing;
the monitoring device, configured to receive the first video data and the second video data, identify the face corresponding to a user to be analyzed in the first video data and the second video data, and determine the user to be analyzed; extract, from the first video data, a first background feature obtained when the user to be analyzed is located at a first preset position, extract, from the second video data, a second background feature obtained when the user to be analyzed is located at a second preset position, and calculate a matching degree between a background feature to be analyzed and a preset background collaborative model, wherein the background feature to be analyzed comprises the first background feature and the second background feature; extract, from the first time information, a first moment at which the user to be analyzed appears at the first preset position, extract, from the second time information, a second moment at which the user to be analyzed appears at the second preset position, and calculate a time interval between the first moment and the second moment; divide the face area of the user to be analyzed in the first video data into N1 grid regions, extract the first concave-convex degree of the face of each grid region, compare the extracted first concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model, and calculate the first concave-convex degree matching value of the face of each grid region; divide the face area of the user to be analyzed in the second video data into N2 grid regions, extract the second concave-convex degree of the face of each grid region, compare the extracted second concave-convex degree of the face of each grid region with the preset face concave-convex degree matching model, and calculate the second concave-convex degree matching value of the face of each grid region, wherein N1 is greater than or equal to 1 and is a natural number, and N2 is greater than or equal to 1 and is a natural number; compare the matching degree with a preset background threshold, judge whether the time interval is larger than a preset time threshold, sequentially judge whether the first concave-convex degree matching value of the face of each grid region falls within a preset threshold range to obtain N1 matching results and obtain, from the N1 matching results, M1 matching results indicating that the first concave-convex degree matching value does not fall within the preset threshold range, and sequentially judge whether the second concave-convex degree matching value of the face of each grid region falls within the preset threshold range to obtain N2 matching results and obtain, from the N2 matching results, M2 matching results indicating that the second concave-convex degree matching value does not fall within the preset threshold range; and, if the matching degree is lower than the preset background threshold, or the time interval is larger than the preset time threshold, or the ratio of M1 to N1 is larger than a preset threshold, or the ratio of M2 to N2 is larger than the preset threshold, generate a first comparison result, so as to determine that a preset risk exists, wherein M1 is not larger than N1 and is a natural number, and M2 is not larger than N2 and is a natural number.
7. The system according to claim 6, wherein the monitoring device is further configured to, after extracting the first concave-convex degree of the face of each grid region and before comparing the extracted first concave-convex degree of the face of each grid region with a preset face concave-convex degree matching model, perform distortion correction on the first concave-convex degree of the face of each grid region;
the monitoring device is specifically configured to compare the first concave-convex degree of the face of each grid region obtained after distortion correction with the preset face concave-convex degree matching model;
and
the monitoring device is further configured to, after extracting the second concave-convex degree of the face of each grid region and before comparing the extracted second concave-convex degree of the face of each grid region with the preset face concave-convex degree matching model, perform distortion correction on the second concave-convex degree of the face of each grid region;
the monitoring device is specifically configured to compare the second concave-convex degree of the face of each grid region obtained after distortion correction with the preset face concave-convex degree matching model.
8. The system according to claim 6 or 7, wherein the monitoring device is further configured to generate a second comparison result when the matching degree is not lower than the preset background threshold, the time interval is not greater than the preset time threshold, the ratio of M1 to N1 is not greater than the preset threshold, and the ratio of M2 to N2 is not greater than the preset threshold, so as to determine that no preset risk exists.
9. The system according to claim 6 or 7, wherein the monitoring device is further configured to receive training video data acquired in advance by the first camera and the second camera, to extract training elements from the training video data, and to train on the training elements to obtain the preset background collaborative model, the preset time threshold and the preset face concave-convex degree matching model.
10. The system according to claim 6 or 7, wherein the monitoring device is further configured to perform an alarm operation after determining that the user to be analyzed has a preset risk.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811311956.8A 2018-11-06 2018-11-06 Risk detection method and system for monitoring video

Publications (2)

Publication Number Publication Date
CN111144180A 2020-05-12
CN111144180B 2023-04-07

Family

ID=70516207


Citations (3)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
GB0502371D0 * 2005-02-04 2005-03-16 British Telecomm Identifying spurious regions in a video frame
CN105825524A * 2016-03-10 2016-08-03 浙江生辉照明有限公司 Target tracking method and apparatus
WO2018040099A1 * 2016-08-31 2018-03-08 深圳市唯特视科技有限公司 Three-dimensional face reconstruction method based on grayscale and depth information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on 3D face recognition algorithms (三维人脸识别算法研究); Hu Min et al.; Imaging Science and Photochemistry (影像科学与光化学); 2017-03-15 (No. 02); full text *


Similar Documents

Publication Title
CN102004904B Automatic teller machine-based safety monitoring device and method, and automatic teller machine
TWI776796B Financial terminal security system and financial terminal security method
CN103714631B ATM cash dispenser intelligent monitoring system based on face recognition
CN102414724B Self-service terminal having at least one camera for detecting attempted crimes
CN101609581A Anomalous video warning device for ATMs
JP5992681B2 Unusual condition detection system for congestion
CN102414725A Method for recognizing attempts at manipulating a self-service terminal, and data processing unit therefor
CN109961587A Monitoring system for self-service banking
CN107330414 Violent behavior monitoring method
CN101426128B Detection system and method for stolen and lost packages
CN116564017A Currency trap detection
CN113537034A Cashier loss prevention method and system
CN111144181A Risk detection method, device and system based on background collaboration
CN119339318B Campus abnormal behavior detection system based on artificial intelligence
CN112016509B Personnel station abnormality reminding method and device
CN114973135A Head-and-shoulders-based time-series video sleeping-post identification method, system and electronic device
CN103971100A Video-based camouflage and peeping behavior detection method for automated teller machines
CN111144180B Risk detection method and system for monitoring video
CN114882589A Granary safety operation early warning system and method based on intelligent video analysis
CN202472805U ATM safety prevention and control device
CN111144182B Method and system for detecting face risk in video
CN109961588A Monitoring system
CN107368728A Visualized alarm management system and method
CN111144183B Risk detection method, device and system based on face concave-convex degree
CN117893972A Abnormal behavior detection method based on video recording

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant