

Informationized registering management device and method for tourist attraction

Info

Publication number
CN119580392B
Authority
CN
China
Prior art keywords
image
target
face
histogram
key point
Prior art date
Legal status
Active
Application number
CN202411648676.1A
Other languages
Chinese (zh)
Other versions
CN119580392A (en)
Inventor
刘紫颖
罗芬
Current Assignee
Central South University of Forestry and Technology
Original Assignee
Central South University of Forestry and Technology
Priority date
Filing date
Publication date
Application filed by Central South University of Forestry and Technology filed Critical Central South University of Forestry and Technology
Priority to CN202411648676.1A priority Critical patent/CN119580392B/en
Publication of CN119580392A publication Critical patent/CN119580392A/en
Application granted granted Critical
Publication of CN119580392B publication Critical patent/CN119580392B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G07F 9/009: Coin-freed or like apparatus; details other than those peculiar to special kinds or types of apparatus; user recognition or proximity detection
    • G06T 7/11: Image analysis; segmentation and edge detection; region-based segmentation
    • G06T 7/50: Image analysis; depth or shape recovery
    • G06V 40/165: Recognition of human faces; detection, localisation or normalisation using facial parts and geometric relationships
    • H04N 23/57: Cameras or camera modules; mechanical or electrical details specially adapted for being embedded in other devices
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an informationized deposit management device and method for tourist attractions, and relates to the technical field of deposit management. The method comprises: collecting a left image and a right image with a binocular camera; detecting the face regions to obtain a left face image and a right face image; obtaining a target disparity map from the left and right face images; determining the depth change from the target disparity map; if the depth change is larger than a preset change, extracting features from the left and right face images to obtain target features; and opening the corresponding storage cabinet according to the target features. Face detection on the images captured by the binocular camera yields the face regions, and the disparity map computed from the face images then yields the depth change, so the method can effectively judge whether the recognized object is a real person or a photo, which improves security. Finally, by extracting and matching face depth features, the corresponding cabinet number is found accurately and the unlocking operation is performed, which simplifies the deposit-and-retrieval process while ensuring security.

Description

Informationized registering management device and method for tourist attraction
Technical Field
The invention belongs to the technical field of deposit management, and particularly relates to an informationized deposit management device and method for tourist attractions.
Background
Conventional lockers at tourist attractions have many shortcomings in practice and struggle to meet the needs of today's visitors and attraction managers. Their identity verification usually relies on simple passwords, certificates or swipe cards, which are neither safe nor reliable enough in the crowded environment of an attraction. Visitors may be unable to retrieve their belongings after losing a certificate or forgetting a password, and passwords are easily peeped at or tampered with by others; once a certificate is lost or a password is leaked, the deposited items may be taken by mistake or stolen, so the personal property of visitors cannot be fully protected.
To improve convenience and security, many attractions have tried to replace traditional identity verification with face recognition, verifying deposit and retrieval through the face. However, face recognition also faces new problems in practice, in particular the risk of photo attacks: a simple face recognition system is easily deceived by static photos, videos or forged images, leading to wrongful unlocking or illegal retrieval. So although face recognition simplifies the deposit-and-retrieval process to some extent, it also introduces potential safety hazards.
Disclosure of Invention
The invention aims to solve the above problems and provides an informationized deposit management device and method for tourist attractions.
In a first aspect of the present invention, there is provided a method for information-based deposit management for tourist attractions, the method comprising:
When receiving an object taking instruction, starting a binocular camera to acquire images in a target area to obtain a left image and a right image;
detecting face areas of the left image and the right image to obtain a first target area and a second target area, cutting the left image according to the first target area to obtain a left face image, and cutting the right image according to the second target area to obtain a right face image;
Obtaining a target parallax image according to the left face image and the right face image, performing depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
If the depth change is larger than the preset change, extracting features of the left face image and the right face image to obtain target features;
and searching a preset database according to the target characteristics to obtain a storage cabinet number, and starting a corresponding storage cabinet according to the storage cabinet number.
Optionally, the detecting the face regions of the left image and the right image to obtain the first target region and the second target region includes:
Preprocessing an initial face image to obtain a preprocessed image, and performing image enhancement on the preprocessed image to obtain a target image;
Substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is the first target area if the initial face image is the left image, and the target detection frame is the second target area if the initial face image is the right image.
Optionally, obtaining the target parallax map according to the left face image and the right face image includes:
Performing key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
Matching the key points in the first key point set and the second key point set to obtain a key point combination set, and obtaining a key point combination parallax value by a semi-global block matching algorithm for each key point combination;
And substituting the left face image and the right face image into a real-time stereo matching network to obtain a global parallax image, and correcting the global parallax image according to the combined parallax values of all the key points to obtain a target parallax image.
Optionally, extracting the features of the left face image and the right face image to obtain the target features includes:
dividing the left face image and the right face image by a target grid aiming at the left face image and the right face image to obtain a left face square set and a right face square set;
for each square in the left face square set and the right face square set, extracting features of the square to obtain square features;
Generating a left face histogram according to all the square features in the left face square set, and generating a right face histogram according to all the square features in the right face square set;
and splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
Optionally, the method further comprises:
when receiving an object storage instruction, starting a binocular camera to acquire images in a target area to obtain a first left image and a first right image, and acquiring object information;
detecting face areas of the first left image and the first right image to obtain a first target acquisition area and a second target acquisition area, cutting the first left image according to the first target acquisition area to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition area to obtain a right face acquisition image;
Substituting the left face acquisition image and the right face acquisition image into the preset model to obtain target characteristics, and searching the express cabinet according to the article information to obtain the express cabinet number;
and binding the express cabinet number with the target feature and then storing the bound express cabinet number and the target feature into the preset database.
In a second aspect of the present invention, an informationized deposit management device for tourist attractions is provided, comprising:
The object taking image acquisition module is used for starting the binocular camera to acquire images in the target area when an object taking instruction is received so as to obtain a left image and a right image;
The face image acquisition module is used for respectively carrying out face region detection on the left image and the right image to obtain a first target region and a second target region, cutting the left image according to the first target region to obtain a left face image, and cutting the right image according to the second target region to obtain a right face image;
The depth change determining module is used for obtaining a target parallax image according to the left face image and the right face image, carrying out depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
the target feature determining module is used for extracting features of the left face image and the right face image to obtain target features if the depth change is larger than a preset change;
And the storage cabinet number searching module is used for searching a preset database according to the target characteristics to obtain a storage cabinet number, and opening the corresponding storage cabinet according to the storage cabinet number.
Optionally, the face image acquisition module includes:
the image enhancement module is used for preprocessing an initial face image to obtain a preprocessed image and enhancing the preprocessed image to obtain a target image;
the target detection frame determining module is used for substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is the first target area if the initial face image is the left image, and the target detection frame is the second target area if the initial face image is the right image.
Optionally, the depth change determining module includes:
The key point identification module is used for respectively carrying out key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
The key point matching module is used for matching key points in the first key point set and the second key point set to obtain a key point combination set, and aiming at each key point combination, the key point combination parallax value is obtained through a semi-global block matching algorithm;
And the target parallax map determining module is used for substituting the left face image and the right face image into a real-time stereo matching network to obtain a global parallax map, and correcting the global parallax map according to the combined parallax values of all the key points to obtain a target parallax map.
Optionally, the target feature determining module includes:
The image segmentation module is used for segmenting the left face image and the right face image through a target grid aiming at the left face image and the right face image to obtain a left face square set and a right face square set;
the block feature extraction module is used for extracting features of each block in the left face block set and the right face block set to obtain block features;
the histogram generation module is used for generating a left face histogram according to all the square features in the left face square set and generating a right face histogram according to all the square features in the right face square set;
And the histogram feature fusion module is used for splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
Optionally, the method further comprises:
The object storage image acquisition module is used for starting the binocular camera to acquire images in the target area when an object storage instruction is received to obtain a first left image and a first right image, and acquiring object information;
The second face image acquisition module is used for respectively carrying out face region detection on the first left image and the first right image to obtain a first target acquisition region and a second target acquisition region, cutting the first left image according to the first target acquisition region to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition region to obtain a right face acquisition image;
The express cabinet number generation module is used for substituting the left face acquisition image and the right face acquisition image into the preset model to obtain target characteristics, and searching the express cabinet according to the article information to obtain the express cabinet number;
and the data storage module is used for binding the express cabinet number and the target characteristic and then storing the bound express cabinet number and target characteristic into the preset database.
The invention has the beneficial effects that:
The invention provides an informationized deposit management method for tourist attractions. When an object-taking instruction is received, a binocular camera is started to collect images in a target area to obtain a left image and a right image; face region detection is performed on the left image and the right image respectively to obtain a first target region and a second target region; the left image is cut according to the first target region to obtain a left face image, and the right image is cut according to the second target region to obtain a right face image; a target disparity map is obtained from the left face image and the right face image, depth change processing is performed on the target disparity map to obtain a target depth image, and the depth change of the target depth image is extracted; if the depth change is larger than a preset change, feature extraction is performed on the left face image and the right face image to obtain target features; a preset database is searched according to the target features to obtain a storage cabinet number, and the corresponding storage cabinet is opened according to the cabinet number. Performing face detection on the images acquired by the binocular camera to obtain the face regions reduces the amount of data to be processed and increases detection speed; deriving the depth change from the disparity map of the face images makes it possible to effectively judge whether the recognized object is a real person or a photo, which improves security; and extracting and matching face depth features finds the corresponding cabinet number accurately and performs the unlocking operation, which simplifies the deposit-and-retrieval process while ensuring security.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for informationized register management for tourist attractions according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an informationized register management device for tourist attractions according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. The term "and/or" merely describes an association relation between associated objects and indicates that three relations may exist; for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone. Furthermore, descriptions such as "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or an order among the indicated technical features; thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be considered absent and outside the scope of protection claimed by the present invention.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention provides an informationized registering management method for tourist attractions. Referring to fig. 1, fig. 1 is a flowchart of an informationized register management method for tourist attractions according to an embodiment of the present invention. The method comprises the following steps:
S101, when an object taking instruction is received, starting a binocular camera to acquire images in a target area to obtain a left image and a right image;
s102, face area detection is carried out on a left image and a right image respectively to obtain a first target area and a second target area, the left image is cut according to the first target area to obtain a left face image, and the right image is cut according to the second target area to obtain a right face image;
S103, obtaining a target parallax image according to the left face image and the right face image, performing depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
S104, if the depth change is larger than the preset change, extracting the characteristics of the left face image and the right face image to obtain target characteristics;
s105, searching a preset database according to the target characteristics to obtain a storage cabinet number, and opening the corresponding storage cabinet according to the storage cabinet number.
According to the informationized deposit management method for tourist attractions provided by the embodiment of the invention, the face region is obtained by performing face detection on the images acquired by the binocular camera, which reduces the amount of data to be processed and increases detection speed; the depth change is then derived from the disparity map determined from the face images, so that whether the recognized object is a real person or a photo can be effectively judged, which improves security; and finally, by extracting and matching face depth features, the corresponding cabinet number is found accurately and the unlocking operation is performed, which simplifies the deposit-and-retrieval process while ensuring security.
In one implementation, acquiring images of the target area with the binocular camera makes it possible to construct the disparity between the left and right images, so that depth information can be extracted and the face position accurately located, which reduces the false-trigger rate. Face feature extraction only starts after the depth change reaches a specific threshold, which saves computing resources: recognition and matching are performed only when the condition is met, improving the response efficiency of the system.
In one implementation, combining face recognition with depth information improves recognition accuracy and reduces the possibility of misrecognition, and feature extraction with database matching further strengthens the reliability of user identification, thus lowering the risk of unauthorized access. The target area is the area in which the binocular camera can capture a face, and visitors can be prompted to stand at the designated position inside it.
In one implementation, if the depth change is less than or equal to the preset change, the detected object is relatively flat and shows no significant three-dimensional variation, so it may be a photo rather than a real person; in this case the subsequent detection process is stopped.
In one implementation, only the binocular camera acquires images, which reduces resource consumption, and the user does not need to enter a password, which simplifies the deposit-and-retrieval process. The preset change is determined by a technician, and the data stored in the preset database is the data generated when an item is deposited.
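By way of a hedged illustration only, the liveness decision based on depth change might look like the following Python sketch. It replaces the keypoint-corrected disparity map of the embodiment with a plain semi-global block matcher, assumes rectified face crops of equal size, and the camera parameters (focal_px, baseline_m) and the preset change threshold are assumed values, not figures taken from the patent.

import cv2
import numpy as np

def depth_change_is_live(left_face, right_face,
                         focal_px=800.0, baseline_m=0.06,
                         preset_change_m=0.02):
    """Return True when the face region shows enough 3-D relief to be a real person."""
    gray_l = cv2.cvtColor(left_face, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_face, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0  # SGBM output is 16x fixed point

    valid = disparity > 0                              # keep only successfully matched pixels
    if not np.any(valid):
        return False
    depth = focal_px * baseline_m / disparity[valid]   # depth in metres: Z = f * B / d

    # "Depth change": spread of depth inside the face region; a flat photo gives
    # a much smaller spread than a real face (nose vs. cheeks vs. ears).
    depth_change = np.percentile(depth, 95) - np.percentile(depth, 5)
    return depth_change > preset_change_m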
In one embodiment, performing face region detection on the left image and the right image to obtain a first target region and a second target region includes:
Preprocessing an initial face image to obtain a preprocessed image, and performing image enhancement on the preprocessed image to obtain a target image;
Substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is a first target area if the initial face image is a left image, and is a second target area if the initial face image is a right image.
In one implementation, image preprocessing and image enhancement improve image quality, so that blurred, poorly lit or low-contrast images become clear, which improves the recognition accuracy of the subsequent face detection model; the effect is especially noticeable under complex illumination or low resolution. Delimiting the target detection frame allows the face regions in the left and right images to be accurately identified and marked, which reduces false or missed detections, facilitates the disparity analysis in the subsequent stereo-vision computation, and ensures the reliability of the depth information.
In one implementation, the detection model identifies features more effectively once the image quality has been improved by preprocessing and enhancement, which increases detection speed, reduces the need for complex models or repeated detection, and saves resources.
In one implementation, preprocessing consists of basic operations such as format conversion and normalization, and image enhancement applies Enhancement-Net to perform low-light enhancement on the left image and the right image respectively, improving the illumination information and details in the images and generating enhanced left and right images; this reduces the loss of detail caused by low light and provides a clearer visual basis for the subsequent face region detection.
In one implementation, the preset face detection model may be a face detection model such as RetinaFace or MTCNN.
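As a rough illustration of this step, the following Python sketch crops the face region from one camera image. CLAHE stands in for the Enhancement-Net low-light enhancement and OpenCV's Haar cascade stands in for the preset face detection model (RetinaFace, MTCNN, etc.); both substitutions and all parameter values are assumptions made for the sketch only.

import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_crop_face(image_bgr):
    """Return the cropped face region of one camera image, or None if no face is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)            # preprocessing
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)                                  # low-light enhancement stand-in
    boxes = _face_cascade.detectMultiScale(enhanced, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])            # keep the largest detection frame
    return image_bgr[y:y + h, x:x + w]                            # cut out the target region

# left_face  = detect_and_crop_face(left_image)
# right_face = detect_and_crop_face(right_image)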
In one embodiment, obtaining the target disparity map from the left face image and the right face image includes:
Performing key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
matching the key points in the first key point set and the second key point set to obtain a key point combination set, and aiming at each key point combination, obtaining a key point combination parallax value through a semi-global block matching algorithm;
And substituting the left face image and the right face image into a real-time stereo matching network to obtain a global parallax image, and correcting the global parallax image according to the combined parallax values of all the key points to obtain a target parallax image.
In one implementation, the key point matching algorithm accurately finds corresponding feature points in the left and right images, and correcting the depth information with the disparity values of the key point combinations effectively reduces errors caused by factors such as illumination and noise. The semi-global block matching algorithm balances local and global information when computing the key point disparities, which reduces matching errors caused by local occlusion, blur and similar problems and increases the accuracy of stereo matching.
In one implementation, using the key point matching result to correct the global disparity map greatly improves the accuracy of the disparity map without sacrificing overall efficiency, and at the same time reduces the dependence on a complex stereo matching network. Correcting the global disparity map with the disparity values of the key point combinations calibrates the disparity produced by the real-time stereo matching network, reduces mismatched regions caused by network errors, and finally yields a more consistent target disparity map.
In one implementation, the key point detection algorithm may be an Adaboost algorithm based on Haar features, OpenFace, Dlib, SURF or the like. Matching the key points in the first key point set and the second key point set to obtain the key point combination set specifically works as follows: for each key point, a feature descriptor algorithm such as SIFT or ORB generates a descriptor representing the local information of the point; the most similar key point pairs are then found between the key point set of the left image and the key point set of the right image by a KNN matching algorithm, the Euclidean distance between two descriptors is computed, and the pair with the smallest Euclidean distance is taken as a matched key point pair.
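A minimal sketch of this matching step is given below, assuming SIFT descriptors and brute-force KNN matching by Euclidean distance (any of the detectors named above could be substituted); the Lowe ratio test and the row-distance sort are added robustness steps that the text does not mention.

import cv2

def match_keypoints(left_face_gray, right_face_gray, max_pairs=50):
    """Return matched (left_point, right_point) pairs between the two face images."""
    sift = cv2.SIFT_create()
    kp_l, desc_l = sift.detectAndCompute(left_face_gray, None)   # first key point set
    kp_r, desc_r = sift.detectAndCompute(right_face_gray, None)  # second key point set
    if desc_l is None or desc_r is None:
        return []

    matcher = cv2.BFMatcher(cv2.NORM_L2)                # Euclidean distance between descriptors
    pairs = []
    for knn in matcher.knnMatch(desc_l, desc_r, k=2):
        if len(knn) < 2:
            continue
        m, n = knn
        if m.distance < 0.75 * n.distance:              # keep only clearly-best matches
            pairs.append((kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt))
    pairs.sort(key=lambda p: abs(p[0][1] - p[1][1]))    # prefer pairs lying on similar image rows
    return pairs[:max_pairs]                            # the key point combination set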
In one implementation, the semi-global block matching algorithm may be SGM, Multi-level SGM, Adaptive SGM, Fast SGM or the like, and the real-time stereo matching network may be StereoNet, PSMNet, DeepPruner, Stereo-UNet or the like.
In one implementation, correcting the global disparity map according to the disparity values of all the key point combinations to obtain the target disparity map works as follows: the disparity value of a key point combination is taken as a first value, the disparity value at the corresponding position in the global disparity map is taken as a second value, and the difference between the two is computed to obtain a correction value; the pixel distance between key point combinations is obtained, and within that distance the disparity values are updated gradually, using a preset step size as the gradient. For example, if the pixel distance between key points A and B is 100 pixels, the preset step size is 20 pixels and the correction value is 1, then on the basis of the existing disparity values, pixels 1 to 20 from A towards B are increased by 0.2, pixels 21 to 40 by 0.4, pixels 41 to 60 by 0.6, pixels 61 to 80 by 0.8, and pixels 81 to 100 by 1.0.
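Under that interpretation, the step-wise correction between one pair of matched key points could be sketched as follows; the straight-line path between the two points and the rounding at step boundaries are assumptions made for the sketch.

import numpy as np

def correct_between_keypoints(disparity, pt_a, pt_b, correction, step_px=20):
    """Gradually add `correction` to the disparity map along the segment from pt_a to pt_b."""
    ax, ay = pt_a
    bx, by = pt_b
    dist = int(round(np.hypot(bx - ax, by - ay)))   # pixel distance from A to B
    if dist == 0:
        return disparity
    n_steps = max(1, int(np.ceil(dist / step_px)))
    for i in range(dist + 1):
        step_idx = min(n_steps, i // step_px + 1)   # which step-size band this pixel falls in
        delta = correction * step_idx / n_steps     # 0.2, 0.4, ... up to the full correction
        x = int(round(ax + (bx - ax) * i / dist))
        y = int(round(ay + (by - ay) * i / dist))
        disparity[y, x] += delta
    return disparity

# Example from the text: A and B are 100 pixels apart, step size 20 px, correction 1:
# pixels 1-20 get +0.2, 21-40 get +0.4, 41-60 get +0.6, 61-80 get +0.8, 81-100 get +1.0.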
In one embodiment, extracting features from the left face image and the right face image to obtain the target features includes:
for the left face image and the right face image, dividing the left face image and the right face image through a target grid to obtain a left face square block set and a right face square block set;
For each square in the left face square set and the right face square set, extracting features of the square to obtain square features;
Generating a left face histogram according to all the square features in the left face square set, and generating a right face histogram according to all the square features in the right face square set;
And splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
In one implementation, the face image is divided by a grid and features are extracted per block, so that each block represents local information of the image. Local features capture image details such as texture and shape and represent the different parts of the face more accurately. Generating histograms for the left and right faces and then splicing and fusing them effectively combines the local features into target features that are more comprehensive and more discriminative. Dividing the left and right face images into blocks and extracting features from each block also improves the accuracy of face feature matching: a traditional whole-image feature extraction method may ignore changes in local regions of the image, whereas after segmentation the features can be optimized in a targeted way, which reduces mismatches.
In one implementation, splicing the histograms of the left face and the right face better fuses visual information from the left and right viewpoints. Since the left and right images generally contain different viewpoint information, feature fusion integrates the advantages of both and improves the comprehensiveness and expressive power of the features; the target features not only describe an individual's facial characteristics more accurately but also fuse detailed information from different viewpoints.
In one implementation, target grid segmentation and local feature extraction let the system adapt more flexibly to complex environments. In dynamic scenes, different regions of an image may have different background, illumination or interference factors; extracting features from local blocks reduces such interference, improves the recognition of objects or faces, and further increases the stability and speed of recognition.
In one implementation, the size of the target grid is determined by a technician and is generally a 36-by-36-pixel grid. Extracting features from a block to obtain the block features specifically means applying an LBP operator to each image block to extract the local texture features of that small block.
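To make the grid/LBP step concrete, here is a small sketch that reads "36 by 36" as 36x36-pixel blocks; the LBP neighbourhood, the 256 histogram bins and the use of scikit-image are assumptions of the sketch, not choices fixed by the text.

import numpy as np
from skimage.feature import local_binary_pattern

def face_histogram(face_gray, block=36, n_bins=256):
    """Concatenate per-block LBP histograms over a 36x36-pixel grid of one face image."""
    h, w = face_gray.shape
    lbp = local_binary_pattern(face_gray, P=8, R=1, method="default")
    block_hists = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            cell = lbp[y:y + block, x:x + block]                # one grid block
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            block_hists.append(hist.astype(np.float32))
    return np.concatenate(block_hists)                           # left or right face histogram

# target_histogram = np.concatenate([face_histogram(left_gray), face_histogram(right_gray)])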
In one implementation, since the left and right images are captured from two different angles and contain different viewpoint information, local texture features should be extracted from the left and right images separately and then combined into a complete feature vector. Specifically, the block features of the left image and the right image are spliced row by row: for example, the first row of the first block feature in the left image and the first row of the corresponding block feature in the right image are arranged in sequence into a longer block feature, and the remaining rows are spliced in turn to obtain the target histogram. Combining the feature information of the two viewpoints improves recognition accuracy, especially under different viewing angles and postures.
In one implementation, substituting the left face histogram, the right face histogram and the target histogram into the preset feature fusion model to obtain the target features works as follows: scale features are extracted from the left face histogram, the right face histogram and the target histogram at a target scale to obtain a first left face scale map, a first right face scale map and a first target face scale map, all of the same scale; the first target face scale map is superimposed onto the first left face scale map and the first right face scale map respectively to obtain a first left face fusion map and a first right face fusion map; scale features are then extracted from the first left face fusion map, the first right face fusion map and the target histogram at a second target scale to obtain a second left face scale map, a second right face scale map and a second target face scale map; the same steps yield the third and subsequent scale maps, until feature extraction at the final target scale produces a final left face scale map and a final right face scale map; the final left face scale map and the final right face scale map are superimposed to obtain a final map, which is average-pooled and passed through a sigmoid function to obtain the target features.
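The preset feature fusion model is not specified beyond this pattern, so the following PyTorch sketch is speculative: the number of scales, kernel sizes and channel width are all assumed, and only the overall pattern (per-scale feature extraction, superposition with the target branch, repetition, average pooling, sigmoid) follows the text.

import torch
import torch.nn as nn

class HistogramFusion(nn.Module):
    def __init__(self, channels=16, scales=(9, 5, 3)):
        super().__init__()
        in_ch = [1] + [channels] * (len(scales) - 1)
        self.convs = nn.ModuleList(
            nn.Conv1d(c, channels, kernel_size=k, padding=k // 2)
            for c, k in zip(in_ch, scales))

    def forward(self, left_hist, right_hist, target_hist):
        # each input has shape (batch, 1, histogram_length)
        l, r, t = left_hist, right_hist, target_hist
        for conv in self.convs:
            l, r, t = conv(l), conv(r), conv(t)   # scale maps at this scale
            l, r = l + t, r + t                   # superimpose the target branch
        fused = l + r                             # combine the final left/right scale maps
        pooled = fused.mean(dim=-1)               # average pooling over the length axis
        return torch.sigmoid(pooled)              # target feature vector

# feats = HistogramFusion()(left.view(1, 1, -1), right.view(1, 1, -1), target.view(1, 1, -1))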
In one embodiment, the method further comprises:
when receiving an object storage instruction, starting a binocular camera to acquire images in a target area to obtain a first left image and a first right image, and acquiring object information;
The face region detection is carried out on the first left image and the first right image respectively to obtain a first target acquisition region and a second target acquisition region, the first left image is cut according to the first target acquisition region to obtain a left face acquisition image, and the first right image is cut according to the second target acquisition region to obtain a right face acquisition image;
substituting the left face acquisition image and the right face acquisition image into a preset model to obtain target characteristics, and searching the express cabinet according to the article information to obtain the number of the express cabinet;
And binding the number of the express cabinet and the target characteristic, and storing the bound number and target characteristic into a preset database.
In one implementation, binding an item's features to the express cabinet number and storing them in the preset database creates a tracking record for every deposited item in the system. A manager can easily look up exactly where an item is stored, which improves the efficiency of item management. The item information is information entered by the user, such as the size and weight of the item.
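As an illustration of the binding and look-up described here, the sketch below stores (cabinet number, face feature, item information) in an SQLite table at deposit time and retrieves the best-matching cabinet at pick-up time; the table layout, the cosine-similarity matching and the threshold are assumptions, not details given in the patent.

import sqlite3
import numpy as np

conn = sqlite3.connect("lockers.db")
conn.execute("""CREATE TABLE IF NOT EXISTS deposits (
                   cabinet_no TEXT PRIMARY KEY,
                   feature    BLOB,
                   item_info  TEXT)""")

def bind_cabinet(cabinet_no, target_feature, item_info):
    """Deposit time: bind the cabinet number to the visitor's face feature and item info."""
    conn.execute("INSERT OR REPLACE INTO deposits VALUES (?, ?, ?)",
                 (cabinet_no, np.asarray(target_feature, np.float32).tobytes(), item_info))
    conn.commit()

def find_cabinet(query_feature, threshold=0.9):
    """Pick-up time: return the cabinet whose stored feature best matches, or None."""
    query = np.asarray(query_feature, np.float32)
    best_no, best_sim = None, threshold
    for cabinet_no, blob, _ in conn.execute("SELECT * FROM deposits"):
        stored = np.frombuffer(blob, dtype=np.float32)
        sim = float(np.dot(stored, query) /
                    (np.linalg.norm(stored) * np.linalg.norm(query) + 1e-9))
        if sim > best_sim:
            best_no, best_sim = cabinet_no, sim
    return best_no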
The embodiment of the invention also provides an informationized registering management device for tourist attractions based on the same inventive concept. Referring to fig. 2, fig. 2 is a schematic structural diagram of an informationized register management device for tourist attractions, provided in an embodiment of the present invention, including:
The object taking image acquisition module is used for starting the binocular camera to acquire images in the target area when an object taking instruction is received so as to obtain a left image and a right image;
The face image acquisition module is used for respectively carrying out face region detection on the left image and the right image to obtain a first target region and a second target region, cutting the left image according to the first target region to obtain a left face image, and cutting the right image according to the second target region to obtain a right face image;
The depth change determining module is used for obtaining a target parallax image according to the left face image and the right face image, carrying out depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
the target feature determining module is used for extracting features of the left face image and the right face image to obtain target features if the depth change is larger than the preset change;
The storage cabinet number searching module is used for searching a preset database according to the target characteristics to obtain the storage cabinet number, and opening the corresponding storage cabinet according to the storage cabinet number.
According to the informationized deposit management device for tourist attractions provided by the embodiment of the invention, the face region is obtained by performing face detection on the images acquired by the binocular camera, which reduces the amount of data to be processed and increases detection speed; the depth change is then derived from the disparity map determined from the face images, so that whether the recognized object is a real person or a photo can be effectively judged, which improves security; and finally, by extracting and matching face depth features, the corresponding cabinet number is found accurately and the unlocking operation is performed, which simplifies the deposit-and-retrieval process while ensuring security.
In one embodiment, the face image acquisition module includes:
the image enhancement module is used for preprocessing the initial face image to obtain a preprocessed image and enhancing the preprocessed image to obtain a target image;
The target detection frame determining module is used for substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is a first target area if the initial face image is a left image, and the target detection frame is a second target area if the initial face image is a right image.
In one embodiment, the depth change determination module includes:
The key point identification module is used for carrying out key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
the key point matching module is used for matching key points in the first key point set and the second key point set to obtain a key point combination set, and aiming at each key point combination, the key point combination parallax value is obtained through a semi-global block matching algorithm;
and the target parallax map determining module is used for substituting the left face image and the right face image into the real-time stereo matching network to obtain a global parallax map, and correcting the global parallax map according to the combined parallax values of all the key points to obtain the target parallax map.
In one embodiment, the target feature determination module comprises:
The image segmentation module is used for segmenting the left face image and the right face image through the target grid aiming at the left face image and the right face image to obtain a left face square block set and a right face square block set;
The block feature extraction module is used for extracting features of each block in the left face block set and the right face block set to obtain block features;
the histogram generation module is used for generating a left face histogram according to all the square features in the left face square set and generating a right face histogram according to all the square features in the right face square set;
the histogram feature fusion module is used for splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
In one embodiment, the method further comprises:
The object storage image acquisition module is used for starting the binocular camera to acquire images in the target area when an object storage instruction is received to obtain a first left image and a first right image, and acquiring object information;
The second face image acquisition module is used for respectively carrying out face region detection on the first left image and the first right image to obtain a first target acquisition region and a second target acquisition region, cutting the first left image according to the first target acquisition region to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition region to obtain a right face acquisition image;
the express cabinet number generation module is used for substituting the left face acquisition image and the right face acquisition image into a preset model to obtain target characteristics, and searching the express cabinet according to the article information to obtain the express cabinet number;
and the data storage module is used for binding the number of the express cabinet and the target characteristic and then storing the number of the express cabinet and the target characteristic into a preset database.
The foregoing describes one embodiment of the present invention in detail, but the disclosure is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.

Claims (9)

1.一种旅游景区用信息化寄存管理方法,其特征在于,所述方法包括:1. A method for information storage management for tourist attractions, characterized in that the method comprises: 当接收到取物指令时开启双目摄像头采集目标区域内的图像得到左图像和右图像;When receiving the object-picking command, the binocular camera is turned on to collect images in the target area to obtain left and right images; 分别对所述左图像和所述右图像进行人脸区域检测得到第一目标区域和第二目标区域,根据所述第一目标区域对所述左图像进行切割得到左人脸图像,根据所述第二目标区域对所述右图像进行切割得到右人脸图像;Performing face region detection on the left image and the right image respectively to obtain a first target region and a second target region, cutting the left image according to the first target region to obtain a left face image, and cutting the right image according to the second target region to obtain a right face image; 根据所述左人脸图像和所述右人脸图像得到目标视差图,对所述目标视差图进行深度变化处理得到目标深度图像,提取所述目标深度图像的深度变化;Obtaining a target disparity map according to the left face image and the right face image, performing depth change processing on the target disparity map to obtain a target depth image, and extracting a depth change of the target depth image; 若所述深度变化大于预设变化,则对所述左人脸图像和所述右人脸图像进行特征提取得到目标特征;If the depth change is greater than a preset change, extracting features from the left face image and the right face image to obtain target features; 根据所述目标特征查找预设数据库得到储柜编号,根据所述储柜编号开启对应存储柜;Searching a preset database to obtain a storage cabinet number according to the target feature, and opening a corresponding storage cabinet according to the storage cabinet number; 对所述左人脸图像和所述右人脸图像进行特征提取得到目标特征包括:Extracting features from the left face image and the right face image to obtain target features includes: 针对所述左人脸图像和所述右人脸图像,通过目标网格对所述左人脸图像和所述右人脸图像进行分割得到左人脸方块集和右人脸方块集;For the left face image and the right face image, segmenting the left face image and the right face image through a target grid to obtain a left face block set and a right face block set; 针对所述左人脸方块集和所述右人脸方块集中的每一方块,对该方块进行特征提取得到方块特征;For each block in the left face block set and the right face block set, extracting features of the block to obtain block features; 根据所述左人脸方块集中的所有方块特征生成左人脸直方图,根据所述右人脸方块集中的所有方块特征生成右人脸直方图;Generate a left face histogram according to all block features in the left face block set, and generate a right face histogram according to all block features in the right face block set; 对所述左人脸直方图和所述右人脸直方图进行拼接得到目标直方图,将所述左人脸直方图、所述右人脸直方图和所述目标直方图代入预设特征融合模型得到目标特征;The left face histogram and the right face histogram are spliced to obtain a target histogram, and the left face histogram, the right face histogram and the target histogram are substituted into a preset feature fusion model to obtain a target feature; 目标网格大小为36乘36的像素网格,对该方块进行特征提取得到方块特征具体为:对每个图像方块应用LBP算子,提取每个小块的局部纹理特征,通过比较每个像素与其邻域像素的灰度值,计算出一个二进制数,再将二进制转换为十进制得到像素点对应特征值,获取所有像素点对应特征值得到方块特征;The target grid size is a 36 by 36 pixel grid. The feature extraction of the block is as follows: the LBP operator is applied to each image block to extract the local texture features of each small block. A binary number is calculated by comparing the grayscale value of each pixel with its neighboring pixels. The binary number is then converted to decimal to obtain the corresponding feature value of the pixel point. The feature values corresponding to all pixels are obtained to obtain the block features. 
将左人脸直方图、右人脸直方图和目标直方图代入预设特征融合模型得到目标特征具体为:以目标尺度分别对左人脸直方图、右人脸直方图和目标直方图进行尺度特征提取,得到第一左人脸尺度图、第一右人脸尺度图和第一目标人脸尺度图,其中第一左人脸尺度图、第一右人脸尺度图和第一目标人脸尺度图的尺度相同,以第一目标人脸尺度图分别对第一左人脸尺度图、第一右人脸尺度图进行叠加得到第一左人脸融合图,第一右人脸融合图,以第二目标尺度分别对第一左人脸融合图、第一右人脸融合图和目标直方图进行尺度特征提取,得到第二左人脸尺度图、第二右人脸尺度图和第二目标人脸尺度图,然后根据相同步骤得到第三左人脸尺度图、第三右人脸尺度图,直到以最终目标尺度进行特征提取得到最终左人脸尺度图和最终右人脸尺度图,对最终左人脸尺度图和最终右人脸尺度图进行叠加后得到最终图,对最终图进行平均池化后通过sigmoid函数得到目标特征。Substituting the left face histogram, the right face histogram and the target histogram into the preset feature fusion model to obtain the target feature is specifically as follows: extracting scale features of the left face histogram, the right face histogram and the target histogram respectively at the target scale to obtain a first left face scale map, a first right face scale map and a first target face scale map, wherein the scales of the first left face scale map, the first right face scale map and the first target face scale map are the same, and superimposing the first left face scale map and the first right face scale map respectively with the first target face scale map to obtain a first left face fusion map and a first right face fusion map. , scale features are extracted from the first left face fusion image, the first right face fusion image and the target histogram at the second target scale to obtain the second left face scale map, the second right face scale map and the second target face scale map, and then the third left face scale map and the third right face scale map are obtained according to the same steps, until the final target scale is used for feature extraction to obtain the final left face scale map and the final right face scale map, the final left face scale map and the final right face scale map are superimposed to obtain the final map, and the final map is average pooled and the target feature is obtained through the sigmoid function. 2.根据权利要求1所述的一种旅游景区用信息化寄存管理方法,其特征在于,分别对所述左图像和所述右图像进行人脸区域检测得到第一目标区域和第二目标区域包括:2. The information storage management method for tourist attractions according to claim 1, characterized in that the facial area detection is performed on the left image and the right image to obtain the first target area and the second target area respectively, comprising: 针对初始人脸图像,对所述初始人脸图像进行预处理得到预处理图像,对所述预处理图像进行图像增强得到目标图像;For an initial face image, preprocessing the initial face image to obtain a preprocessed image, and performing image enhancement on the preprocessed image to obtain a target image; 将所述目标图像代入预设人脸检测模型得到目标检测框;若所述初始人脸图像为所述左图像,则所述目标检测框为所述第一目标区域;若所述初始人脸图像为所述右图像,则所述目标检测框为所述第二目标区域。Substitute the target image into a preset face detection model to obtain a target detection frame; if the initial face image is the left image, the target detection frame is the first target area; if the initial face image is the right image, the target detection frame is the second target area. 3.根据权利要求1所述的一种旅游景区用信息化寄存管理方法,其特征在于,根据所述左人脸图像和所述右人脸图像得到目标视差图包括:3. 
The information storage management method for tourist attractions according to claim 1, characterized in that obtaining a target disparity map according to the left face image and the right face image comprises: 通过关键点检测算法分别对所述左人脸图像和所述右人脸图像进行关键点识别得到第一关键点集和第二关键点集;Using a key point detection algorithm, key point recognition is performed on the left face image and the right face image to obtain a first key point set and a second key point set; 对所述第一关键点集和所述第二关键点集内的关键点进行匹配得到关键点组合集,针对每一关键点组合,通过半全局块匹配算法得到该关键点组合视差值;Matching the key points in the first key point set and the second key point set to obtain a key point combination set, and for each key point combination, obtaining a disparity value of the key point combination by a semi-global block matching algorithm; 分别将所述左人脸图像和所述右人脸图像代入实时立体匹配网络得到全局视差图,根据所有关键点组合视差值对所述全局视差图进行矫正得到目标视差图。The left face image and the right face image are respectively substituted into a real-time stereo matching network to obtain a global disparity map, and the global disparity map is corrected according to the combined disparity values of all key points to obtain a target disparity map. 4.根据权利要求1所述的一种旅游景区用信息化寄存管理方法,其特征在于,所述方法还包括:4. The information storage management method for tourist attractions according to claim 1, characterized in that the method further comprises: 当接收到存物指令时开启双目摄像头采集目标区域内的图像得到第一左图像和第一右图像,并获取物品信息;When receiving a storage instruction, the binocular camera is turned on to collect images in the target area to obtain a first left image and a first right image, and obtain item information; 分别对所述第一左图像和所述第一右图像进行人脸区域检测得到第一目标采集区域和第二目标采集区域,根据所述第一目标采集区域对所述第一左图像进行切割得到左人脸采集图像,根据所述第二目标采集区域对所述第一右图像进行切割得到右人脸采集图像;Performing face region detection on the first left image and the first right image respectively to obtain a first target acquisition region and a second target acquisition region, cutting the first left image according to the first target acquisition region to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition region to obtain a right face acquisition image; 将所述左人脸采集图像和所述右人脸采集图像代入预设特征融合模型得到目标特征,根据所述物品信息查找快递柜得到快递柜编号;Substituting the left face captured image and the right face captured image into a preset feature fusion model to obtain a target feature, and searching for the express cabinet according to the item information to obtain the cabinet number; 将所述快递柜编号和所述目标特征绑定后保存到所述预设数据库。The express cabinet number and the target feature are bound and saved in the preset database. 5.一种旅游景区用信息化寄存管理装置,用于实现权利要求1-4任一所述的一种旅游景区用信息化寄存管理方法,其特征在于,所述装置包括:5. 
An information-based storage management device for a tourist attraction, used to implement an information-based storage management method for a tourist attraction as described in any one of claims 1 to 4, characterized in that the device comprises: 取物图像采集模块,用于当接收到取物指令时开启双目摄像头采集目标区域内的图像得到左图像和右图像;The object-picking image acquisition module is used to start the binocular camera to collect images in the target area to obtain left and right images when receiving the object-picking instruction; 人脸图像获取模块,用于分别对所述左图像和所述右图像进行人脸区域检测得到第一目标区域和第二目标区域,根据所述第一目标区域对所述左图像进行切割得到左人脸图像,根据所述第二目标区域对所述右图像进行切割得到右人脸图像;A face image acquisition module, used to perform face region detection on the left image and the right image to obtain a first target region and a second target region, cut the left image according to the first target region to obtain a left face image, and cut the right image according to the second target region to obtain a right face image; 深度变化确定模块,用于根据所述左人脸图像和所述右人脸图像得到目标视差图,对所述目标视差图进行深度变化处理得到目标深度图像,提取所述目标深度图像的深度变化;A depth change determination module, configured to obtain a target disparity map according to the left face image and the right face image, perform depth change processing on the target disparity map to obtain a target depth image, and extract a depth change of the target depth image; 目标特征确定模块,用于若所述深度变化大于预设变化,则对所述左人脸图像和所述右人脸图像进行特征提取得到目标特征;A target feature determination module, configured to extract features from the left face image and the right face image to obtain target features if the depth change is greater than a preset change; 储柜编号查找模块,用于根据所述目标特征查找预设数据库得到储柜编号,根据所述储柜编号开启对应存储柜。The cabinet number search module is used to search a preset database according to the target feature to obtain the cabinet number, and open the corresponding storage cabinet according to the cabinet number. 6.根据权利要求5所述的一种旅游景区用信息化寄存管理装置,其特征在于,所述人脸图像获取模块包括:6. The information storage management device for tourist attractions according to claim 5, characterized in that the face image acquisition module comprises: 图像增强模块,用于针对初始人脸图像,对所述初始人脸图像进行预处理得到预处理图像,对所述预处理图像进行图像增强得到目标图像;An image enhancement module is used to preprocess an initial face image to obtain a preprocessed image, and to enhance the preprocessed image to obtain a target image; 目标检测框确定模块,用于将所述目标图像代入预设人脸检测模型得到目标检测框;若所述初始人脸图像为所述左图像,则所述目标检测框为所述第一目标区域;若所述初始人脸图像为所述右图像,则所述目标检测框为所述第二目标区域。A target detection frame determination module is used to substitute the target image into a preset face detection model to obtain a target detection frame; if the initial face image is the left image, the target detection frame is the first target area; if the initial face image is the right image, the target detection frame is the second target area. 7.根据权利要求5所述的一种旅游景区用信息化寄存管理装置,其特征在于,所述深度变化确定模块包括:7. 
7. The information-based storage management device for a tourist attraction according to claim 5, characterized in that the depth change determination module comprises:

a key point recognition module, configured to perform key point recognition on the left face image and the right face image respectively by a key point detection algorithm to obtain a first key point set and a second key point set;

a key point matching module, configured to match the key points in the first key point set and the second key point set to obtain a key point combination set, and, for each key point combination, obtain a disparity value of the key point combination by a semi-global block matching algorithm;

a target disparity map determination module, configured to substitute the left face image and the right face image respectively into a real-time stereo matching network to obtain a global disparity map, and correct the global disparity map according to the disparity values of all key point combinations to obtain the target disparity map.

8. The information-based storage management device for a tourist attraction according to claim 5, characterized in that the target feature determination module comprises:

an image segmentation module, configured to segment the left face image and the right face image through a target grid to obtain a left face block set and a right face block set;

a block feature extraction module, configured to, for each block in the left face block set and the right face block set, perform feature extraction on the block to obtain block features;

a histogram generation module, configured to generate a left face histogram according to all block features in the left face block set, and generate a right face histogram according to all block features in the right face block set;

a histogram feature fusion module, configured to splice the left face histogram and the right face histogram to obtain a target histogram, and substitute the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
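A minimal sketch of the claim-8 block-histogram features. The 8x8 target grid, the uniform LBP block descriptor and the 10-bin histograms are illustrative assumptions; the claim only fixes the sequence of grid segmentation, per-block feature extraction, per-image histogram generation and splicing, and leaves the concrete descriptor and the preset feature fusion model unspecified.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def face_histogram(face_gray: np.ndarray, grid: int = 8, bins: int = 10) -> np.ndarray:
    # Per-pixel uniform LBP codes (values 0..9 for P=8), used as the block feature.
    lbp = local_binary_pattern(face_gray, P=8, R=1, method="uniform")
    h, w = lbp.shape
    block_hists = []
    for i in range(grid):
        for j in range(grid):
            # Segment the image with the target grid and describe each block.
            block = lbp[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins), density=True)
            block_hists.append(hist)
    # The face histogram is the concatenation of all block histograms.
    return np.concatenate(block_hists)

def target_histogram(left_face: np.ndarray, right_face: np.ndarray) -> np.ndarray:
    left_hist = face_histogram(left_face)
    right_hist = face_histogram(right_face)
    # Splicing the two histograms gives the target histogram; the three vectors
    # would then be passed to the preset feature fusion model.
    return np.concatenate([left_hist, right_hist])
```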
9. The information-based storage management device for a tourist attraction according to claim 5, characterized in that the device further comprises:

a storage image acquisition module, configured to, when a storage instruction is received, turn on the binocular camera to collect images in the target area to obtain a first left image and a first right image, and acquire item information;

a second face image acquisition module, configured to perform face region detection on the first left image and the first right image respectively to obtain a first target acquisition region and a second target acquisition region, cut the first left image according to the first target acquisition region to obtain a left face acquisition image, and cut the first right image according to the second target acquisition region to obtain a right face acquisition image;

an express cabinet number generation module, configured to substitute the left face acquisition image and the right face acquisition image into a preset feature fusion model to obtain a target feature, and search for an express cabinet according to the item information to obtain an express cabinet number;

a data saving module, configured to bind the express cabinet number to the target feature and save them to the preset database.
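A minimal sketch of the deposit/pickup pairing implied by claims 4, 5 and 9: at deposit time the express cabinet number is bound to the target feature, and at pickup time the preset database is searched for the closest stored feature. The cosine-similarity matching and the 0.8 acceptance threshold are assumptions; the patent does not specify the metric used for the database search.

```python
import numpy as np
from typing import List, Optional, Tuple

class CabinetDatabase:
    def __init__(self) -> None:
        self._records: List[Tuple[str, np.ndarray]] = []

    def bind(self, cabinet_number: str, target_feature: np.ndarray) -> None:
        # Deposit: store the (cabinet number, normalised target feature) pair.
        self._records.append(
            (cabinet_number, target_feature / np.linalg.norm(target_feature)))

    def lookup(self, target_feature: np.ndarray, threshold: float = 0.8) -> Optional[str]:
        # Pickup: return the cabinet number of the most similar stored feature,
        # or None if no stored feature clears the acceptance threshold.
        probe = target_feature / np.linalg.norm(target_feature)
        best_number, best_score = None, threshold
        for number, stored in self._records:
            score = float(np.dot(probe, stored))
            if score > best_score:
                best_number, best_score = number, score
        return best_number
```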
CN202411648676.1A 2024-11-19 2024-11-19 Informationized registering management device and method for tourist attraction Active CN119580392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411648676.1A CN119580392B (en) 2024-11-19 2024-11-19 Informationized registering management device and method for tourist attraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411648676.1A CN119580392B (en) 2024-11-19 2024-11-19 Informationized registering management device and method for tourist attraction

Publications (2)

Publication Number Publication Date
CN119580392A CN119580392A (en) 2025-03-07
CN119580392B true CN119580392B (en) 2025-07-18

Family

ID=94800472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411648676.1A Active CN119580392B (en) 2024-11-19 2024-11-19 Informationized registering management device and method for tourist attraction

Country Status (1)

Country Link
CN (1) CN119580392B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910222A (en) * 2017-02-15 2017-06-30 中国科学院半导体研究所 Face three-dimensional rebuilding method based on binocular stereo vision
CN207302222U (en) * 2017-05-25 2018-05-01 新石器龙码(北京)科技有限公司 A kind of cabinet based on binocular stereo vision identification face
CN114663951A (en) * 2022-03-28 2022-06-24 深圳市赛为智能股份有限公司 Low illumination face detection method, device, computer equipment and storage medium
CN115909446A (en) * 2022-11-14 2023-04-04 华南理工大学 Method, device and storage medium for binocular face liveness discrimination
CN116363718A (en) * 2021-12-24 2023-06-30 北京达佳互联信息技术有限公司 Face recognition method and device, electronic equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8121400B2 (en) * 2009-09-24 2012-02-21 Huper Laboratories Co., Ltd. Method of comparing similarity of 3D visual objects
KR101547281B1 (en) * 2013-04-09 2015-08-26 (주)쉬프트플러스 Method and apparatus for generating multiview 3d image signal
CN106897675B (en) * 2017-01-24 2021-08-17 上海交通大学 A face detection method based on the combination of binocular visual depth feature and apparent feature
CN108764091B (en) * 2018-05-18 2020-11-17 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN109886197A (en) * 2019-02-21 2019-06-14 北京超维度计算科技有限公司 A kind of recognition of face binocular three-dimensional camera
CN213634692U (en) * 2020-09-18 2021-07-06 苏州睿知慧识智能科技有限公司 Face recognition intelligent cabinet
CN112052831B (en) * 2020-09-25 2023-08-08 北京百度网讯科技有限公司 Method, device and computer storage medium for face detection
CN112347904B (en) * 2020-11-04 2023-08-01 杭州锐颖科技有限公司 Living body detection method, device and medium based on binocular depth and picture structure
CN113299014A (en) * 2021-05-21 2021-08-24 中国工商银行股份有限公司 Intelligent cabinet
CN116343291A (en) * 2023-02-27 2023-06-27 河南中光学集团有限公司 Intelligent aiming method for fusing 2D and 3D faces
CN116524606A (en) * 2023-03-27 2023-08-01 深圳先进技术研究院 Face living body recognition method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN119580392A (en) 2025-03-07

Similar Documents

Publication Publication Date Title
KR102324706B1 (en) Face recognition unlock method and device, device, medium
CN109711243B (en) A Deep Learning-Based Static 3D Face Liveness Detection Method
CN107093171B (en) Image processing method, device and system
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
TW202006602A (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
US9305206B2 (en) Method for enhancing depth maps
EP3642756B1 (en) Detecting artificial facial images using facial landmarks
KR101781358B1 (en) Personal Identification System And Method By Face Recognition In Digital Image
CN105243376A (en) Living body detection method and device
CN112528902B (en) Video monitoring dynamic face recognition method and device based on 3D face model
CN108573231B (en) Human body behavior identification method of depth motion map generated based on motion history point cloud
US10915739B2 (en) Face recognition device, face recognition method, and computer readable storage medium
CN107944416A (en) A kind of method that true man's verification is carried out by video
CN107609515B (en) A dual verification face comparison system and method based on Feiteng platform
Ji et al. LFHOG: A discriminative descriptor for live face detection from light field image
CN114299569A (en) Safe face authentication method based on eyeball motion
CN111767879A (en) Living body detection method
CN112686191A (en) Living body anti-counterfeiting method, system, terminal and medium based on face three-dimensional information
CN111767839B (en) Vehicle driving track determining method, device, equipment and medium
CN116524606A (en) Face living body recognition method, device, electronic equipment and storage medium
CN120124032A (en) Intelligent lock unlocking method and system based on face image processing
US7653219B2 (en) System and method for image attribute recording an analysis for biometric applications
CN119580392B (en) Informationized registering management device and method for tourist attraction
CN112668370A (en) Biological feature living body identification detection method and device based on depth image
CN114882551B (en) A face recognition processing method, device and equipment based on machine dimension

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant