CN119580392B - Informationized registering management device and method for tourist attraction - Google Patents
Informationized registering management device and method for tourist attraction
- Publication number
- CN119580392B (application CN202411648676.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- face
- histogram
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F9/00—Details other than those peculiar to special kinds or types of apparatus
- G07F9/009—User recognition or proximity detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Signal Processing (AREA)
- Geometry (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an informationized deposit management device and method for tourist attractions, relating to the technical field of deposit management. The method comprises: collecting a left image and a right image through a binocular camera; detecting the face region in each image to obtain a left face image and a right face image; obtaining a target parallax map from the left and right face images; determining the depth change from the target parallax map; if the depth change is larger than a preset change, extracting features from the left and right face images to obtain target features; and opening the corresponding storage cabinet according to the target features. Because the face region is first located in the images acquired by the binocular camera, and the depth change is then derived from the parallax map determined from the face images, whether the recognized object is a real person or a photo can be effectively judged, which improves safety. Finally, by extracting and matching face depth features, the corresponding cabinet number is accurately found and the unlocking operation is performed, simplifying the deposit-and-retrieval process while ensuring safety.
Description
Technical Field
The invention belongs to the technical field of deposit management, and particularly relates to an informationized deposit management device and method for tourist attractions.
Background
The conventional tourist attraction locker has many defects in practical application and can hardly meet the needs of current tourists and attraction management. The identity verification of a conventional locker usually relies on a simple password, certificate, or card swipe; in the densely crowded environment of an attraction, these methods are neither safe nor reliable enough. A tourist may be unable to retrieve an item after losing the certificate or forgetting the password; moreover, a password is easily peeped at or tampered with by others, and once a certificate is lost or a password is leaked, the deposited items may be mistakenly taken or stolen, so the personal property safety of tourists cannot be fully ensured.
To improve convenience and safety, many scenic spots have attempted to introduce face recognition technology in place of the traditional identity verification, performing deposit-and-retrieval verification through the face. However, face recognition also faces new problems in practical application, especially the risk of photo attack: a simple face recognition system is easily deceived by a static photo, a video, or a forged image, leading to mistaken unlocking or illegal retrieval. Thus, although face recognition simplifies the deposit-and-retrieval process to a certain extent, it also brings a potential safety hazard.
Disclosure of Invention
The invention aims to solve the problems and provides an informationized deposit management device and method for tourist attractions.
In a first aspect of the present invention, there is provided a method for information-based deposit management for tourist attractions, the method comprising:
When receiving an object taking instruction, starting a binocular camera to acquire images in a target area to obtain a left image and a right image;
detecting face areas of the left image and the right image to obtain a first target area and a second target area, cutting the left image according to the first target area to obtain a left face image, and cutting the right image according to the second target area to obtain a right face image;
Obtaining a target parallax image according to the left face image and the right face image, performing depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
If the depth change is larger than the preset change, extracting features of the left face image and the right face image to obtain target features;
and searching a preset database according to the target characteristics to obtain a storage cabinet number, and starting a corresponding storage cabinet according to the storage cabinet number.
Optionally, the detecting the face regions of the left image and the right image to obtain the first target region and the second target region includes:
Preprocessing an initial face image to obtain a preprocessed image, and performing image enhancement on the preprocessed image to obtain a target image;
Substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is the first target area if the initial face image is the left image, and the target detection frame is the second target area if the initial face image is the right image.
Optionally, obtaining the target parallax map according to the left face image and the right face image includes:
Performing key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
Matching the key points in the first key point set and the second key point set to obtain a key point combination set, and obtaining a key point combination parallax value by a semi-global block matching algorithm for each key point combination;
And substituting the left face image and the right face image into a real-time stereo matching network to obtain a global parallax image, and correcting the global parallax image according to the combined parallax values of all the key points to obtain a target parallax image.
Optionally, extracting the features of the left face image and the right face image to obtain the target features includes:
for the left face image and the right face image, dividing each image by a target grid to obtain a left face square set and a right face square set;
for each square in the left face square set and the right face square set, extracting features of the square to obtain square features;
Generating a left face histogram according to all the square features in the left face square set, and generating a right face histogram according to all the square features in the right face square set;
and splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
Optionally, the method further comprises:
when receiving an object storage instruction, starting a binocular camera to acquire images in a target area to obtain a first left image and a first right image, and acquiring object information;
detecting face areas of the first left image and the first right image to obtain a first target acquisition area and a second target acquisition area, cutting the first left image according to the first target acquisition area to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition area to obtain a right face acquisition image;
Substituting the left face acquisition image and the right face acquisition image into the preset model to obtain target features, and searching for an available storage cabinet according to the article information to obtain a storage cabinet number;
and binding the storage cabinet number with the target features and then storing them into the preset database.
In a second aspect of the present invention, an informationized deposit management device for tourist attractions is provided, comprising:
The object taking image acquisition module is used for starting the binocular camera to acquire images in the target area when an object taking instruction is received so as to obtain a left image and a right image;
The face image acquisition module is used for respectively carrying out face region detection on the left image and the right image to obtain a first target region and a second target region, cutting the left image according to the first target region to obtain a left face image, and cutting the right image according to the second target region to obtain a right face image;
The depth change determining module is used for obtaining a target parallax image according to the left face image and the right face image, carrying out depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
the target feature determining module is used for extracting features of the left face image and the right face image to obtain target features if the depth change is larger than a preset change;
And the storage cabinet number searching module is used for searching a preset database according to the target characteristics to obtain a storage cabinet number, and opening the corresponding storage cabinet according to the storage cabinet number.
Optionally, the face image acquisition module includes:
the image enhancement module is used for preprocessing an initial face image to obtain a preprocessed image and enhancing the preprocessed image to obtain a target image;
the target detection frame determining module is used for substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is the first target area if the initial face image is the left image, and the target detection frame is the second target area if the initial face image is the right image.
Optionally, the depth change determining module includes:
The key point identification module is used for respectively carrying out key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
The key point matching module is used for matching key points in the first key point set and the second key point set to obtain a key point combination set, and aiming at each key point combination, the key point combination parallax value is obtained through a semi-global block matching algorithm;
And the target parallax map determining module is used for substituting the left face image and the right face image into a real-time stereo matching network to obtain a global parallax map, and correcting the global parallax map according to the combined parallax values of all the key points to obtain a target parallax map.
Optionally, the target feature determining module includes:
The image segmentation module is used for dividing the left face image and the right face image by a target grid to obtain a left face square set and a right face square set;
the block feature extraction module is used for extracting features of each block in the left face block set and the right face block set to obtain block features;
the histogram generation module is used for generating a left face histogram according to all the square features in the left face square set and generating a right face histogram according to all the square features in the right face square set;
And the histogram feature fusion module is used for splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
Optionally, the method further comprises:
The object storage image acquisition module is used for starting the binocular camera to acquire images in the target area when an object storage instruction is received to obtain a first left image and a first right image, and acquiring object information;
The second face image acquisition module is used for respectively carrying out face region detection on the first left image and the first right image to obtain a first target acquisition region and a second target acquisition region, cutting the first left image according to the first target acquisition region to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition region to obtain a right face acquisition image;
The storage cabinet number generation module is used for substituting the left face acquisition image and the right face acquisition image into the preset model to obtain target features, and for searching for an available storage cabinet according to the article information to obtain a storage cabinet number;
and the data storage module is used for binding the storage cabinet number with the target features and then storing them into the preset database.
The invention has the beneficial effects that:
The invention provides an informationized registering management method for tourist attractions: when an object taking instruction is received, a binocular camera is started to collect images in a target area to obtain a left image and a right image; face region detection is performed on the left image and the right image respectively to obtain a first target area and a second target area; the left image is cut according to the first target area to obtain a left face image, and the right image is cut according to the second target area to obtain a right face image; a target parallax map is obtained from the left face image and the right face image, depth change processing is performed on the target parallax map to obtain a target depth map, and the depth change of the target depth map is extracted; if the depth change is larger than a preset change, feature extraction is performed on the left face image and the right face image to obtain target features; and a preset database is searched according to the target features to obtain a storage cabinet number, and the corresponding storage cabinet is opened according to that number. Because the face region is obtained by face detection on the images acquired by the binocular camera, the amount of data to be processed is reduced and the detection speed is improved. Because the depth change is obtained from the parallax map determined from the face images, whether the recognized object is a real person or a photo can be effectively judged, improving safety. Finally, by extracting and matching face depth features, the corresponding cabinet number is accurately found and the unlocking operation is performed, simplifying the deposit-and-retrieval process while ensuring safety.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for informationized register management for tourist attractions according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an informationized register management device for tourist attractions according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings; apparently, the described embodiments are only some embodiments of the present invention, not all of them. The term "and/or" merely describes an association relation between associated objects and means that three kinds of relations may exist; for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone. Furthermore, descriptions such as "first" and "second" are for descriptive purposes only and are not to be construed as indicating or implying relative importance or an order among the indicated technical features; thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that those skilled in the art can realize them; when a combination of technical solutions is contradictory or cannot be realized, that combination should be considered absent and not within the scope of protection claimed in the present invention.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention provides an informationized registering management method for tourist attractions. Referring to fig. 1, fig. 1 is a flowchart of an informationized register management method for tourist attractions according to an embodiment of the present invention. The method comprises the following steps:
S101, when an object taking instruction is received, starting a binocular camera to acquire images in a target area to obtain a left image and a right image;
S102, face area detection is carried out on a left image and a right image respectively to obtain a first target area and a second target area, the left image is cut according to the first target area to obtain a left face image, and the right image is cut according to the second target area to obtain a right face image;
S103, obtaining a target parallax image according to the left face image and the right face image, performing depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
S104, if the depth change is larger than the preset change, extracting the characteristics of the left face image and the right face image to obtain target characteristics;
S105, searching a preset database according to the target characteristics to obtain a storage cabinet number, and opening the corresponding storage cabinet according to the storage cabinet number.
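As an illustrative sketch (not part of the claimed embodiment), the retrieval flow of steps S103 to S105 can be outlined as follows. The threshold value, the nearest-feature database lookup, and all names (`depth_variation`, `retrieve_locker`, `feature`, `locker`) are assumptions for demonstration only.

```python
import numpy as np

def depth_variation(depth_map):
    # Peak-to-peak depth over the face region; a printed photo is nearly flat.
    return float(np.ptp(depth_map))

def retrieve_locker(depth_map, feature, database, preset_change=5.0):
    """Liveness gate on depth variation (S103-S104), then nearest-feature
    database lookup for the storage cabinet number (S105)."""
    if depth_variation(depth_map) <= preset_change:
        return None  # flat surface: likely a photo, stop the flow
    best = min(database, key=lambda rec: np.linalg.norm(rec["feature"] - feature))
    return best["locker"]

# Toy demo: a curved (real) face surface versus a flat photo.
yy, xx = np.mgrid[0:50, 0:50]
real_depth = 100 - 0.01 * ((xx - 25) ** 2 + (yy - 25) ** 2)  # nose nearer than cheeks
flat_depth = np.full((50, 50), 100.0)
db = [{"locker": "A-12", "feature": np.array([1.0, 0.0])},
      {"locker": "B-07", "feature": np.array([0.0, 1.0])}]
```

With these toy inputs, the flat "photo" is rejected before any feature matching, while the curved surface passes the gate and resolves to the nearest stored feature's cabinet.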
According to the informationized registering management method for tourist attractions provided by the embodiment of the invention, the face region is obtained by performing face detection on the images acquired by the binocular camera, which reduces the amount of data to be processed and improves the detection speed; the depth change is then obtained from the parallax map determined from the face images, so that whether the recognized object is a real person or a photo can be effectively judged, improving safety; finally, by extracting and matching face depth features, the corresponding cabinet number is accurately found and the unlocking operation is performed, thereby simplifying the deposit-and-retrieval process and ensuring safety.
In one implementation, a left-right parallax relationship can be constructed from the images of the target area acquired by the binocular camera, so that depth information is extracted, the face position is accurately located, and the false-triggering rate is reduced. Face feature extraction is started only after the detected depth change reaches a specific threshold, which saves computing resources: recognition and matching are performed only when the condition is met, improving the response efficiency of the system.
In one implementation, combining face recognition with depth information can improve recognition accuracy and reduce the possibility of false recognition, while feature extraction and database matching further enhance the reliability of user identification, reducing the risk of unauthorized access. The target area is the area in which the binocular camera can capture a face, and visitors can be prompted to stand at a designated position within it.
In one implementation, if the depth change is less than or equal to the preset change, the detected object is relatively flat and has no significant three-dimensional variation; it is possibly not a real person but a photo, and the subsequent detection process is stopped.
In one implementation, only the binocular camera acquires images, which reduces resource consumption, and the user does not need to enter a password, which simplifies the deposit-and-retrieval process. The preset change is determined by a technician, and the data stored in the preset database is the data generated when an object is deposited.
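As a minimal sketch of how the binocular setup yields the depth information mentioned above: for a rectified stereo pair, depth follows Z = f·B/d from the disparity d. The focal length and baseline below are illustrative example values, not parameters from the patent.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px=800.0, baseline_m=0.06, eps=1e-6):
    """Depth Z = f * B / d for a rectified binocular camera pair.

    focal_px (focal length in pixels) and baseline_m (camera separation
    in metres) are assumed example values; eps guards against zero disparity."""
    return focal_px * baseline_m / np.maximum(disparity_px, eps)

# Under these parameters, a point with 48 px disparity sits 1 m from the camera.
depth_m = disparity_to_depth(np.array([48.0, 96.0]))
```

Larger disparity means the point is closer, which is why the nose of a real face produces a different disparity than the cheeks, while a flat photo does not.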
In one embodiment, performing face region detection on the left image and the right image to obtain a first target region and a second target region includes:
Preprocessing an initial face image to obtain a preprocessed image, and performing image enhancement on the preprocessed image to obtain a target image;
Substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is a first target area if the initial face image is a left image, and is a second target area if the initial face image is a right image.
In one implementation, image preprocessing and image enhancement can improve image quality, making blurred, poorly lit, or low-contrast images clear and improving the recognition accuracy of the subsequent face detection model; the effect is especially obvious under complex illumination or low resolution. Delimiting a target detection frame allows the face regions in the left and right images to be accurately recognized and marked, which reduces false or missed detections, facilitates the parallax analysis in the subsequent stereoscopic vision calculation, and ensures the reliability of the depth information.
In one implementation, after the image quality is improved by preprocessing and enhancement, the detection model can identify features more effectively, so the detection speed is improved; this optimization reduces the need for complex models or repeated detection and thus saves resources.
In one implementation, the preprocessing consists of basic operations such as format conversion and normalization, and the image enhancement applies a low-light enhancement network (Enhancement-Net) to the left image and the right image respectively, improving the illumination information and details in the images and generating enhanced left and right images; this reduces the detail loss caused by low light and provides a clearer visual basis for subsequent face region detection.
In one implementation, the preset face detection model may be a face detection model such as RetinaFace or MTCNN.
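A minimal sketch of the preprocessing and enhancement stage described above, assuming min-max normalisation for the preprocessing and substituting plain global histogram equalisation for the low-light enhancement network named in the text (the real network would of course do more):

```python
import numpy as np

def preprocess(img):
    """Basic preprocessing: convert to float and normalise to [0, 1]."""
    img = np.asarray(img, dtype=np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def enhance(img, bins=256):
    """Global histogram equalisation as a simple stand-in for the
    low-light enhancement step: spreads the intensity distribution."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    idx = np.clip((img * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]

# A very dark frame is stretched and equalised before face detection.
enhanced = enhance(preprocess(np.linspace(0.0, 0.2, 100).reshape(10, 10)))
```

The enhanced image stays in [0, 1] and can be fed directly to whichever face detector (RetinaFace, MTCNN, …) is chosen.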
In one embodiment, obtaining the target disparity map from the left face image and the right face image includes:
Performing key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
matching the key points in the first key point set and the second key point set to obtain a key point combination set, and aiming at each key point combination, obtaining a key point combination parallax value through a semi-global block matching algorithm;
And substituting the left face image and the right face image into a real-time stereo matching network to obtain a global parallax image, and correcting the global parallax image according to the combined parallax values of all the key points to obtain a target parallax image.
In one implementation, the key point matching algorithm can accurately find the corresponding feature points in the left and right images, and correcting the depth information with the parallax values of the key point combinations effectively reduces errors caused by factors such as illumination and noise. The semi-global block matching algorithm balances local and global information when computing the key point parallax, reducing matching errors caused by local occlusion, blurring, and similar problems and increasing the accuracy of stereo matching.
In one implementation, using the key point matching result to correct the global disparity map can greatly improve the accuracy of the disparity map without losing overall efficiency, while reducing the dependence on a complex stereo matching network. Correcting the global disparity map with the parallax values of the key point combinations calibrates the parallax produced by the real-time stereo matching network, reduces mismatched regions caused by network errors, and finally yields a more consistent target disparity map.
In one implementation, the key point detection algorithm may be an AdaBoost algorithm based on Haar features, OpenFace, Dlib, SURF, or the like. Matching the key points in the first key point set and the second key point set to obtain a key point combination set proceeds as follows: for each key point, a feature descriptor algorithm such as SIFT or ORB generates a descriptor representing the local information of that point; the most similar key point pairs between the key point set of the left image and that of the right image are then found by a KNN matching algorithm, which computes the Euclidean distance between two descriptors and takes the pair with the smallest distance as a matched key point pair.
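A minimal, hypothetical version of the matching step described above: nearest-neighbour matching on descriptor vectors by Euclidean distance, followed by the per-pair disparity (horizontal shift x_left − x_right). Real descriptors would come from SIFT or ORB; the four-value toy descriptors below are illustrative only.

```python
import numpy as np

def match_keypoints(desc_left, desc_right):
    """Nearest-neighbour descriptor matching by Euclidean distance,
    a minimal stand-in for the KNN matching step."""
    desc_right = np.asarray(desc_right, dtype=float)
    pairs = []
    for i, d in enumerate(np.asarray(desc_left, dtype=float)):
        dists = np.linalg.norm(desc_right - d, axis=1)
        pairs.append((i, int(np.argmin(dists))))
    return pairs

def pair_disparities(pts_left, pts_right, pairs):
    # Disparity of each matched key point pair: horizontal shift x_left - x_right.
    return [pts_left[i][0] - pts_right[j][0] for i, j in pairs]

# Toy descriptors: left key point 0 matches right key point 1, and vice versa.
pairs = match_keypoints([[0.0, 0.0], [1.0, 1.0]],
                        [[1.1, 1.0], [0.05, 0.0]])
disps = pair_disparities([(100, 50), (200, 80)], [(185, 80), (90, 50)], pairs)
```

These per-pair disparities are what the semi-global block matching step refines and the correction step uses against the global disparity map.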
In one implementation, the semi-global block matching algorithm may be SGM, multi-level SGM, adaptive SGM, fast SGM, or the like, and the real-time stereo matching network may be StereoNet, PSMNet, DeepPruner, Stereo-UNet, or the like.
In one implementation, correcting the global disparity map according to the parallax values of all the key point combinations to obtain the target disparity map works as follows: the parallax value of a key point combination gives a first value, the parallax at the corresponding position in the global disparity map gives a second value, and the difference between them is the correction value. The pixel distance between the two key points of the combination is obtained, and within that distance the disparity is updated gradually with a preset step size as the gradient. For example, if the pixel distance between key points A and B is 100 pixels, the preset step size is 20 pixels, and the correction value is 1, then on the basis of the original disparity values, pixels 1 to 20 from A towards B are increased by 0.2, pixels 21 to 40 by 0.4, pixels 41 to 60 by 0.6, pixels 61 to 80 by 0.8, and pixels 81 to 100 by 1.0.
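The stepwise correction in the worked example above (100-pixel distance, step 20, correction value 1) can be reproduced with a short helper; the function name and rounding are illustrative assumptions.

```python
def corrected_offsets(pixel_distance, step, correction):
    """Offset added to the network disparity for each consecutive band of
    `step` pixels between two matched key points: the offset grows linearly
    until it reaches the full correction value at the far key point."""
    n_steps = pixel_distance // step          # number of bands between the key points
    per_step = correction / n_steps           # increment applied per band
    return [round(per_step * (k + 1), 10) for k in range(n_steps)]

# The example from the text: 100 px apart, 20 px step, correction value 1.
offsets = corrected_offsets(100, 20, 1.0)
```

This ramp spreads the key-point correction smoothly over the interval instead of introducing a discontinuity at a single pixel.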
In one embodiment, extracting features from the left face image and the right face image to obtain the target features includes:
for the left face image and the right face image, dividing the left face image and the right face image through a target grid to obtain a left face square block set and a right face square block set;
For each square in the left face square set and the right face square set, extracting features of the square to obtain square features;
Generating a left face histogram according to all the square features in the left face square set, and generating a right face histogram according to all the square features in the right face square set;
And splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
In one implementation, the face image is divided by a grid and features are extracted per block, each block representing local information of the image. Local features can capture image details such as texture and shape and represent the different parts of the face more accurately; generating histograms for the left and right faces and then stitching and fusing them effectively combines the local features into target features that are more comprehensive and more discriminative. Dividing the left and right face images into multiple blocks and extracting features from each block also improves the accuracy of face feature matching: a traditional whole-image feature extraction method may ignore variations in local regions of the image, whereas after division the features can be optimized in a targeted manner, reducing mismatches.
In one implementation, splicing the histograms of the left face and the right face better fuses visual information from the left and right viewing angles. Since the left and right images generally contain different viewing-angle information, feature fusion integrates the advantages of both and improves the comprehensiveness and expressive power of the features: the target features not only describe the facial features of an individual more accurately, but also fuse detail information from the different viewing angles.
In one implementation, target grid segmentation and local feature extraction allow the system to adapt more flexibly to complex environments. In some dynamic scenes, each region of an image may have a different background, illumination or interference factor; extracting features from local square blocks reduces such interference, improves the recognition of objects or faces, and further improves the stability and speed of recognition.
In one implementation, the size of the target grid is set by a technician, typically 36 by 36 pixels per block. Extracting features from a square block to obtain block features specifically means applying an LBP (local binary pattern) operator to each image block to extract the local texture features of each small patch.
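A minimal 8-neighbour LBP operator of the kind referred to here might look as follows; the neighbour ordering and the >= comparison convention are assumptions, since the text does not specify an LBP variant:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code,
    one bit per neighbour whose value is >= the centre pixel."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    # neighbours clockwise starting from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def lbp_histogram(block):
    """Texture feature of one grid square: a normalized 256-bin LBP histogram."""
    codes = lbp_image(block)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(codes.size, 1)
```

On a 36-by-36 block this yields a 34-by-34 code map (the one-pixel border has no full neighbourhood) and a length-256 feature vector per block.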
In one implementation, since the left and right images are captured from two different angles and therefore contain different viewing-angle information, the local texture features should be extracted from the left and right images separately and then combined into a complete feature vector. Specifically, the block features of the left image and the right image are spliced row by row: the first row of the first block feature in the left image and the first row of the corresponding block feature in the right image are arranged in sequence into a longer row, and the target histogram is obtained by splicing each row in turn. Combining the feature information of the two viewing angles in this way improves recognition accuracy, particularly across different viewing angles and postures.
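The row-by-row splicing described above can be sketched as a simple horizontal concatenation; the function name and toy feature matrices are illustrative only:

```python
import numpy as np

def splice_rowwise(left_feat, right_feat):
    """Splice two block-feature matrices row by row: each output row is the
    left row followed by the corresponding right row, giving a longer row."""
    assert left_feat.shape[0] == right_feat.shape[0]
    return np.concatenate([left_feat, right_feat], axis=1)

left_feat = np.arange(12).reshape(3, 4)       # 3 rows of left block features
right_feat = np.arange(12, 24).reshape(3, 4)  # matching right block features
target = splice_rowwise(left_feat, right_feat)
```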
In one implementation, substituting the left face histogram, the right face histogram and the target histogram into the preset feature fusion model to obtain the target features specifically comprises: performing scale feature extraction on the left face histogram, the right face histogram and the target histogram with a first target scale to obtain a first left face scale map, a first right face scale map and a first target face scale map, the three having the same scale; superposing the first target face scale map onto the first left face scale map and onto the first right face scale map respectively to obtain a first left face fusion map and a first right face fusion map; performing scale feature extraction on the first left face fusion map, the first right face fusion map and the target histogram with a second target scale to obtain a second left face scale map, a second right face scale map and a second target face scale map; and repeating the same steps scale by scale until the final scale, where the final left face fusion map, the final right face fusion map and the final target face scale map are averaged to obtain the target features.
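The passage above is only partially recoverable from the source, so the following is a loose, hypothetical sketch of such an iterative scale-extraction-and-superposition model: a moving-average smoothing stands in for "scale feature extraction", addition for "superposition", and a final average for the "scale-average" fusion. None of these concrete choices are confirmed by the source:

```python
import numpy as np

def scale_extract(v, width):
    """Hypothetical 'scale feature extraction': moving-average smoothing
    of a histogram at the given scale (window width)."""
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode='same')

def fuse(left_h, right_h, target_h, scales=(3, 5, 7)):
    """Sketch of the preset feature fusion model: at each scale, extract
    scale maps for left/right/target, superpose the target scale map onto
    the left and right maps, and carry the fused maps to the next scale."""
    l, r, t = left_h, right_h, target_h
    for s in scales:
        ls, rs, ts = scale_extract(l, s), scale_extract(r, s), scale_extract(t, s)
        l, r = ls + ts, rs + ts   # superpose target scale map onto left/right
        t = ts
    return (l + r + t) / 3.0      # final scale-average fusion

left_h = np.linspace(0, 1, 32)
right_h = np.linspace(1, 0, 32)
# Placeholder: spliced target histogram subsampled back to a common scale.
target_h = np.concatenate([left_h, right_h])[::2]
features = fuse(left_h, right_h, target_h)
```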
In one embodiment, the method further comprises:
when receiving an object storage instruction, starting a binocular camera to acquire images in a target area to obtain a first left image and a first right image, and acquiring object information;
performing face region detection on the first left image and the first right image respectively to obtain a first target acquisition region and a second target acquisition region, cutting the first left image according to the first target acquisition region to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition region to obtain a right face acquisition image;
substituting the left face acquisition image and the right face acquisition image into a preset model to obtain target features, and searching the express cabinets according to the article information to obtain an express cabinet number;
binding the express cabinet number and the target features, and storing the bound number and target features in a preset database.
In one implementation, by binding the characteristics of the article to the express cabinet number and storing them in the preset database, tracking and recording of each article can be formed in the system. A manager can easily query the specific storage position of an article, improving the efficiency of article management. The article information is information entered by the user, such as the size and weight of the article.
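The binding-and-storage flow can be sketched with an in-memory dictionary standing in for the preset database; the cosine-similarity matching and the 0.9 threshold are assumptions for illustration, not taken from the source:

```python
import numpy as np

# Hypothetical in-memory stand-in for the preset database: express cabinet
# number bound to the stored face features and the user-entered article info.
database = {}

def store_item(cabinet_no, target_features, article_info):
    """Bind the express cabinet number to the face features at deposit time."""
    database[cabinet_no] = {"features": target_features, "article": article_info}

def find_cabinet(query_features, threshold=0.9):
    """At pick-up, match the query features against every stored record and
    return the best-matching cabinet number above the similarity threshold."""
    best_no, best_sim = None, threshold
    for no, rec in database.items():
        f = rec["features"]
        sim = float(np.dot(f, query_features) /
                    (np.linalg.norm(f) * np.linalg.norm(query_features)))
        if sim > best_sim:
            best_no, best_sim = no, sim
    return best_no

feats = np.array([0.2, 0.7, 0.1])
store_item("A-17", feats, {"size": "30x20x15 cm", "weight": "1.2 kg"})
match = find_cabinet(np.array([0.21, 0.69, 0.1]))
```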
The embodiment of the invention also provides an informationized registering management device for tourist attractions based on the same inventive concept. Referring to fig. 2, fig. 2 is a schematic structural diagram of an informationized register management device for tourist attractions, provided in an embodiment of the present invention, including:
The object taking image acquisition module is used for starting the binocular camera to acquire images in the target area when an object taking instruction is received so as to obtain a left image and a right image;
The face image acquisition module is used for respectively carrying out face region detection on the left image and the right image to obtain a first target region and a second target region, cutting the left image according to the first target region to obtain a left face image, and cutting the right image according to the second target region to obtain a right face image;
The depth change determining module is used for obtaining a target parallax image according to the left face image and the right face image, carrying out depth change processing on the target parallax image to obtain a target depth image, and extracting the depth change of the target depth image;
the target feature determining module is used for extracting features of the left face image and the right face image to obtain target features if the depth change is larger than the preset change;
The storage cabinet number searching module is used for searching a preset database according to the target characteristics to obtain the storage cabinet number, and opening the corresponding storage cabinet according to the storage cabinet number.
According to the informationized registering management device for tourist attractions provided by the embodiment of the invention, face regions are obtained by performing face detection on the images collected by the binocular camera, which reduces the amount of data to be processed and improves detection speed; the depth change obtained from the disparity map determined from the face images identifies whether the recognized object is a real person or a photograph, improving security; and finally, by extracting and matching the face depth features, the corresponding cabinet number is accurately found and the unlocking operation is performed, simplifying the access process while ensuring security.
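The liveness idea above, that a real face shows depth relief while a flat photograph does not, can be sketched using the standard pinhole relation Z = f·B/d between disparity and depth; the focal length, baseline and threshold values here are illustrative assumptions only:

```python
import numpy as np

FOCAL_PX = 800.0     # assumed focal length in pixels
BASELINE_M = 0.06    # assumed binocular baseline in metres

def depth_from_disparity(disparity):
    """Convert a disparity map (pixels) to depth (metres): Z = f * B / d."""
    d = np.where(disparity > 0, disparity, np.nan)  # mask invalid disparities
    return FOCAL_PX * BASELINE_M / d

def depth_change(depth_map):
    """A simple measure of depth variation over the face region."""
    return float(np.nanmax(depth_map) - np.nanmin(depth_map))

def is_live(disparity, preset_change=0.01):
    """A real face has relief (nose closer than ears); a photo is nearly flat."""
    return depth_change(depth_from_disparity(disparity)) > preset_change

flat_photo = np.full((10, 10), 60.0)   # uniform disparity -> flat depth
real_face = np.full((10, 10), 60.0)
real_face[4:6, 4:6] = 66.0             # nose region closer to the camera
```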
In one embodiment, the face image acquisition module includes:
the image enhancement module is used for preprocessing the initial face image to obtain a preprocessed image and enhancing the preprocessed image to obtain a target image;
The target detection frame determining module is used for substituting the target image into a preset face detection model to obtain a target detection frame, wherein the target detection frame is a first target area if the initial face image is a left image, and the target detection frame is a second target area if the initial face image is a right image.
In one embodiment, the depth change determination module includes:
The key point identification module is used for carrying out key point identification on the left face image and the right face image through a key point detection algorithm to obtain a first key point set and a second key point set;
the key point matching module is used for matching key points in the first key point set and the second key point set to obtain a key point combination set, and aiming at each key point combination, the key point combination parallax value is obtained through a semi-global block matching algorithm;
and the target parallax map determining module is used for substituting the left face image and the right face image into the real-time stereo matching network to obtain a global parallax map, and correcting the global parallax map according to the combined parallax values of all the key points to obtain the target parallax map.
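The semi-global block matching algorithm named above is involved to implement in full; as a hedged stand-in, a minimal sum-of-absolute-differences block matcher illustrates the underlying idea of matching patches along the same image row to recover disparity:

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Minimal SAD block matcher: for each left-image pixel, slide a block
    along the same row of the right image and keep the horizontal shift with
    the smallest sum of absolute differences. A simplified stand-in for the
    semi-global block matching (SGBM) algorithm named in the text."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    L = left.astype(np.float32)
    R = right.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = L[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                cand = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted 4 pixels leftwards,
# so the true disparity is 4 wherever it can be measured.
rng = np.random.default_rng(1)
left = rng.random((20, 40)).astype(np.float32)
right = np.roll(left, -4, axis=1)
disp = sad_disparity(left, right, block=5, max_disp=8)
```

Real SGBM additionally aggregates matching costs along multiple scan directions with smoothness penalties, which this sketch omits.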
In one embodiment, the target feature determination module comprises:
The image segmentation module is used for segmenting the left face image and the right face image through the target grid aiming at the left face image and the right face image to obtain a left face square block set and a right face square block set;
The block feature extraction module is used for extracting features of each block in the left face block set and the right face block set to obtain block features;
the histogram generation module is used for generating a left face histogram according to all the square features in the left face square set and generating a right face histogram according to all the square features in the right face square set;
the histogram feature fusion module is used for splicing the left face histogram and the right face histogram to obtain a target histogram, and substituting the left face histogram, the right face histogram and the target histogram into a preset feature fusion model to obtain target features.
In one embodiment, the method further comprises:
The object storage image acquisition module is used for starting the binocular camera to acquire images in the target area when an object storage instruction is received to obtain a first left image and a first right image, and acquiring object information;
The second face image acquisition module is used for respectively carrying out face region detection on the first left image and the first right image to obtain a first target acquisition region and a second target acquisition region, cutting the first left image according to the first target acquisition region to obtain a left face acquisition image, and cutting the first right image according to the second target acquisition region to obtain a right face acquisition image;
the express cabinet number generation module is used for substituting the left face acquisition image and the right face acquisition image into a preset model to obtain target characteristics, and searching the express cabinet according to the article information to obtain the express cabinet number;
and the data storage module is used for binding the number of the express cabinet and the target characteristic and then storing the number of the express cabinet and the target characteristic into a preset database.
The foregoing describes one embodiment of the present invention in detail, but the disclosure is only a preferred embodiment and should not be construed as limiting the scope of the invention. All equivalent changes and modifications made within the scope of the claims of the present invention shall fall within the protection scope of the present invention.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411648676.1A CN119580392B (en) | 2024-11-19 | 2024-11-19 | Informationized registering management device and method for tourist attraction |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119580392A CN119580392A (en) | 2025-03-07 |
| CN119580392B true CN119580392B (en) | 2025-07-18 |
Family
ID=94800472
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411648676.1A Active CN119580392B (en) | 2024-11-19 | 2024-11-19 | Informationized registering management device and method for tourist attraction |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119580392B (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106910222A (en) * | 2017-02-15 | 2017-06-30 | 中国科学院半导体研究所 | Face three-dimensional rebuilding method based on binocular stereo vision |
| CN207302222U (en) * | 2017-05-25 | 2018-05-01 | 新石器龙码(北京)科技有限公司 | A kind of cabinet based on binocular stereo vision identification face |
| CN114663951A (en) * | 2022-03-28 | 2022-06-24 | 深圳市赛为智能股份有限公司 | Low illumination face detection method, device, computer equipment and storage medium |
| CN115909446A (en) * | 2022-11-14 | 2023-04-04 | 华南理工大学 | Method, device and storage medium for binocular face liveness discrimination |
| CN116363718A (en) * | 2021-12-24 | 2023-06-30 | 北京达佳互联信息技术有限公司 | Face recognition method and device, electronic equipment and storage medium |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8121400B2 (en) * | 2009-09-24 | 2012-02-21 | Huper Laboratories Co., Ltd. | Method of comparing similarity of 3D visual objects |
| KR101547281B1 (en) * | 2013-04-09 | 2015-08-26 | (주)쉬프트플러스 | Method and apparatus for generating multiview 3d image signal |
| CN106897675B (en) * | 2017-01-24 | 2021-08-17 | 上海交通大学 | A face detection method based on the combination of binocular visual depth feature and apparent feature |
| CN108764091B (en) * | 2018-05-18 | 2020-11-17 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, electronic device, and storage medium |
| CN109886197A (en) * | 2019-02-21 | 2019-06-14 | 北京超维度计算科技有限公司 | A kind of recognition of face binocular three-dimensional camera |
| CN213634692U (en) * | 2020-09-18 | 2021-07-06 | 苏州睿知慧识智能科技有限公司 | Face recognition intelligent cabinet |
| CN112052831B (en) * | 2020-09-25 | 2023-08-08 | 北京百度网讯科技有限公司 | Method, device and computer storage medium for face detection |
| CN112347904B (en) * | 2020-11-04 | 2023-08-01 | 杭州锐颖科技有限公司 | Living body detection method, device and medium based on binocular depth and picture structure |
| CN113299014A (en) * | 2021-05-21 | 2021-08-24 | 中国工商银行股份有限公司 | Intelligent cabinet |
| CN116343291A (en) * | 2023-02-27 | 2023-06-27 | 河南中光学集团有限公司 | Intelligent aiming method for fusing 2D and 3D faces |
| CN116524606A (en) * | 2023-03-27 | 2023-08-01 | 深圳先进技术研究院 | Face living body recognition method, device, electronic equipment and storage medium |
- 2024-11-19: application CN202411648676.1A granted as CN119580392B (active)
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||