WO1997005566A1 - Systeme de verification et d'identification d'objet (System for Object Verification and Identification) - Google Patents
Systeme de verification et d'identification d'objet
- Publication number
- WO1997005566A1 WO1997005566A1 PCT/US1996/011996 US9611996W WO9705566A1 WO 1997005566 A1 WO1997005566 A1 WO 1997005566A1 US 9611996 W US9611996 W US 9611996W WO 9705566 A1 WO9705566 A1 WO 9705566A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- person
- data
- verification
- data sets
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/20—Individual registration on entry or exit involving the use of a pass
- G07C9/22—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
- G07C9/25—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
- G07C9/253—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition visually
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Definitions
- This invention relates generally to computer and camera systems for information storage and retrieval: obtaining and storing certain data relating to specific images and providing rapid access to that data for purposes of object targeting, enrollment, identification, verification, and classification.
- image recognition provides a system that captures data relating to an image area of an object or person and then compares that captured image data to information stored in a computer memory.
- An example of a system exhibiting these drawbacks is Kodak's card-based facial security system.
- the Kodak system classifies fifty areas of the card holder's face, identifying each area with a 2-byte code. That data is then stored on the stripe of a magnetic card. The card user must then have his/her face compared with the stored facial code in order for a match to be made.
- a drawback to this technique is that it requires the local computer to recognize multiple areas of the face, then classify each of these areas, and then compare the instant classification to the code stored on the card. Hence, a significant processing burden is required at the recognition station.
- the Kodak system is relatively inflexible in the sense that it is limited only to those things that have been classified. Thus, for the Kodak system to operate on other objects, a whole new classification scheme needs to be developed. The classification effort is a significant, labor-intensive process.
- portable media such as magnetic stripe cards, magnetic discs, printed bar codes, semi-conductor devices such as smart cards, or in data bases.
- the comparison process involves the verification of data present on the ID card, with the data sets generated from the video or other image of the object that has been registered through an input device such as an electronic camera.
- the comparison process utilizes a neural network that has been trained so as to recognize or identify a particular data set.
- the training of the neural network is based on a process of polling the various attributes that are obtained at the identification station by the computer, against the component attribute data sets that are present on the ID card.
- the polling assumes that certain distinctive features, if in agreement with the data sets on the ID card, can override other less distinctive attributes.
- the polling process increases the required precision of the comparisons.
- a program is provided for reducing the characteristics of an object image, for example, a human face, to a set of characteristic numbers; later recalling that set of characteristics for comparison with an input from an external source.
- Keys to this set of numbers are encoded as indices and are stored on local source objects, such as 3-3/8" x 2" computer input cards.
- the indices, having been electronically posted to a central computer program, point to a second set of data retrieved from the computer program. That set is then compared with a second set of similar stored data retrieved from a local source, such as the card.
- FIG. 1 is a block diagram of the enrollment station forming the present invention
- FIG. 2 is a block diagram of the verification station forming the present invention
- FIGS. 3A and 3B are flow-chart diagrams showing the functions of the enrollment process;
- FIG. 4 is a flow chart diagram illustrating the one-on-one preprocessing steps of the enrollment process shown in FIGS. 3A-3B;
- FIG. 5 is a flow chart diagram of the binarization routine of the enrollment process shown in Figs. 3A-3B;
- FIG. 6 is a flow chart diagram of a first embodiment of the targeting process for the enrollment process shown in Figs. 3A-3B;
- FIG. 7 is a flow chart diagram of the UD and CP coordinate estimation functions of the enrollment process shown in FIGS. 3A-3B;
- FIG. 8 is a flow chart diagram illustrating the area of interest defining function of the enrollment process shown in FIGS. 3A-3B;
- FIG. 9 is a flow chart diagram illustrating the normalization procedure of the enrollment process shown in Figs. 3A-3B;
- FIG. 10 is a flow chart diagram showing the transform step of the enrollment process shown in Figs. 3A-3B;
- FIG. 11 is a flow chart of the second transformation process for the enrollment process of Figs. 3A-3B;
- FIG. 12 is a flow chart of the output coding function for the enrollment process of Figs. 3A-3B;
- FIG. 13 is a flow chart illustrating the encrypt function for determining useful parameter vectors for the enrollment process of Figs. 3A-3B;
- FIGS. 14A-14B are flow charts of the process for image verification of the present invention.
- FIG. 15 is a flow chart of the image verification pre-processing function
- FIG. 16 is a flow chart of the image verification setup control function
- FIG. 17 is a flow chart of the image verification data decryption function
- FIG. 18 is a flow chart of the image verification parameter value comparison function
- FIG. 19 is a flow chart of the image verification identity decision function
- FIG. 20 is a diagram showing the dimensional breakdown of the face
- FIG. 21 is a top view of the array of infra-red light emitting diodes used to light a mini-camera apparatus;
- FIG. 22 is a perspective transparent diagram of the mini-camera and infra-red lighting device and components thereof of the invention
- FIG. 23 is a circuit schematic diagram for the light array of FIG. 21;
- FIG. 24 is a circuit schematic diagram for the mini-camera of FIG. 21;
- FIGS. 25 (a) -(c) are respectively front, side and perspective views of a first embodiment of a housing for the mini-camera arrangement shown in FIG. 22;
- FIGS. 26 (a) -(c) are respectively front, side and perspective views of a second embodiment of a housing for the mini-camera arrangement shown in FIG. 22;
- FIGS. 27 (a) -(c) are respectively front, side and perspective views of a third embodiment of the housing for the minicamera arrangement of FIG. 22; and
- FIG. 28 is a flow chart illustrating a second embodiment of the targeting process for the present invention.
- the first embodiment of the present system is composed of two processes and two hardware systems.
- the first process of this first embodiment is the enrollment process and is shown in Figs. 3A-13.
- the purpose of the enrollment process is to code the image of the person or object and to reduce that image to a portable form, and format, such as a card with a magnetic strip, or in a database.
- the second process of the first embodiment is shown in Figs. 14A-15.
- This process is the verification process.
- the verification process performs the tasks of taking a picture or image of the person or object, and comparing the captured image to the coded image obtained from the enrollment process. If the image of the person or object obtained during verification and the coded information obtained from the enrollment process match, identity is verified by the verification process.
- the enrollment process and the verification process have elements in common, which will be described further below.
- the two hardware systems used in the present invention are the enrollment station, shown in Fig. 1, and the verification station, shown in Fig. 2.
- Fig. 1 is a block diagram of the enrollment station 100.
- the object 10 represents the object that will be coded in the enrollment process for later verification during the verification process.
- the object 10 is a face of a person.
- the object under consideration may constitute a machine part under inspection, or a house, or a car key, or an automobile, or a hand.
- the label on the object or its container carries data related to features of the object. These data permit an automatic verification that the object is correctly labeled, and therefore will be properly stored, packaged, and shipped.
- Object types can vary, as long as an identifiable characteristic can be extracted, and is stored in the enrollment process.
- one or more video cameras 20, or other cameras as set forth in this application, equipped with lighting devices 30, are used to record an image of the object 10.
- the video camera or cameras 20 and lighting device 30 are ordinary devices that are readily available.
- Panasonic camera, CCD Model No. GP-MF602, or similar devices are used with either flash or continuous light sources.
- the lighting devices can in this first embodiment comprise a ring lamp such as the MICROFLASH 5000, manufactured by VIVITAR Corp., located at Chatsworth, California, or a standard photography lighting fixture.
- Other camera devices and lighting devices can be substituted.
- a flash device can be employed with a Panasonic camera.
- a flash unit is the SUNPAK Softlite 1400M, manufactured by TOCAD Company, Tokyo, Japan.
- a continuous incandescent light source can be employed. This type of lighting device is particularly useful in conjunction with object identification/verification for quality control applications.
- an LED lighting device can be employed in conjunction with a mini infra-red camera.
- the output of the video camera 20 is connected via a port to computer 40.
- the computer 40 can be a personal computer that includes a digitizer card 42 in an expansion port.
- the computer also includes other standard components such as a CPU 44, a random access memory 46, permanent storage in the form of a magnetic disk storage 48, a monitor 50, and a keyboard 52.
- a personal computer having an Intel ® Pentium ® microprocessor with a minimum processor clock speed of 90 MHz is used in this example.
- the computer has in this example 32 MBytes of Random Access memory and at least 1 Gigabyte of static storage.
- a conventional hard-drive can be used, although other static storage units (e.g., writable optical drives, EPROM) are also acceptable.
- a Microsoft Windows® operating system is used. It should be noted that the present invention is designed so that any computer having an adequate processor clock speed (preferably at least 60 MHz) and a sufficient RAM memory size (i.e., at least 16 megabytes of RAM) can be used.
- the digitizer 42 used is a frame grabber card for transforming camera signals from analog to digital form. This card is inserted in the computer 40.
- the card used is a CX100 frame grabber board, manufactured by Imagenation Corporation, located in Beaverton, Oregon.
- any digitizer device for video input including direct input from digital video cameras can be used with the invention.
- the output device 60 receives the data from the computer 40, which is a coded representation of the object 10.
- Device 60 transforms the coded data into an appropriate signal and code for placement on a central storage 72 or portable memory device 70.
- the output device 60 in the preferred embodiment is a card reader/writer, or a printer. Examples of output devices include magnetic or optical reader/writers or smart card reader/writers or barcode printers, etc. In the preferred embodiment, that memory device is a card 70 with a magnetic stripe.
- the output device 60 and card 70 are well known in the art. However, other portable memory devices such as optical cards, S-RAM storage devices, or carriers containing EPROM or UVPROM memories, can also be used. In addition, known barcoding schemes can be employed to bar-code the information, where appropriate.
- Fig. 2 is a block diagram of the verification station hardware 200.
- the data of an enrolled person or object 210 appropriately lighted by lighting 30 will be compared to the coded representation of the object on the portable memory 70.
- the object 210 represents the same object as the enrolled object 10 in Fig. 1.
- the purpose of the verification station 200 is to output a signal to an appropriate security system via external control 230.
- the external control 230 can be an electronic access device, such as an electronic door lock, or a motor-driven gate, or any other mechanism.
- the components in the verification station 200 are the same as the components of the enrollment station 100 with the exception of the input device 220 and the external control unit 230.
- the card 70 is inserted in the input device 220, such as a magnetic or optical card reader or a barcode reader, while the object 210 has its image recorded and processed by the computer 40. Image recordation and processing are done in an identical way as in the enrollment station discussed above. A program in the computer 40 for the verification station compares the image data of the object 210 with the coded image data on the card 70. The verification signal 230 is then outputted, indicating either a match or a failure to match.
- Figs. 3A and 3B are overview flow chart diagrams showing the process used by the enrollment station 100 in order to encode enrollment data onto the portable storage medium 70 or central database 72.
- Cameras 20 provide input for the computer 40 which executes the preprocessing function 302.
- This preprocessing function 302 is described in detail in Fig. 4.
- the preprocessing function 302 is performed by the computer 40 in combination with a frame grabber digitizer 42.
- the frame grabber digitizer 42 first transforms the analog signal at step 3002, then filters the digitized data at step 3004, and enhances the image at step 3006.
- the filtering step 3004 filters out image noise using conventionally known techniques.
- the enhancement step 3006 enhances image contrast according to the lighting conditions, using known techniques.
- the output of these functions is the complete image 305 which is composed of a standardized noise-free, digital image matrix of 512 x 480 pixels.
- the complete image 305 is used as an input to various other subroutines in the enrollment process which will be described below.
- step 310 receives the complete image 305 as the input matrix.
- at step 3102, a center image is taken from the complete image.
- the coordinates of the upper left hand corner of the center matrix are defined as (128, 120) and the coordinates of the bottom right hand corner of the center matrix are defined as (384, 360).
- This central image is then binarized. This process results in an image where each pixel is either black or white, depending on whether a pixel exceeds or falls below a preset threshold.
- the coordinates are chosen for the preferred embodiment to focus on the center of the image. However, if the object does not have distinguishing features in its central area, different coordinates can be used.
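- for illustration, a minimal sketch of this crop-and-binarize step in Python (numpy), assuming a row-major image so that the (x, y) corner coordinates map to [y, x] array indices; the threshold value of 128 is a placeholder, since the text only says the threshold is preset:

```python
import numpy as np

def binarize_center(complete_image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Crop the center matrix of the 512 x 480 complete image (corners
    (128, 120) and (384, 360)) and binarize against a preset threshold."""
    center = complete_image[120:360, 128:384]      # rows = y, cols = x
    return (center > threshold).astype(np.uint8)   # 1 = white, 0 = black
```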
- This output image is then made available to the targeting procedure 320 shown in Fig. 3A of the enrollment process.
- the targeting procedure 320 is shown in more detail in Fig. 6.
- the purpose of the targeting procedure is to find a distinguishing feature in the object in order to detect the presence of the object and determine the location of the object in the image matrix.
- the distinguishing features looked at are the two irises of the eyes.
- the input to the targeting function 320 is the binarized image 3104 which is then processed by the labeling function 3202.
- the labeling function 3202 locates and labels all areas that exhibit characteristics similar to those of irises.
- the threshold set for the binarization process 3102 is set to filter out gray scale levels that are not relevant.
- the gray scale color typically associated with irises can be used as the indicator for the threshold.
- the output of the labeling process 3202 comprises the object labels 3204.
- each labeled object produced at step 3204 has the XY coordinates calculated for placement in the complete image matrix 305. This provides a geometric center of each object that was labeled in the previous step.
- at step 3206, the irises are located and are distinguished by their contrast with the surrounding area.
- other contrasting areas may also be labeled. These contrasting areas are, for example in this exemplified application, nostrils or lips.
- Step 3208 involves looking at the XY coordinates of a pair of objects and then determining whether their absolute and relative locations are valid.
- the validation step 3208 assumes that labeled objects, such as the irises, are appropriately positioned on a face. For example, the eyes cannot be on top of each other and must fall within acceptable distances from each other.
- the validate coordinate step 3208 function determines those pairs of labeled objects that can possibly be irises.
- the calculations for iris targeting consist of comparing the XY coordinates for each iris to determine if they are within a preset distance apart and on approximately the same horizontal axis.
- the difference in the X coordinates is measured and compared to a prestored value to make sure that the irises are located at certain specific locations.
- the coordinates Y1 and Y2 represent the horizontal coordinates
- X1 and X2 represent the vertical coordinates.
- Y2 and X2 in the preferred embodiment represent the left iris coordinates
- Y1 and X1 the right iris.
- a first calculation determines if Y2 is greater than Y1.
- a second calculation requires that the result of Y2 minus Y1 be greater than a value of 40 pixels.
- the third calculation determines the absolute value of X1 minus X2. In the preferred embodiment, that value should be less than 16 pixels. If all three conditions are met, then at step 3208 the object's pair of irises is confirmed, and processing passes to step 3216.
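- a minimal sketch of these three tests in Python, following the patent's convention that Y is the horizontal coordinate, X the vertical one, (X2, Y2) the left iris, and (X1, Y1) the right:

```python
def valid_iris_pair(x1: int, y1: int, x2: int, y2: int) -> bool:
    """Step 3208 geometric screen for a candidate iris pair:
    (X1, Y1) = right iris, (X2, Y2) = left iris; Y horizontal, X vertical."""
    if not (y2 > y1):          # first calculation: Y2 greater than Y1
        return False
    if not (y2 - y1 > 40):     # second: separation greater than 40 pixels
        return False
    return abs(x1 - x2) < 16   # third: approximately same horizontal axis
```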
- an output message is sent at step 3212 to monitor 50 (Fig. 1) stating that the process has been unable to target the eyes.
- a new image is acquired again and reprocessed, beginning at step 302.
- the next step 3216 is to validate the object. This step compares the candidate spots with the eye template to determine whether the cross correlation coefficient is high enough. If so, it confirms that the system successfully targeted the eyes.
- one input to the validate object step 3216 is determined at step 315.
- This input is an average eye template value, which is an average of the iris position on the face across a wide population.
- the other input, determined at step 305, which was discussed previously, is the complete image.
- the complete image is a reference standardized noise-free image matrix of 512 x 480 pixels, 8-bit gray scale.
- the validate object step 3216 performs a gray scale correlation using the complete image 305 and the average eye template 315 and the valid object XY coordinates. This complete image is compared to the average eye template at the valid XY coordinates. If the maximum correlation is above a preset threshold, the object is identified as an eye.
- the correlation coefficient r of two areas {a_ij} and {b_ij} is calculated as:
- r = Σ_ij (a_ij − ā)(b_ij − b̄) / √( Σ_ij (a_ij − ā)² · Σ_ij (b_ij − b̄)² )
- where a_ij and b_ij are the pixels of the two areas and ā, b̄ are their mean values.
- the threshold of correlation is 0.9.
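- a sketch of this correlation test in Python (numpy); anchoring the template at the candidate coordinate's upper-left corner is an assumption, since the exact alignment of template and image is not spelled out:

```python
import numpy as np

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized gray-scale correlation coefficient of two equal-sized areas."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def validate_object(image: np.ndarray, template: np.ndarray,
                    row: int, col: int, threshold: float = 0.9) -> bool:
    """Step 3216 sketch: compare the average eye template to the complete
    image at a valid coordinate; 0.9 is the threshold given in the text."""
    h, w = template.shape
    patch = image[row:row + h, col:col + w]
    return patch.shape == template.shape and correlation(patch, template) > threshold
```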
- the outputs of this comparison are two "valid" iris values with the associated XY coordinates in the complete image matrix 305.
- the outputted values are provided at 3218.
- the system retrieves the calculated unit distance/center point by initiating the process set forth at step 325.
- a detailed flow chart of this process is shown in Fig. 7.
- the calculate unit distance and center point routine 325 establishes a unit distance for the center point in the image based on the coordinates of the iris provided from the targeting step 320.
- the unit distance (UD) equals ((X1 − X2)² + (Y1 − Y2)²)^(1/2), i.e., the Euclidean distance between the two iris centers.
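- in code form (Python); taking the center point (CX, CY) as the midpoint between the two iris centers is an assumption consistent with, but not stated in, the text:

```python
import math

def unit_distance_center(x1: float, y1: float, x2: float, y2: float):
    """UD is the Euclidean distance between the iris centers; the center
    point is assumed to be their midpoint."""
    ud = math.hypot(x1 - x2, y1 - y2)
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return ud, cx, cy
```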
- the next step of the enrollment process shown in Fig. 3A is to define an area of interest at step 330.
- the area of interest procedure 330 is shown in detail in the flow chart diagram of Figure 8.
- the function of step 3301 is to define the areas of interest on the object in relation to the unit distance (UD) and center point values (CX and CY).
- the areas of interest are predetermined depending on the object to be identified. In the preferred embodiment, eight areas of interest have been selected. These areas of interest are a one-dimensional horizontal bar on the forehead, a one dimensional vertical bar over the center of the face, a two-dimensional right and left eye section, a two-dimensional right and left eyebrow section, and a two-dimensional right and left cheek section.
- the areas of interest for a face in the preferred embodiment are dissected into two one-dimensional areas and six two-dimensional areas of interest (see Fig. 20).
- Step 335 resizes the area of interest to a standard pixel size.
- the standard pixel size for the one-dimensional pixel area of interest is 8 x 64 pixels.
- for the two-dimensional areas of interest, the standard pixel size is 64 x 64 pixels.
- the purpose of this normalization procedure step is to standardize the input to the transform procedures.
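- a sketch in Python (numpy) of cutting one area of interest relative to the center point and unit distance and resizing it to the standard size; the per-area offsets and sizes in unit distances are hypothetical, since the patent does not publish the exact multiples:

```python
import numpy as np

def extract_area(image: np.ndarray, center_row: float, center_col: float,
                 ud: float, d_row: float, d_col: float,
                 h: float, w: float) -> np.ndarray:
    """Cut a rectangle whose offset (d_row, d_col) and size (h, w) are
    expressed in unit distances from the center point (hypothetical values)."""
    r0, c0 = int(center_row + d_row * ud), int(center_col + d_col * ud)
    return image[r0:r0 + max(1, int(h * ud)), c0:c0 + max(1, int(w * ud))]

def resize_area(area: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize to the standard size: 8 x 64 for the
    one-dimensional bars, 64 x 64 for the two-dimensional sections."""
    rows = np.arange(out_h) * area.shape[0] // out_h
    cols = np.arange(out_w) * area.shape[1] // out_w
    return area[rows][:, cols]
```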
- Step 340 shown in Fig. 3A then performs several transforms, each of which is applicable to particular areas of interest.
- One of these transform processes is step 342.
- step 342 applies transforms to the one-dimensional pixel arrays (each 8 x 64 pixels) outputted from the resized areas of interest step.
- eight fast Fourier transforms (FFTs) are performed at step 3420 on each one-dimensional pixel array.
- the results of these transforms are averaged at step 3422 into a 1 x 64 vector array representing the spatial distribution of that area of interest. Processing then returns to Fig. 3A.
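- a sketch of step 342 in Python (numpy); reading "eight FFTs" as one FFT per row of the 8 x 64 bar, and averaging FFT magnitudes, are both assumptions about details the text leaves open:

```python
import numpy as np

def one_d_transform(bar: np.ndarray) -> np.ndarray:
    """Steps 3420-3422 sketch: FFT each of the 8 rows of an 8 x 64 bar,
    then average the magnitudes into a single 1 x 64 vector describing
    the spatial distribution of the area of interest."""
    spectra = np.abs(np.fft.fft(bar.astype(float), axis=1))  # eight FFTs
    return spectra.mean(axis=0)                              # 1 x 64 vector
```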
- the two-dimensional areas of interest are compressed using a discrete cosine transform (DCT).
- each 64 x 64 pixel array is divided into 64 separate pixel arrays of 8 x 8 pixels at step 3440. Then each 8 x 8 pixel array is compressed using the DCT at step 3442. The output of the DCT for each 8 x 8 pixel array is a transformed array with the most significant cell in the upper left hand corner. Using all the 8 x 8 transformed arrays, ten 1 x 64 vector arrays of the most significant cells are then created at step 3444. Other techniques can be employed, such as edge detection, Kohonen's and/or geometrical analysis. Step 346 in Fig. 3A depicts these other alternative transforms which can be used to compress and analyze identified areas of interest.
- the most significant cells of each of the 64 transformed arrays comprise the first 1 x 64 vector array
- the second most significant cells comprise the second 1 x 64 array, and so on
- each 64 x 64 pixel area of interest is transformed into ten 1 x 64 vector arrays of the most significant transformed cells.
- These arrays are then sent to the coding routine 350.
- each layer can be binarized, so that if a cell's coefficient is greater than zero, then the value for that cell is equal to one. If that cell's value is less than zero, then its binarized value is equal to zero. As a consequence, relatively few bytes for multiple layers are necessary.
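- the block-DCT stage and the sign binarization might look as follows in Python (scipy); reading "most significant" in zig-zag order from the upper-left corner of each transformed block is an assumption:

```python
import numpy as np
from scipy.fft import dctn

# Assumed zig-zag-style significance order for an 8 x 8 DCT block.
ORDER = sorted(((i, j) for i in range(8) for j in range(8)),
               key=lambda p: (p[0] + p[1], p[0]))

def dct_vectors(area: np.ndarray, layers: int = 10) -> np.ndarray:
    """Steps 3440-3444 sketch: split a 64 x 64 area into 64 blocks of
    8 x 8 pixels, DCT each block, and gather the n-th most significant
    coefficient of every block into the n-th 1 x 64 vector."""
    out = np.empty((layers, 64))
    for b in range(64):
        bi, bj = divmod(b, 8)
        block = area[bi * 8:(bi + 1) * 8, bj * 8:(bj + 1) * 8].astype(float)
        coeffs = dctn(block, norm='ortho')
        for n in range(layers):
            out[n, b] = coeffs[ORDER[n]]
    return out

def binarize_layer(vector: np.ndarray) -> np.ndarray:
    """Sign binarization described above: coefficient > 0 -> 1, else 0."""
    return (vector > 0).astype(np.uint8)
```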
- Fig. 12 sets forth routine 350 in more detail.
- input to the coding routine are the sixty-two 1 x 64 vector arrays produced by the transform routine at step 340 (Fig. 3A).
- the other input 355 is the eigenspace.
- eigenspaces are well-known in the art as a method for relating the characteristics of an individual observation to a sample of the general population. See, for example, Kirby, M. and Sirovich, L., Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces, IEEE Trans. Patt. Anal. Machine Intell., Vol. 12, pp. 103-108, 1990; Turk, M. and Pentland, A., Eigenfaces for Recognition, Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991.
- the first coding step 3502 calculates residuals of the vectors.
- the residuals are the differences between the sixty-two vectors and the mean vectors estimated for a general population. Next, these residuals are projected into their sixty-two separate eigenspaces, one per parameter. The result of this process provides the two most significant coordinates, per parameter, in their respective eigenspaces. In total, 124 coordinates are calculated.
- Process step 3504 is repeated several times to ensure a statistically appropriate sampling of the enrollment images; the mean and standard deviation of the 124 parameter coordinates generated at step 3504 are then calculated.
- Step 3508 evaluates the coordinates with the smallest standard deviation and the highest coefficient relative to the average of the population. Based on those criteria, the coordinates and their respective weights are then passed to the encryption process 370.
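- a sketch of the residual-projection coding (steps 3502-3504) in Python (numpy); the mean vectors and eigenbases are assumed to have been estimated offline from a sample population, per the eigenspace references above:

```python
import numpy as np

def code_parameters(vectors, means, bases) -> np.ndarray:
    """For each of the 62 parameter vectors, subtract the population mean
    and project the residual into that parameter's own eigenspace, keeping
    the two most significant coordinates: 62 x 2 = 124 coordinates."""
    coords = []
    for v, m, basis in zip(vectors, means, bases):   # basis: 64 x k eigenvectors
        residual = np.asarray(v, dtype=float) - np.asarray(m, dtype=float)
        coords.extend((basis.T @ residual)[:2])
    return np.asarray(coords)                        # length 124
```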
- the encryption routine 370 of Fig. 3B is shown in detail in the flow chart of Fig. 13.
- Such a routine is well known in the art.
- the encryption algorithm shown at step 3702 determines usable parameters according to encryption criteria 3704 which are related to the mean and the standard deviation of the parameter coordinates.
- the result is the encryption key and verification data which are written at step 3706 onto the portable storage 70.
- a code or any other well known technique in the art of recording information can be used. Therefore, the card 70 contains the coded information pertaining to the object that was enrolled. The enrollment process is now complete.
- Fig. 14A & Fig. 14B show an overview of the verification process using the verification station hardware shown in Fig. 2. Most of the procedures in the verification process are similar to the procedures previously discussed regarding the enrollment process. Thus, a detailed explanation is reserved for those processes that differ. A detailed description of the verification steps is set forth in Figs. 15-19.
- a prerequisite to the verification process 400 is for the enrollment process to be repeated, up to a certain point.
- the person that needs to be verified would go through step 302 (preprocessing) through step 350.
- the output of step 350 in the verification process provides parameter values corresponding to the images of the person or object to be verified.
- card 70, which contains the data from the enrollment process, is inserted into a reader device which then decrypts the magnetic data, yielding process control data (Fig. 16, step 410) and parameter values that correspond to each area of interest (Fig. 17, process 420).
- the process control data instructs the machine on how to accomplish verification.
- the parameter values determine what is to be verified.
- the parameter values are compared to the output of the coding step 350 in the verification process (Fig. 18, process 430).
- Statistical analysis, as is generally known in the art, is used to accomplish verification. Other methods, however, such as rule-based identification decision processes or fuzzy logic systems, may be used in addition to the straightforward statistical analyses. Furthermore, the degree of correlation between the two values can be varied depending on the degree of sophistication of the verification technique desired. If at the decision making step (Fig. 19, process 440) the parameters of the card match the parameters of the photographic image, verification has been achieved and an output is made to the external control unit process 450.
- One verification methodology, for example, can rely on assessing the Hamming distance between the enrolled image and the image to be verified. In this instance, the image vector stored in the card, or other non-volatile storage media, is lined up bit-by-bit with the generated image vector.
- the Hamming distance is then HD = n / N
- where n is the number of bits that disagree between the two vectors and N is the total length of the vector.
- the HD value can be used as a threshold from which system sensitivity can be varied. If the accept value is set at, for example, 0.23 (HD), other sensitivities, for example for retest or for reject, can also be set. If, for example, accept is 0.23 or less, retest is 0.24 - 0.74, and reject is 0.75 or greater, then it is possible that over time the retests will migrate in either direction (i.e., accept, reject).
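- a sketch of such a decision rule in Python, using the example bands quoted above (accept at 0.23 or less, retest 0.24-0.74, reject 0.75 or greater):

```python
import numpy as np

def hamming_decision(stored_bits: np.ndarray, live_bits: np.ndarray):
    """Line up the stored image vector bit-by-bit with the freshly
    generated vector and classify on the fraction of disagreeing bits."""
    hd = np.count_nonzero(stored_bits != live_bits) / stored_bits.size
    if hd <= 0.23:
        return "accept", hd
    if hd < 0.75:
        return "retest", hd
    return "reject", hd
```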
- Figs. 21-27 illustrate a micro-camera assembly and LED lighting apparatus which provide numerous operational advantages both to the various embodiments of the invention, as well as to any other known image enrollment/recognition systems.
- Fig. 21 illustrates a front-view of an array of light emitting diodes ("LEDs") 2102 located along the same plane of a plate 2104.
- the arrangement of the LED's has a specific size and intensity to optimize lighting of the target and capture of the image.
- a configuration is shown for maximizing iris capture.
- the LED's are designed to light the iris at a lower visible infra-red spectrum, rather than at the "heat detecting" spectrum.
- the spectrum tends to have a wavelength of approximately 880 nm, although other low visible spectra are considered applicable. Infra-red spectrum light has been found to be optimal, since flash lighting is distracting, if not painful.
- infra-red at low levels, standardizes facial fill so that all features of the face are equally exposed.
- a further advantage to low level infra-red is that, when combined with an appropriate filter, ambient light is cut out altogether. Consequently, image wash-out is avoided.
- the higher spectrum heat detecting level has been found to be less accurate in measuring biometric characteristics.
- an array of nine LED's 2106 is arranged in a square that is angled at 45° relative to the horizontal axis 2108 of plate 2104.
- Fig. 22 is a transparent perspective view of the microcamera device 2200 incorporating the aforedescribed LED array 2100.
- the device includes four basic elements: a micro-camera lens 2202, a microcamera circuit board 2204, the aforedescribed IR LED array 2100 and the LED circuit board 2206. These four elements are contained in a housing 2208.
- the housing 2208 is designed so that the lens and LED array are held in a potting material in order that the microcamera unit may be contained and sealed. As a consequence, the microcamera can be used in underwater applications without substantial water leakage to circuit boards 2204 and 2206.
- the potting has a sufficient circumferential clearance around the lens element 2202 in order to allow the lens to freely rotate.
- the top surface 2210 of the housing contains a recess, the top surface of which is co-planar with the top surfaces of the lens 2202 and LED's 2102.
- a pair of flanges are arranged parallel to each other and the longitudinal axis of housing 2208 so that a flat filter element (not shown) that is sized to fit between the flanges can slide across the top surface 2210 and be held in place by the flanges.
- the filter comprises a sheet of mirrored glass or plastic that passes near infra-red light. The filter is thus able to cut off the visible light spectrum.
- the mini camera housing includes a communications port 2220 which provides the image output to an external frame grabber.
- the port 2220 also connects to an external power supply for the mini-camera.
- the port 2220 may use any optimal wiring configuration, which in this embodiment is a 9 pin DIN connector that can connect to the I/O port of a PC.
- in another embodiment, the camera device 2200 has no potting.
- a wall 2222 would be placed between the lens 2202 and LED's 2102 to avoid direct reflection on the filter by the LED's.
- the camera lens 2202 and camera circuit 2204 are manufactured by Sony Corp. as a 1/3 CCD, 12 VDC Board Camera, Model No. UL-BC460/T8.
- a schematic circuit board layout 2300 is shown for the LED array 2100 (elements D1-D8).
- the lighting for diodes D1-D8 is continuous but has a quick "on-off" to cover a video view. This on-off cycle is approximately 1/30th of a second.
- the flash component of the video view period is 1/7000th of a second. Since the period of lighting is so brief, the flash and the lighting exposure render sensitivity to movement of the subject practically irrelevant.
- FIG. 24 is an illustration of the circuitry supporting the camera electronics 2400.
- a constant power source of about 100 milliamps is provided.
- a 12 volt power supply is used along with a 5V control power supply.
- a micro camera arrangement for image capture is created whereby the lighting is located below the camera. Moreover, the position of the lighting below the camera is critical, since a subject farther than 3 feet away from the lens will not be captured. Placement of the camera is also sensitive, since direct sunlight, incandescent, or halogen light will wash out features. Thus any direct light into the camera is problematic.
- FIGs 25a-c are different views (front, side, and perspective) of a housing designed to contain the camera-LED unit.
- a recess 2502 is shown in the unit, through which the entire housing 2200 can be inserted.
- the modular plug 2220 (Fig. 22) would also be connected through cable 2504 (Fig. 25c) to the PC I/O port (not shown) .
- the housing 2504 includes a stand 2506 which pivots about axle 2508 in the direction of arrow 2510.
- the camera can be supported in a substantially upright position (Fig. 25(c)) when placed with the stand in an extended position on a horizontal surface.
- Figs. 26a-c show a second embodiment of the mini-camera housing 2600.
- the housing includes a stand 2602, which in a closed position (as shown in Figs. 26a, 26b), completely covers the camera lens and LED's. When fully opened, however, which is accomplished by rotating the stand 2602 about axis
- Figs. 27a-c are views of a third embodiment of the minicamera housing 2700.
- a stand 2702 is partially cut away, to expose the camera lens only.
- the LED array and the camera 2202 are both exposed for use.
- Fig. 28 illustrates a second or alternative embodiment for the targeting process set forth in Fig. 6 of this invention.
- the advantage of the alternative technique is that it allows targeting without reference to fixed areas, by dynamically finding the image centers.
- the process 2800 begins at step 2802, where a desired area is isolated and captured by the camera. A histogram for this captured area is then computed by the computer 40. The computer then dynamically determines thresholds by calculating desired threshold barriers that are preprogrammed into the computer 40. For example, high and low rejects can be set above the lowest 5% and below the highest 5%, and low and high thresholds at the bottom 45% and the top 45%. As a consequence, when the thresholds are compared to the histogram at step 2808, a 10% middle portion of the histogram can be defined reflecting particular gray-scale characteristics.
- the below, between, and above threshold values are then binarized at binarization step 2810, as shown in Fig. 28(c).
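- a sketch of this dynamic thresholding in Python (numpy), using the 5% / 45% example figures from the text; keeping only the 10% middle band as the binarized output is an assumption about how the three bands are used:

```python
import numpy as np

def dynamic_binarize(area: np.ndarray):
    """Steps 2802-2810 sketch: place reject barriers at the 5th/95th
    percentiles of the captured area's gray-scale histogram, thresholds
    at the 45th/55th, and binarize the middle band."""
    lo_rej, lo_thr, hi_thr, hi_rej = np.percentile(area, [5, 45, 55, 95])
    middle = (area >= lo_thr) & (area <= hi_thr)     # 10% middle portion
    return middle.astype(np.uint8), (lo_rej, lo_thr, hi_thr, hi_rej)
```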
- the targeted area is then geometrically tested at step 2814 on two candidate points based on preset values which define appropriate quadrants.
- the points (x1, y1) and (x2, y2) can be isolated based on preset template values. For example, if iris targeting is desired, eye templates can be set so that
- an iteration loop can take three (3) images, binarize those values, average those binarized values, and store the averaged value in the portable memory.
- at steps 2822 and 2824, a high degree of accuracy is achieved dynamically.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Ophthalmology & Optometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
Abstract
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU65909/96A AU6590996A (en) | 1995-07-26 | 1996-07-19 | System for object verification and identification |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US50752895A (US 08/507,528) | 1995-07-26 | 1995-07-26 | |
| PCT/US1996/011996 | 1995-07-26 | 1996-07-19 | System for object verification and identification |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO1997005566A1 true WO1997005566A1 (fr) | 1997-02-13 |
| WO1997005566A9 WO1997005566A9 (fr) | 1997-04-03 |
Family
ID=24018987
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US1996/011996 WO1997005566A1 (fr) | System for object verification and identification | 1995-07-26 | 1996-07-19 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO1997005566A1 (fr) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2004040531A1 (fr) * | 2002-10-28 | 2004-05-13 | Morris Steffin | Method and apparatus for detection of drowsiness and for monitoring biological processes |
| EP1522962A4 (fr) * | 2002-07-16 | 2007-12-26 | Nec Corp | Procede et dispositif d'extraction caracteristique de motifs |
| ES2296443A1 (es) * | 2005-04-15 | 2008-04-16 | Universidad Rey Juan Carlos | Facial verification system |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5351303A (en) * | 1993-02-25 | 1994-09-27 | Willmore Michael R | Infra-red imaging and pattern recognition system |
| US5497430A (en) * | 1994-11-07 | 1996-03-05 | Physical Optics Corporation | Method and apparatus for image recognition using invariant feature signals |
-
1996
- 1996-07-19 WO PCT/US1996/011996 patent/WO1997005566A1/fr active Application Filing
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5351303A (en) * | 1993-02-25 | 1994-09-27 | Willmore Michael R | Infra-red imaging and pattern recognition system |
| US5497430A (en) * | 1994-11-07 | 1996-03-05 | Physical Optics Corporation | Method and apparatus for image recognition using invariant feature signals |
Non-Patent Citations (3)
| Title |
|---|
| CARNAHAN CONFERENCE ON SECURITY TECHNOLOGY, 1992, F. PROKOSKI et al., "Identification of Individuals by Means of Facial Thermography", pages 120-125. * |
| IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, Volume 12, No. 1, January 1990, M. KIRBY and L. SIROVICH, "Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces", pages 103-108. * |
| JOURNAL OF COGNITIVE NEUROSCIENCE, Volume 3, Number 1, 1991, M. TURK and A. PENTLAND, "Eigenfaces for Recognition", pages 71-86. * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1522962A4 (fr) * | 2002-07-16 | 2007-12-26 | Nec Corp | Procede et dispositif d'extraction caracteristique de motifs |
| US8116571B2 (en) | 2002-07-16 | 2012-02-14 | Nec Corporation | Pattern feature extraction via fourier amplitudes of a block image |
| WO2004040531A1 (fr) * | 2002-10-28 | 2004-05-13 | Morris Steffin | Method and apparatus for detection of drowsiness and for monitoring biological processes |
| US7336804B2 (en) | 2002-10-28 | 2008-02-26 | Morris Steffin | Method and apparatus for detection of drowsiness and quantitative control of biological processes |
| US7680302B2 (en) | 2002-10-28 | 2010-03-16 | Morris Steffin | Method and apparatus for detection of drowsiness and quantitative control of biological processes |
| ES2296443A1 (es) * | 2005-04-15 | 2008-04-16 | Universidad Rey Juan Carlos | Facial verification system |
| ES2296443B1 (es) * | 2005-04-15 | 2009-03-01 | Universidad Rey Juan Carlos | Facial verification system |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Hamouz et al. | Feature-based affine-invariant localization of faces | |
| US7715596B2 (en) | Method for controlling photographs of people | |
| Datta et al. | Face detection and recognition: theory and practice | |
| Beymer | Face recognition under varying pose | |
| JP3975248B2 (ja) | Biometric recognition using neural network classification | |
| Beymer | Pose-Invariant face recognition using real and virtual views | |
| JP5955133B2 (ja) | Face image authentication device | |
| Zhao | Robust image based 3D face recognition | |
| Michael et al. | A contactless biometric system using palm print and palm vein features | |
| KR100756047B1 (ko) | 생체 얼굴 인식 장치 및 그 방법 | |
| Bagherian et al. | Facial feature extraction for face recognition: a review | |
| Akarun et al. | 3D face recognition for biometric applications | |
| Tsai et al. | Face detection using eigenface and neural network | |
| Aydın et al. | Face recognition approach by using dlib and k-nn | |
| Hamouz et al. | Affine-invariant face detection and localization using gmm-based feature detector and enhanced appearance model | |
| WO2002009024A1 (fr) | Identity systems | |
| Abushanap et al. | A survey of human face recognition for partial face view | |
| CN118230395B (zh) | A face recognition method and device based on InsightFace and LIS file management | |
| WO1997005566A1 (fr) | System for object verification and identification | |
| WO1998003966A2 (fr) | System for verification and identification of objects | |
| WO1997005566A9 (fr) | System for object verification and identification | |
| Ibitayo et al. | Development Of Iris Based Age And Gender Detection System | |
| Mekami et al. | Towards a new approach for real time face detection and normalization | |
| EP1615160A2 (fr) | Apparatus and method for feature extraction for image recognition | |
| JP4606955B2 (ja) | Video recognition system, video recognition method, video correction system, and video correction method | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA |
|
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| COP | Corrected version of pamphlet |
Free format text: PAGES 1-38,DESCRIPTION,AND PAGES 39-40,CLAIMS,REPLACED BY NEW PAGES BEARING THE SAME NUMBER;PAGES 1/30-30/30,DRAWINGS,REPLACED BY NEW PAGES 1/26-26/26;DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE |
|
| CFP | Corrected version of a pamphlet front page | ||
| CR1 | Correction of entry in section i | ||
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: CA |