CN113792582A - Infrared light source driving method and device, computer equipment and storage medium
- Publication number
- CN113792582A (application number CN202110881958.6A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- visible light
- infrared light
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The application discloses an infrared light source driving method and apparatus, a computer device, and a storage medium, belonging to the technical field of light source driving. The method adjusts the target power of the infrared light source according to the size of the face image relative to the visible light image, which in effect adjusts the target power according to the distance between the face and the face recognition device. As a result, overexposure does not occur when the face is close to the face recognition device and underexposure does not occur when the face is far from it, which improves the accuracy of living body detection. In addition, the method reduces the target power of the infrared light source as the face approaches the face recognition device, so energy is not wasted.
Description
Technical Field
The present disclosure relates to the field of light source driving technologies, and in particular, to an infrared light source driving method and apparatus, a computer device, and a storage medium.
Background
Face recognition devices typically include a visible light camera, an infrared light source, and an infrared camera. The visible light camera captures visible light images, which are used for face detection. The infrared camera captures infrared light images, which are used for living body detection. The infrared light source provides exposure while an infrared light image is captured.
In the related art, the infrared light source operates at a constant power. In this case, overexposure may occur when the face is close to the face recognition device, and underexposure may occur when the face is far from the face recognition device, either of which reduces the accuracy of living body detection.
Disclosure of Invention
The application provides an infrared light source driving method, an infrared light source driving apparatus, a computer device, and a storage medium, which can improve the accuracy of living body detection. The technical solution is as follows:
in a first aspect, a method for driving an infrared light source is provided, and is applied to a face recognition device, and the method includes:
acquiring a visible light image obtained by shooting a human face;
carrying out face detection on the visible light image to obtain a face image;
determining the size of the face image relative to the visible light image;
determining the target power of an infrared light source according to the size of the face image relative to the size of the visible light image, wherein the larger the size of the face image relative to the size of the visible light image is, the smaller the target power is;
and driving the infrared light source to emit infrared light according to the target power so as to perform living body detection on the human face.
In the present application, after a visible light image is obtained by photographing a face, face detection is performed on the visible light image to obtain the face image within it, and the size of the face image relative to the visible light image is determined. The smaller the face image is relative to the visible light image, the farther the face is from the face recognition device; the larger the face image is relative to the visible light image, the closer the face is to the face recognition device. Accordingly, the target power of the infrared light source is determined from the size of the face image relative to the visible light image, so that the farther the face is from the face recognition device, the greater the target power of the infrared light source, and the closer the face is to the face recognition device, the smaller the target power. The infrared light source is then driven to emit infrared light at the target power, so that living body detection is performed on the face. Because this driving method adjusts the target power of the infrared light source according to the size of the face image relative to the visible light image, and therefore according to the distance between the face and the face recognition device, overexposure does not occur when the face is close to the face recognition device and underexposure does not occur when the face is far from it, which improves the accuracy of living body detection. Meanwhile, the method reduces the target power of the infrared light source as the face approaches the face recognition device, so energy is not wasted.
Optionally, the determining the size of the face image relative to the visible light image includes:
performing organ recognition on the face image to obtain a plurality of organ images, wherein the organ images comprise eyebrow images, eye images, nose images and mouth images;
generating a face frame containing the plurality of organ images in the face image;
and determining the size of the face image relative to the visible light image according to the number of the pixel points in the face frame.
Optionally, the generating a face frame containing the plurality of organ images in the face image includes:
determining a coordinate range of each of the plurality of organ images in the visible light image;
acquiring a maximum ordinate, a minimum ordinate, a maximum abscissa and a minimum abscissa in the coordinate range of the organ images;
and generating a rectangular face frame containing the plurality of organ images in the face image by taking the difference value between the maximum ordinate and the minimum ordinate as the height and taking the difference value between the maximum abscissa and the minimum abscissa as the width.
Optionally, the determining the target power of the infrared light source according to the size of the face image relative to the visible light image includes:
acquiring the duty ratio of a corresponding pulse width modulation signal from a preset corresponding relation according to the size of the face image relative to the visible light image, wherein the preset corresponding relation is the corresponding relation between the relative size of the face image and the duty ratio of the pulse width modulation signal, and the relative size of the face image in the preset corresponding relation is in a negative correlation relation with the duty ratio of the pulse width modulation signal;
and multiplying the duty ratio of the acquired pulse width modulation signal by the rated power of the infrared light source to obtain the target power of the infrared light source.
Optionally, the number of the face images is multiple;
the determining the size of the face image relative to the visible light image includes:
determining the size of each face image in the plurality of face images relative to the visible light image to obtain the relative sizes of the plurality of face images;
the determining the target power of the infrared light source according to the size of the face image relative to the visible light image comprises the following steps:
and determining the target power of the infrared light source according to the maximum value in the relative sizes of the face images.
In a second aspect, an infrared light source driving apparatus is provided, which is applied to a face recognition device, and the apparatus includes:
the acquisition module is used for acquiring a visible light image obtained by shooting a face;
the detection module is used for carrying out face detection on the visible light image to obtain a face image;
the first determining module is used for determining the size of the face image relative to the visible light image;
the second determining module is used for determining the target power of the infrared light source according to the size of the face image relative to the visible light image, and the larger the size of the face image relative to the visible light image is, the smaller the target power is;
and the driving module is used for driving the infrared light source to emit infrared light according to the target power so as to carry out living body detection on the human face.
Optionally, the first determining module is configured to:
performing organ recognition on the face image to obtain a plurality of organ images, wherein the organ images comprise eyebrow images, eye images, nose images and mouth images;
generating a face frame containing the plurality of organ images in the face image;
and determining the size of the face image relative to the visible light image according to the number of the pixel points in the face frame.
Optionally, the first determining module is configured to:
determining a coordinate range of each of the plurality of organ images in the visible light image;
acquiring a maximum ordinate, a minimum ordinate, a maximum abscissa and a minimum abscissa in the coordinate range of the organ images;
and generating a rectangular face frame containing the plurality of organ images in the face image by taking the difference value between the maximum ordinate and the minimum ordinate as the height and taking the difference value between the maximum abscissa and the minimum abscissa as the width.
Optionally, the second determining module is configured to:
acquiring the duty ratio of a corresponding pulse width modulation signal from a preset corresponding relation according to the size of the face image relative to the visible light image, wherein the preset corresponding relation is the corresponding relation between the relative size of the face image and the duty ratio of the pulse width modulation signal, and the relative size of the face image in the preset corresponding relation is in a negative correlation relation with the duty ratio of the pulse width modulation signal;
and multiplying the duty ratio of the acquired pulse width modulation signal by the rated power of the infrared light source to obtain the target power of the infrared light source.
Optionally, the number of the face images is multiple;
the first determination module is to: determining the size of each face image in the plurality of face images relative to the visible light image to obtain the relative sizes of the plurality of face images;
the second determination module is to: and determining the target power of the infrared light source according to the maximum value in the relative sizes of the face images.
In a third aspect, a computer device is provided, the computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to the first aspect.
It is understood that, for the beneficial effects of the second aspect, the third aspect and the fourth aspect, reference may be made to the description of the first aspect, and details are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a face recognition device according to an embodiment of the present application;
fig. 2 is a flowchart of a driving method of an infrared light source according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a first visible light image provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a first face image provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a second type of face image provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a third face image provided in the embodiment of the present application;
fig. 7 is a schematic distance range diagram of a face and a face recognition device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a second visible light image provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a third visible light image provided by the embodiment of the present application;
fig. 10 is a schematic diagram illustrating a correspondence between a size of a face frame and a distance according to an embodiment of the present application;
fig. 11 is a schematic diagram of a first preset correspondence provided in the embodiment of the present application;
fig. 12 is a schematic diagram of a second preset correspondence provided in the embodiment of the present application;
fig. 13 is a circuit configuration diagram of a face recognition device according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a fourth visible light image provided by the embodiment of the present application;
fig. 15 is a schematic structural diagram of an infrared light source driving apparatus according to an embodiment of the present disclosure;
fig. 16 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Wherein, the meanings represented by the reference numerals of the figures are respectively as follows:
10. a face recognition device;
110. a visible light camera;
120. an infrared light source;
130. an infrared camera;
140. a power source;
150. a direct current chopper;
22. a visible light image;
24. a face image;
242. a first face image;
244. a second face image;
26. a face frame;
262. a first face frame;
264. a second face frame.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference to "a plurality" in this application means two or more. In the description of the present application, "/" means "or" unless otherwise stated; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, for convenience of clearly describing the technical solutions of the present application, the terms "first", "second", and the like are used to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the terms "first", "second", and the like do not denote any quantity or order, nor do they denote relative importance.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application will be described.
Fig. 1 is a schematic structural diagram of a face recognition device according to an embodiment of the present application. As shown in fig. 1, the face recognition device 10 generally includes a visible light camera 110, an infrared light source 120, and an infrared light camera 130. The visible light camera 110 is configured to capture a visible light image, and the visible light image is used for face detection. The infrared camera 130 is used to take an infrared light image, which is used for the living body detection. The infrared light source 120 is used for exposure when an infrared light image is captured.
In the related art, the infrared light source 120 operates at a constant power. In this case, overexposure may occur when the face is close to the face recognition device 10, and underexposure may occur when the face is far from the face recognition device 10, either of which reduces the accuracy of living body detection.
Therefore, the embodiments of the present application provide an infrared light source driving method, an infrared light source driving apparatus, a computer device, and a storage medium, which can improve the accuracy of living body detection.
The following explains in detail the infrared light source driving method provided in the embodiments of the present application. The method is applied to the face recognition device 10. The face recognition device 10 may include a visible light camera 110, an infrared light source 120, an infrared camera 130, and a controller (not shown) connected to the visible light camera 110, the infrared light source 120, and the infrared camera 130. Here, the connection is an electrical connection, i.e., a wired connection that transmits electrical signals. The visible light camera 110 photographs a face or an object to obtain a visible light image 22, and the visible light image 22 can be used for face detection. The infrared camera 130 photographs a face or an object to obtain an infrared light image, and the infrared light image is used for living body detection. Generally, because the infrared light emitted by a human body is weak, the infrared light source 120 is required to provide exposure (supplementary light) when an infrared light image is captured for living body detection. The controller controls the operation of the visible light camera 110, the infrared light source 120, and the infrared camera 130.
Fig. 2 is a flowchart of a driving method of an infrared light source according to an embodiment of the present disclosure. Referring to fig. 2, the method includes the following steps S110 to S150.
S110, the controller acquires the visible light image 22 obtained by shooting the human face.
The controller can be a single-chip microcomputer with data processing and storage functions. The visible light camera 110 may be a CMOS (Complementary Metal Oxide Semiconductor) camera or a CCD (Charge Coupled Device) camera. Visible light refers to the portion of the electromagnetic spectrum that can be perceived by the human eye, typically with a wavelength between 380 nm (nanometers) and 780 nm. After the face recognition device 10 is triggered, the controller may control the visible light camera 110 to photograph the face, thereby obtaining the visible light image 22. The controller also acquires the visible light image 22 captured by the visible light camera 110 in order to process it. Fig. 3 is a schematic diagram of a visible light image 22 provided by an embodiment of the present application; as shown in fig. 3, the visible light image 22 is generally rectangular.
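As a non-limiting illustration of step S110, the following sketch captures one visible light frame. The patent does not prescribe any particular camera interface; OpenCV's `cv2.VideoCapture` and the device index 0 are assumptions made purely for illustration.

```python
import cv2  # OpenCV is assumed here for illustration only; the patent does not require it


def acquire_visible_light_image(device_index=0):
    """Step S110 (sketch): grab one frame from the visible light camera 110."""
    cap = cv2.VideoCapture(device_index)  # hypothetical camera index
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("failed to capture a visible light image")
        return frame  # H x W x 3 BGR array, e.g. 720 x 1280 pixels
    finally:
        cap.release()
```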
And S120, the controller performs face detection on the visible light image 22 to obtain a face image 24.
As shown in fig. 3, after the controller acquires the visible light image 22, the controller may perform face detection on the visible light image 22, so as to obtain a face image 24 in the visible light image 22.
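The patent does not specify which face detection algorithm the controller uses in step S120. The sketch below uses a Haar cascade shipped with OpenCV as one possible, assumed implementation; it returns the bounding box of each detected face image 24.

```python
import cv2

# Assumed detector: a stock OpenCV Haar cascade (not specified by the patent).
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(visible_light_image):
    """Step S120 (sketch): return face bounding boxes as (x, y, w, h) tuples."""
    gray = cv2.cvtColor(visible_light_image, cv2.COLOR_BGR2GRAY)
    return _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```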
S130, the controller determines the size of the face image 24 relative to the visible light image 22.
After the controller performs face detection on the visible light image 22 and obtains the face image 24 within it, the controller determines the size of the face image 24 relative to the visible light image 22. In some embodiments, when the visible light image 22 has a preset size (for example, one inch or two inches), the controller may determine the size of the face image 24 relative to the visible light image 22 from the area of the face image 24 in the visible light image 22. In other embodiments, the controller may instead obtain the size of the face image 24 relative to the visible light image 22 from the number of pixel points in the face image 24. Specifically, the visible light camera 110 is composed of a plurality of photosensitive elements, so the visible light image 22 it captures is composed of a plurality of pixels of different brightness, which together form the image. For example, in the embodiment shown in fig. 3, the visible light image 22 contains 1280 × 720 pixels.
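A minimal sketch of the pixel-count embodiment of step S130 described above, assuming the face image 24 is represented by the bounding box returned by the face detector sketched earlier; the organ-based refinement of steps S132 to S136 follows below.

```python
def face_relative_size(face_box, visible_light_image):
    """Step S130 (sketch): size of the face image 24 relative to the visible
    light image 22, expressed as a ratio of pixel counts."""
    x, y, w, h = face_box
    img_h, img_w = visible_light_image.shape[:2]  # e.g. 720 x 1280
    return (w * h) / float(img_w * img_h)         # e.g. 0.2 for 20%
```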
In some embodiments, when the controller "obtains the size of the face image 24 relative to the visible light image 22 according to the number of the pixel points in the face image 24", the step S130 may include the following steps S132 to S136.
S132, the controller performs organ recognition on the face image 24 to obtain a plurality of organ images, including a brow image, an eye image, a nose image, and a mouth image.
After the controller obtains the face image 24 in the visible light image 22, the controller performs organ recognition on the face image 24 to obtain a plurality of organ images. In some embodiments, the plurality of organ images includes a brow image, an eye image, a nose image, a mouth image (not shown). Wherein, the eyebrow image refers to the image corresponding to two eyebrows on the face; the eye image refers to images corresponding to two eyes on the human face; the nose image refers to an image corresponding to a nose on a human face; the mouth image refers to an image corresponding to the mouth (including the tongue) on the face of a person. In other embodiments, the plurality of organ images may further include ear images, i.e., images corresponding to two ears on the face of a person.
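The patent likewise does not name an organ recognition algorithm for step S132. The placeholder below only fixes the data shape assumed by the later sketches: for each organ it returns the (x, y) pixel coordinates that the organ image occupies in the visible light image 22. Its name and signature are hypothetical.

```python
def detect_organ_regions(visible_light_image, face_box):
    """Step S132 (sketch): hypothetical organ recognizer.

    Returns a dict mapping organ names ("eyebrows", "eyes", "nose", "mouth",
    and optionally "ears") to lists of (x, y) pixel coordinates in the visible
    light image 22. In practice a facial landmark model could supply these
    points; the patent does not prescribe one.
    """
    raise NotImplementedError("placeholder for an organ/landmark detector")
```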
S134, the controller generates a face frame 26 containing a plurality of organ images in the face image 24.
The "face frame 26 containing a plurality of organ images" means the face frame 26 containing all the organ images in the "plurality of organ images" obtained in step S132. Fig. 4 is a schematic diagram of a face image 24 according to an embodiment of the present application. As shown in fig. 4, when the plurality of organ images obtained in step S132 include a brow image, an eye image, a nose image, and a mouth image, the face frame 26 also contains the brow image, the eye image, the nose image, and the mouth image. In other words, each of the plurality of organ images obtained in step S132 is located within the face frame 26.
In some embodiments, step S134 may specifically include steps S1342 to S1346 as follows.
S1342, the controller determines a coordinate range of each of the plurality of organ images in the visible light image 22.
As described above, the visible light image 22 is composed of a plurality of pixel points. After the controller sequentially obtains the visible light image 22, the face image 24, and the plurality of organ images, a rectangular coordinate system can be established with any pixel point in the visible light image 22 as the origin, taking the distance between two adjacent pixels as one unit. Since each of the plurality of organ images is also composed of a plurality of pixel points, once the rectangular coordinate system is established the controller can determine the coordinate range of each organ image in the visible light image 22.
S1344, the controller acquires a maximum ordinate, a minimum ordinate, a maximum abscissa, and a minimum abscissa in the coordinate range of the plurality of organ images.
In general, in the embodiment shown in fig. 4, the maximum ordinate in the coordinate range of the plurality of organ images may be the ordinate of the highest point of the eyebrow image in the paper surface direction; the minimum ordinate may be an ordinate of the lowest point of the mouth image in the paper surface direction; the maximum abscissa may be the abscissa of the rightmost point of the eyebrow image in the paper surface direction; the minimum abscissa may be the abscissa of the leftmost point of the eyebrow image in the paper direction.
S1346, the controller generates a rectangular face frame 26 containing a plurality of organ images in the face image 24 by using the difference between the maximum ordinate and the minimum ordinate as the height and the difference between the maximum abscissa and the minimum abscissa as the width.
The human face frame 26 is generated by taking the difference value between the maximum vertical coordinate and the minimum vertical coordinate as the height, so that the multiple organ images can be completely contained in the range of the human face frame 26 along the longitudinal direction of the paper surface. The human face frame 26 is generated by taking the difference between the maximum abscissa and the minimum abscissa as the width, so that the plurality of organ images can be completely contained in the range of the human face frame 26 along the transverse direction of the paper. In this way, a rectangular face frame 26 containing a plurality of organ images is generated in the face image 24.
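Steps S1342 to S1346 amount to taking the extreme coordinates of all organ images and using their differences as the width and height of the face frame 26. A minimal sketch, assuming the organ coordinates come from the hypothetical detector above:

```python
def generate_face_frame(organ_regions):
    """Steps S1342-S1346 (sketch): build the rectangular face frame 26.

    organ_regions: dict of organ name -> list of (x, y) pixel coordinates,
    as returned by the hypothetical detect_organ_regions() above.
    Returns (x_min, y_min, width, height) of the face frame 26.
    """
    xs = [x for points in organ_regions.values() for (x, _) in points]
    ys = [y for points in organ_regions.values() for (_, y) in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    width = x_max - x_min    # maximum abscissa minus minimum abscissa
    height = y_max - y_min   # maximum ordinate minus minimum ordinate
    return x_min, y_min, width, height
```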
It should be noted that in the embodiment shown in fig. 4 and described above, the face image 24 in the visible light image 22 is forward. In other embodiments, the face image 24 in the visible light image 22 may be tilted or inverted in the direction of the paper. In one embodiment, as shown in fig. 5, when the face image 24 in the visible light image 22 is tilted along the paper surface, the controller may still generate the rectangular face frame 26 according to the above steps S1342 to S1346. In another embodiment, as shown in fig. 6, when the face image 24 in the visible light image 22 is tilted along the paper surface direction, the controller may first calculate the tilt angle of the face image 24, and then combine the tilt angle and the above steps S1342 to S1346 to generate the rectangular face frame 26 having the same tilt direction as the face.
S136, the controller determines the size of the face image 24 relative to the visible light image 22 according to the number of the pixel points in the face frame 26.
Since the visible light image 22 is composed of a plurality of pixel points, the number of the pixel points in the face frame 26 can be obtained after the face frame 26 is generated in the face image 24. Thus, the size of the face image 24 relative to the visible light image 22 can be determined according to the number of the pixels in the face frame 26. In some embodiments, the visible light camera 110 of the face recognition device 10 is a specific camera with fixed pixels, i.e., the number of pixels in the visible light image 22 is fixed, and at this time, the size of the face image 24 relative to the visible light image 22 can be directly determined according to the number of pixels in the face frame 26. In other embodiments, the visible light camera 110 of the face recognition device 10 can be switched among multiple cameras, i.e., the number of pixels of the visible light image 22 can be varied. At this time, the size of the face image 24 relative to the visible light image 22 can be determined according to the ratio of the number of the pixel points in the face frame 26 to the number of the pixel points in the visible light image 22.
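A sketch of step S136, covering both embodiments described above: the raw pixel count of the face frame 26 when the camera (and hence the pixel count of the visible light image 22) is fixed, and the ratio of pixel counts otherwise.

```python
def frame_relative_size(face_frame, visible_light_image=None):
    """Step S136 (sketch): relative size from the pixel count of face frame 26."""
    _, _, width, height = face_frame
    frame_pixels = width * height
    if visible_light_image is None:
        return frame_pixels                        # fixed-camera embodiment
    img_h, img_w = visible_light_image.shape[:2]
    return frame_pixels / float(img_w * img_h)     # ratio embodiment
```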
Fig. 7 is a schematic diagram of the distance range between a face and the face recognition device 10 according to an embodiment of the present application. As shown in fig. 7, assume that the visible light image 22 contains 1280 × 720 pixels, and that the farthest recognition distance of the face recognition device 10 is 120 cm (centimeters) and the closest recognition distance is 40 cm. When the distance between the face and the face recognition device 10 is greater than 120 cm, the face recognition device 10 cannot capture the face image 24 clearly; when the distance is less than 40 cm, the face recognition device 10 cannot capture the complete face image 24. When the distance between the face and the face recognition device 10 is equal to 120 cm, the visible light image 22 may be as shown in fig. 8 or fig. 9. In this case the position of the face image 24 in the visible light image 22 varies with the position of the face relative to the visible light camera 110, the face frame 26 obtained in step S134 is at its smallest, and its width may be 40 pixels. When the distance between the face and the face recognition device 10 is equal to 40 cm, the face frame 26 obtained in step S134 is at its largest, and its width may be 720 pixels. Fig. 10 is a schematic diagram of the correspondence between the size of the face frame 26 and the distance according to an embodiment of the present application: the farther the face is from the face recognition device 10, the smaller the face frame 26, i.e., the smaller the face image 24 relative to the visible light image 22; the closer the face is to the face recognition device 10, the larger the face frame 26, i.e., the larger the face image 24 relative to the visible light image 22.
S140, the controller determines the target power of the infrared light source 120 according to the size of the face image 24 relative to the visible light image 22, and the larger the size of the face image 24 relative to the visible light image 22 is, the smaller the target power is.
The target power of the infrared light source 120 is determined according to the size of the face image 24 relative to the visible light image 22. The smaller the face image 24 is relative to the visible light image 22, the farther the face is from the face recognition device 10; to expose the face sufficiently, the power of the infrared light source 120 should be higher, so the controller sets a larger target power for the infrared light source 120. The larger the face image 24 is relative to the visible light image 22, the closer the face is to the face recognition device 10; to avoid overexposing the face, the power of the infrared light source 120 should be lower, so the controller sets a smaller target power for the infrared light source 120.
In some embodiments, step S140 may specifically include the following steps S142 and S144.
And S142, the controller acquires the duty ratio of the corresponding pulse width modulation signal from a preset corresponding relation according to the size of the face image 24 relative to the visible light image 22, wherein the preset corresponding relation is the corresponding relation between the relative size of the face image 24 and the duty ratio of the pulse width modulation signal, and the relative size of the face image 24 in the preset corresponding relation is in a negative correlation with the duty ratio of the pulse width modulation signal.
As described above, the size of the face image 24 relative to the visible light image 22 may be represented by a percentage, such as 20%, 40%, 60%, or may be directly represented by the number of pixels in the face frame 26. The duty cycle of a Pulse Width Modulation (PWM) signal is a percentage. The preset correspondence is a correspondence between a relative size of the face image 24 and a duty ratio of the pulse width modulation signal, where the relative size of the face image 24 refers to a size of the face image 24 relative to the visible light image 22.
Two implementation manners of presetting the corresponding relationship are explained in detail below.
Fig. 11 is a schematic diagram of a preset correspondence provided in an embodiment of the present application. As shown in fig. 11, the relative size of the face image 24 in this preset correspondence is negatively correlated with the duty ratio of the pulse width modulation signal. In this case, the controller may obtain the size of the face image 24 relative to the visible light image 22 from the area of the face image 24 in the visible light image 22 when the visible light image 22 has a preset size; alternatively, the controller may obtain it from the ratio of the number of pixel points in the face frame 26 to the number of pixel points in the visible light image 22.
Fig. 12 is a schematic diagram of another preset correspondence provided in an embodiment of the present application. As shown in fig. 12, the relative size of the face image 24 in this preset correspondence is also negatively correlated with the duty ratio of the pulse width modulation signal. In this case, when determining the size of the face image 24 relative to the visible light image 22, the controller may determine it directly from the number of pixel points in the face frame 26.
And S144, multiplying the duty ratio of the acquired pulse width modulation signal by the rated power of the infrared light source 120 by the controller to obtain the target power of the infrared light source 120.
The power rating of infrared light source 120 refers to the maximum power at which infrared light source 120 operates. After the controller obtains the duty ratio of the pulse width modulation signal, the duty ratio of the pulse width modulation signal may be multiplied by the rated power of the infrared light source 120 to obtain the target power of the infrared light source 120.
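The preset correspondence itself is device-specific (Figs. 11 and 12 only show that it is negatively correlated), so the table below uses invented values purely to make steps S142 and S144 concrete; the 2 W rated power is likewise an assumption.

```python
# Example preset correspondence (invented values for illustration only):
# relative size of the face image 24  ->  duty ratio of the PWM signal.
PRESET_CORRESPONDENCE = [
    (0.05, 1.00),   # small face image (far face)  -> high duty ratio
    (0.20, 0.70),
    (0.40, 0.40),
    (0.60, 0.20),   # large face image (near face) -> low duty ratio
]

RATED_POWER_W = 2.0  # assumed rated power of infrared light source 120


def duty_cycle_for(relative_size):
    """Step S142 (sketch): look up the duty ratio for a given relative size."""
    for threshold, duty in PRESET_CORRESPONDENCE:
        if relative_size <= threshold:
            return duty
    return PRESET_CORRESPONDENCE[-1][1]  # clamp for very large face images


def target_power_for(relative_size):
    """Step S144 (sketch): target power = duty ratio x rated power."""
    return duty_cycle_for(relative_size) * RATED_POWER_W
```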
S150, the controller drives the infrared light source 120 to emit infrared light according to the target power so as to perform living body detection on the human face.
The controller drives the infrared light source 120 to emit infrared light according to the target power to supplement light to the face, so that the face recognition device 10 performs living body detection on the face.
Fig. 13 is a circuit configuration diagram of a face recognition device 10 according to an embodiment of the present application. As shown in fig. 13, the face recognition device 10 may include a power supply 140, a DC chopper 150, and an infrared light source 120. The DC chopper 150 has a first input terminal, a second input terminal, a first output terminal, and a second output terminal. The infrared light source 120 is connected between the first output terminal and the second output terminal of the DC chopper 150. The power supply 140 is connected to the first input terminal of the DC chopper 150, and a controller (not shown) is connected to the second input terminal of the DC chopper 150 to input a PWM (Pulse Width Modulation) signal to it. When the face recognition device 10 operates, the power supply 140 supplies power to the infrared light source 120, and the controller adjusts the light-emitting power of the infrared light source 120 by outputting the pulse width modulation signal. In this case, steps S144 and S150 described above can be directly combined as: the controller drives the infrared light source 120 to emit light according to the duty ratio of the acquired pulse width modulation signal, so as to perform living body detection on the face. In this way, the purpose of determining the target power of the infrared light source 120 according to the size of the face image 24 relative to the visible light image 22 and driving the infrared light source 120 to emit infrared light at the target power is achieved.
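The sketch below shows how the controller might hand the duty ratio directly to the DC chopper 150, merging steps S144 and S150 as just described. `set_pwm_duty_cycle` is a hypothetical hardware-abstraction hook, since the actual PWM peripheral depends on the controller used.

```python
def set_pwm_duty_cycle(duty):
    """Hypothetical hook: write a duty ratio in [0.0, 1.0] to the PWM output
    feeding the second input terminal of DC chopper 150. The real
    implementation depends on the controller's PWM peripheral."""
    raise NotImplementedError


def drive_infrared_light_source(relative_size):
    """Steps S142 + S150 combined (sketch): drive infrared light source 120 by
    outputting the duty ratio obtained from the preset correspondence."""
    set_pwm_duty_cycle(duty_cycle_for(relative_size))
```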
In the embodiment of the present application, the infrared light source driving method may adjust the target power of the infrared light source 120 according to the size of the face image 24 relative to the visible light image 22, so as to adjust the target power of the infrared light source 120 according to the distance between the face and the face recognition device 10, so that an overexposure phenomenon will not occur when the face is close to the face recognition device 10, and an underexposure phenomenon will not occur when the face is far from the face recognition device 10, thereby improving the accuracy of living body detection. Meanwhile, the infrared light source driving method adjusts the target power of the infrared light source 120 to be smaller when the face is closer to the face recognition device 10, and energy waste is not caused.
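Tying the step sketches above together, one pass of the method of fig. 2 could look as follows; every helper name comes from the earlier sketches (and their stated assumptions), not from the patent itself.

```python
def infrared_drive_once():
    """One pass of steps S110-S150 (sketch) for a single face."""
    image = acquire_visible_light_image()                  # S110
    faces = detect_faces(image)                            # S120
    if len(faces) == 0:
        return                                             # no face detected
    organ_regions = detect_organ_regions(image, faces[0])  # S132 (hypothetical)
    frame = generate_face_frame(organ_regions)             # S134
    rel_size = frame_relative_size(frame, image)           # S136 / S130
    drive_infrared_light_source(rel_size)                  # S142 + S150
```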
The face recognition device 10 typically performs face detection based on human face components. The infrared light source driving method generates a face frame 26 containing a plurality of organ images in the face image 24, and determines the size of the face image 24 relative to the visible light image 22 according to the number of pixel points in the face frame 26. In this way, human tissue such as hair that does not contribute to face detection does not affect the target power of the infrared light source 120, and the accuracy of the living body detection can be further improved. When the face frame 26 is generated, the face frame 26 is generated according to the maximum ordinate, the minimum ordinate, the maximum abscissa, and the minimum abscissa in the coordinate ranges of the plurality of organ images, so that it is possible to prevent human tissues having no effect on face detection from affecting the target power of the infrared light source 120, and at the same time, since the organ image features are obvious, it is possible to improve the generation accuracy of the face frame 26, thereby improving the accuracy of living body detection.
Fig. 14 is a schematic diagram of another visible light image 22 provided by the embodiment of the present application. As shown in fig. 14, in some embodiments, the number of face images 24 in the visible light image 22 is multiple. In this case, step S130 may specifically be: the controller determines the relative size of each of the plurality of facial images 24 to the visible light image 22 to obtain the relative sizes of the plurality of facial images 24. Step S140 may specifically be: the controller determines the target power of the infrared light source 120 based on the maximum of the relative sizes of the plurality of face images 24.
That is, when the visible light image 22 captured by the visible light camera 110 includes a plurality of face images 24, the controller determines the size of each face image 24 relative to the visible light image 22, thereby obtaining the relative sizes of the plurality of face images 24. The controller then determines the target power of the infrared light source 120 according to the maximum of these relative sizes, i.e., the maximum of the sizes of the individual face images 24 relative to the visible light image 22. For example, the embodiment shown in fig. 14 includes a first face image 242, a second face image 244, a first face frame 262, and a second face frame 264. The number of pixel points in the first face frame 262 represents the size of the first face image 242 relative to the visible light image 22, and the number of pixel points in the second face frame 264 represents the size of the second face image 244 relative to the visible light image 22. As can be seen, the size of the first face image 242 relative to the visible light image 22 is larger than that of the second face image 244. In this case, when executing step S140, the target power of the infrared light source 120 is determined according to the size of the first face image 242 relative to the visible light image 22. Thus, when multiple faces are in front of the face recognition device 10, the face closest to the device is used as the light supplement target for exposure, which improves the accuracy of living body detection while reducing the actual power of the infrared light source 120.
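A short sketch of the multi-face case: the relative size of each face frame is computed as above, and the maximum, i.e. the face closest to face recognition device 10, determines the drive power.

```python
def max_relative_size(face_frames, visible_light_image):
    """Multi-face case (sketch): take the largest relative size, which
    corresponds to the face closest to face recognition device 10."""
    return max(frame_relative_size(frame, visible_light_image)
               for frame in face_frames)
```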
In the embodiment of the present application, after the visible light image 22 is obtained by shooting a human face, human face detection is performed on the visible light image 22, so as to obtain a human face image 24 in the visible light image 22, and the size of the human face image 24 relative to the visible light image 22 is determined. The smaller the face image 24 is relative to the visible light image 22, the farther the face is from the face recognition device 10; the larger the face image 24 relative to the visible light image 22, the closer the face is to the face recognition device 10. Accordingly, the target power of the infrared light source 120 is determined according to the size of the face image 24 relative to the visible light image 22, so that the farther the face is away from the face recognition device 10, the greater the target power of the infrared light source 120; the closer the face is to the face recognition device 10, the lower the target power. Then, the infrared light source 120 is driven to emit infrared light according to the target power, so that the living body of the human face is detected. According to the infrared light source driving method, the target power of the infrared light source 120 can be adjusted according to the size of the face image 24 relative to the size of the visible light image 22, so that the target power of the infrared light source 120 can be adjusted according to the distance between the face and the face recognition device 10, therefore, the overexposure phenomenon cannot occur when the face is close to the face recognition device 10, the underexposure phenomenon cannot occur when the face is far away from the face recognition device 10, and the accuracy of in vivo detection is improved. Meanwhile, the infrared light source driving method adjusts the target power of the infrared light source 120 to be smaller when the face is closer to the face recognition device 10, and energy waste is not caused.
The face recognition device 10 typically performs face detection based on human face components. The infrared light source driving method generates a face frame 26 containing a plurality of organ images in the face image 24, and determines the size of the face image 24 relative to the visible light image 22 according to the number of pixel points in the face frame 26. In this way, human tissue such as hair that does not contribute to face detection does not affect the target power of the infrared light source 120, and the accuracy of the living body detection can be further improved. When the face frame 26 is generated, the face frame 26 is generated according to the maximum ordinate, the minimum ordinate, the maximum abscissa, and the minimum abscissa in the coordinate ranges of the plurality of organ images, so that it is possible to prevent human tissues having no effect on face detection from affecting the target power of the infrared light source 120, and at the same time, since the organ image features are obvious, it is possible to improve the generation accuracy of the face frame 26, thereby improving the accuracy of living body detection. When the number of the face images 24 in the visible light image 22 is multiple, the size of each face image 24 in the multiple face images 24 relative to the visible light image 22 is determined, the target power of the infrared light source 120 is determined according to the maximum value in the relative sizes of the multiple face images 24, and the face closest to the face recognition device 10 can be used as a light supplement object for exposure, so that the accuracy of living body detection is improved, and the actual power of the infrared light source 120 is reduced.
Fig. 15 is a schematic structural diagram of an infrared light source driving apparatus 300 according to an embodiment of the present application. Referring to fig. 15, an infrared light source driving apparatus 300 is applied to a face recognition device 10, and includes: an acquisition module 301, a detection module 302, a first determination module 303, a second determination module 304, and a driving module 305.
An obtaining module 301, configured to obtain a visible light image obtained by shooting a face.
The detection module 302 is configured to perform face detection on the visible light image to obtain a face image.
The first determining module 303 is configured to determine a size of the face image relative to the visible light image.
The second determining module 304 is configured to determine a target power of the infrared light source according to a size of the face image relative to the visible light image, where the larger the size of the face image relative to the visible light image is, the smaller the target power is.
And a driving module 305, configured to drive the infrared light source to emit infrared light according to the target power, so as to perform living body detection on the human face.
Optionally, the first determining module 303 is configured to:
carrying out organ identification on the face image to obtain a plurality of organ images, wherein the organ images comprise eyebrow images, eye images, nose images and mouth images;
generating a face frame containing a plurality of organ images in the face image;
and determining the size of the face image relative to the visible light image according to the number of the pixel points in the face frame.
Optionally, the first determining module 303 is configured to:
determining a coordinate range of each organ image in the plurality of organ images in the visible light image;
acquiring a maximum ordinate, a minimum ordinate, a maximum abscissa and a minimum abscissa in a coordinate range of a plurality of organ images;
and generating a rectangular face frame containing a plurality of organ images in the face image by taking the difference value between the maximum ordinate and the minimum ordinate as the height and the difference value between the maximum abscissa and the minimum abscissa as the width.
Optionally, the second determining module 304 is configured to:
acquiring the duty ratio of a corresponding pulse width modulation signal from a preset corresponding relation according to the size of the face image relative to the visible light image, wherein the preset corresponding relation is the corresponding relation between the relative size of the face image and the duty ratio of the pulse width modulation signal, and the relative size of the face image in the preset corresponding relation is in a negative correlation relation with the duty ratio of the pulse width modulation signal;
and multiplying the duty ratio of the acquired pulse width modulation signal by the rated power of the infrared light source to obtain the target power of the infrared light source.
Optionally the number of face images is multiple.
The first determining module 303 is configured to: and determining the size of each face image in the plurality of face images relative to the visible light image to obtain the relative sizes of the plurality of face images.
The second determination module 304 is configured to: and determining the target power of the infrared light source according to the maximum value in the relative sizes of the plurality of face images.
In the embodiment of the application, after the face is shot to obtain the visible light image, the face detection is carried out on the visible light image, so that the face image in the visible light image is obtained, and the size of the face image relative to the visible light image is determined. The smaller the face image is relative to the visible light image, the farther the face is away from the face recognition equipment; the larger the face image is relative to the visible light image, the closer the face is to the face recognition device is. Accordingly, the target power of the infrared light source is determined according to the size of the face image relative to the size of the visible light image, so that the farther the face is away from the face recognition device, the larger the target power of the infrared light source is; the closer the face is to the face recognition device, the smaller the target power. And then, driving the infrared light source to emit infrared light according to the target power, thereby carrying out living body detection on the human face. According to the infrared light source driving method, the target power of the infrared light source can be adjusted according to the size of the face image relative to the size of the visible light image, and therefore the target power of the infrared light source is adjusted according to the distance between the face and the face recognition device, so that the over-exposure phenomenon cannot occur when the face is close to the face recognition device, the under-exposure phenomenon cannot occur when the face is far away from the face recognition device, and therefore the accuracy of in-vivo detection is improved. Meanwhile, the infrared light source driving method adjusts the target power of the infrared light source to be smaller when the face is closer to the face recognition device, and energy waste is avoided.
Face recognition devices typically perform face detection based on human face organs. The infrared light source driving method generates a face frame containing a plurality of organ images in the face image, and determines the size of the face image relative to the visible light image according to the number of pixel points in the face frame. Therefore, human tissues such as hair and the like which do not act on the face detection do not influence the target power of the infrared light source, and the accuracy of the living body detection can be further improved. When the face frame is generated, the face frame is generated according to the maximum ordinate, the minimum ordinate, the maximum abscissa and the minimum abscissa in the coordinate range of the multiple organ images, the situation that human tissues which do not act on face detection cannot influence the target power of the infrared light source can be avoided, and meanwhile, due to the obvious organ image characteristics, the generation accuracy of the face frame can be improved, so that the accuracy of in-vivo detection is improved. When the number of the face images in the visible light image is multiple, the size of each face image in the multiple face images relative to the visible light image is determined, the target power of the infrared light source is determined according to the maximum value in the relative sizes of the multiple face images, and the face closest to the face recognition device can be used as a light supplement object for exposure, so that the accuracy of living body detection is improved, and the actual power of the infrared light source is reduced.
It should be noted that: the infrared light source driving device 300 provided in the above embodiment is only illustrated by dividing the above functional modules when driving the infrared light source, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
Each functional unit and module in the above embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present application.
The infrared light source driving apparatus 300 provided in the above embodiment and the infrared light source driving method embodiment belong to the same concept, and for specific working processes of units and modules and technical effects brought by the units and modules in the above embodiments, reference may be made to the method embodiment section, and details are not described here.
Fig. 16 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 16, the computer apparatus 400 includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and operable on the processor 401, the steps in the infrared light source driving method in the above-described embodiments being implemented when the processor 401 executes the computer program 403.
In some embodiments, the memory 402 may be an internal storage unit of the computer device 400, such as a hard disk or internal memory of the computer device 400. In other embodiments, the memory 402 may also be an external storage device of the computer device 400, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 400. Further, the memory 402 may include both an internal storage unit and an external storage device of the computer device 400. The memory 402 is used for storing the operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program 403. The memory 402 may also be used to temporarily store data that has been output or is to be output.
An embodiment of the present application further provides a computer device, which includes at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; the processor implements the steps of any of the method embodiments described above when executing the computer program.
The embodiments of the present application also provide a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps of the above method embodiments are implemented.
The embodiments of the present application further provide a computer program product which, when run on a computer, causes the computer to perform the steps of the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above method embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like. The computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions, which may be stored in the computer-readable storage medium described above.
Each of the above embodiments is described with its own emphasis; for parts that are not described or illustrated in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application and shall be included within the protection scope of the present application.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110881958.6A CN113792582A (en) | 2021-08-02 | 2021-08-02 | Infrared light source driving method and device, computer equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN113792582A (en) | 2021-12-14 |
Family
ID=78877082
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110881958.6A Pending CN113792582A (en) | 2021-08-02 | 2021-08-02 | Infrared light source driving method and device, computer equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113792582A (en) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108243311A (en) * | 2016-12-27 | 2018-07-03 | 杭州萤石网络有限公司 | The method of adjustment and picture pick-up device of infrared lamp power |
| CN110310963A (en) * | 2018-03-27 | 2019-10-08 | 恒景科技股份有限公司 | System for adjusting light source power |
| CN108769509A (en) * | 2018-04-28 | 2018-11-06 | Oppo广东移动通信有限公司 | Method, device, electronic device and storage medium for controlling camera |
| US20200250448A1 (en) * | 2019-02-06 | 2020-08-06 | Alibaba Group Holding Limited | Spoof detection using dual-band near-infrared (nir) imaging |
| CN110956114A (en) * | 2019-11-25 | 2020-04-03 | 展讯通信(上海)有限公司 | Face living body detection method, device, detection system and storage medium |
Non-Patent Citations (1)
| Title |
|---|
| YUYANG GU ET AL: "High-sensitivity imaging of time-domain near-infrared light transducer", NATURE PHOTONICS, 20 May 2019 (2019-05-20), pages 1-10 * |
Similar Documents
| Publication | Title |
|---|---|
| US11948282B2 | Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data |
| US11704775B2 | Bright spot removal using a neural network |
| CN110324521B | Method, device, electronic device and storage medium for controlling camera |
| US9712743B2 | Digital image processing using face detection and skin tone information |
| US10304164B2 | Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data |
| JP5981053B2 | Imaging device with scene-adaptive automatic exposure compensation |
| CN111385482B | Image processing apparatus, control method thereof, and machine-readable medium |
| US8055090B2 | Digital image processing using face detection information |
| US8498446B2 | Method of improving orientation and color balance of digital images using face detection information |
| US10855885B2 | Image processing apparatus, method therefor, and storage medium |
| CN114827487B | High dynamic range image synthesis method and electronic device |
| KR20160090379A | Photographing method for dual-camera device and dual-camera device |
| CN113132613A | Camera light supplementing device, electronic equipment and light supplementing method |
| CN111601373B | Backlight brightness control method, device, mobile terminal and storage medium |
| US12114079B2 | Electronic device for adjusting exposure value of image including face |
| CN108737728A | Image shooting method, terminal and computer storage medium |
| JPWO2019078310A1 | Face three-dimensional shape estimation device, face three-dimensional shape estimation method, and face three-dimensional shape estimation program |
| CN111246093A | Image processing method, image processing device, storage medium and electronic equipment |
| US11710343B2 | Image processing device, image processing method, and storage medium for correcting brightness |
| US10535122B2 | Composite image for flash artifact removal |
| US11120533B2 | Information processing method and information processing apparatus |
| CN113792582A | Infrared light source driving method and device, computer equipment and storage medium |
| US11509797B2 | Image processing apparatus, image processing method, and storage medium |
| US11405562B2 | Image processing apparatus, method of controlling the same, image capturing apparatus, and storage medium |
| CN110245618B | 3D recognition device and method |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |